Today I increment the integer with which most humans on this planet measure their age. When I was a young whipper-snapper... two decades ago, I was told that the reason I couldn't use a calculator while doing my arithmetic homework was that I wouldn't always carry a calculator with me. It made sense at the time, but come the two thousand and seventh year of our Lord, people started carrying pocket computers with internet access everywhere they went. To this day, I still perform most of my calculations involving just the four basic arithmetic operations mentally, even though I'm bad at keeping track of digits and would save myself a good amount of effort by just keying them in.

Years roll by, and increases in computational speed and the availability of cheap data storage allow megacorporations to train large language models which can accurately mimic human language with information that isn't entirely detached from reality. Thus the equivalent of the four-function calculator, but for every academic field, is unleashed upon the world. Interestingly, even at this early state of development, these "generative AIs" are quite proficient at writing short code snippets.

As academic institutions around the world struggle to come to terms with what might be a paradigm shift in humanity's future, the instructors of my introductory software engineering course have decided to roll with the punches of the oncoming tsunami, allowing full use of any generative AI for all the coursework. This is only supposed to be a test to measure the differences in student performance, but I can't see how a no-AI policy would be enforceable anyway. Pandora's box has been opened, but is that all that bad? Starting as early as my precalculus class in high school, I was expected to use a graphing calculator, as drawing graphs and finding intersections manually were deemed an inefficient use of time and effort.
For clarification, the course places heavy weight on a weekly "Workout of the Day", often referred to as a WOD. These are all-or-nothing coding tests designed to be completed within an hour. In preparation for these graded WODs, a practice WOD of similar content is held in class in advance. Near the beginning of the semester, the WODs were mostly concerned with the JavaScript language, and the problems were simple enough that generative AI could return passing code on the first try. However, since it was only practice, I would try to get to the solution without using it until I got stuck or the deadline approached, at which point ChatGPT 3.5 would present me, in a matter of seconds, with a solution that might have taken me tens of minutes to work out. Later in the semester, the course material progressed faster than my proficiency, leading me to start using ChatGPT as an "I win" button until the focus of the class turned to frameworks, where it was less useful.
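To give a sense of scale, the early problems were on the order of the sketch below. This is a made-up example rather than an actual WOD prompt, but it's representative of the kind of difficulty a first ChatGPT reply could clear:

```javascript
// Hypothetical early-WOD-scale exercise: return the longest string in
// an array. Problems of roughly this size were where generative AI
// would usually produce passing code on the first try.
const longestWord = (words) => {
  let longest = '';
  for (const word of words) {
    if (word.length > longest.length) {
      longest = word;
    }
  }
  return longest;
};

console.log(longestWord(['map', 'filter', 'reduce'])); // "filter"
```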
I tried not to use generative AI at the beginning of the semester. I managed to keep it up for a few weeks until I got stuck one day and spent 20 minutes signing up for a [ChatGPT account](https://openai.com/chatgpt). My behavior eventually defaulted to starting off with ChatGPT and spending the rest of the time studying the structure of the code. That usage stopped after the WODs shifted focus to frameworks, since those came with more granular instructions. There would be multiple files involved, so the prompts for ChatGPT would have had to be super long, and I didn't trust it to keep internal consistency throughout its solution.
In addition to the in-class WODs, the class has take-home WODs to familiarize students with languages, functions, frameworks, and services. I feel like generative AI isn't even applicable to the ones beyond the JavaScript exercises.
Although ChatGPT is most infamous for students using it to write essays for them, I have yet to use it to create anything more than bare-bones outlines. Its prose reminds me of a middle schooler's, and it is incapable of replicating my sarcasm.
I have yet to use generative AI for the final group project. ChatGPT's training data is intentionally cut off a couple of years behind the present, so it lacks detailed information about the latest versions of React-Bootstrap.
I don't think I've done anything to learn concepts using generative AI, other than to (re)learn the syntax for arrow functions in JavaScript.
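For reference, that refresher boils down to something like the sketch below (my own minimal recap, not ChatGPT's actual output):

```javascript
// Traditional function expression
const add = function (a, b) {
  return a + b;
};

// Arrow function equivalent; a single-expression body returns implicitly
const addArrow = (a, b) => a + b;

// A lone parameter can drop its parentheses, but a braced body needs an
// explicit return statement
const square = n => {
  return n * n;
};

console.log(add(2, 3), addArrow(2, 3), square(4)); // 5 5 16
```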
I believe that the questions brought up in class and in the class's Discord server are niche enough that generative AI wouldn't be able to solve them.
Given that most of the questions in the smart-questions channel are for help with troubleshooting npm problems, I'm not sure how much generative AI would help, though I have yet to try using it for that purpose.
I didn't explicitly ask for examples, though ChatGPT does usually create a sample or two while explaining a concept.
Occasionally I do copy and paste a line or two of code I don't quite understand to see whether it can understand the context, and it usually does.
I don't think I've used generative AI to write any code aside from the WODs.
I have yet to use generative AI to write comments and documentation, though I have the feeling it should do fairly well.
I avoid getting ChatGPT to do fine corrections, as it tends to mutate existing code in subtle yet inconvenient places as prompts get larger.
I don't think I use generative AI for the purposes of this class other than the above.
I believe that the existence of generative AI has significantly lowered the skill floor required to score well in this class. Because of the all-or-nothing nature of the weekly WODs, there is pressure for students to open with an alpha strike from generative AI instead of working through the problems themselves. While I believe there wouldn't be much difference in the long term, the lack of repetition of simple exercises in this class doesn't allow for heuristic learning. People can learn to touch type just by using a keyboard and pecking each key enough times to remember where it is, and people can learn single-digit multiplication by punching the numbers into a calculator enough times, but it's unlikely that anyone learns single-digit multiplication by finding a few products of four-digit numbers. And like a calculator, large language models are black-boxed enough that users don't need to know what steps produce a result, so they never need the practice that builds proficiency.
I believe the copy-paste potential of generative AI is inversely proportional to the size of the codebase, as current token limitations already cause problems with only a few hundred lines of code across a few files. For codebases that contain hundreds of directories and thousands of files, I'm not sure whether correcting the errors would take less time than writing from scratch. I am also not sure whether future development will allow contextual information to scale well.
I think the existence of generative AI has severe implications for the job market. While I don't think humanity will manage to create a general intelligence this century, the perception that one is near will cause many companies to forgo positions that can't quite be replaced by generative AI. Similar to the advent of machine translation in the late '90s and early 2000s, many people will be laid off, only for the companies to realize that the new technology isn't quite up to par with expectations and rehire those same people to fix all the problems in the machine output, but at a much lower salary. In other respects, the advent of generative AI will be like that of the camera and Adobe Photoshop, where entire swaths of the job market are made obsolete as some services become achievable by almost anyone.
There is also a concerning trend: the big players actively researching generative AI are extremely secretive about their development techniques. It is quite possible that they will never license their models for local execution, offering only subscriptions for querying models running on corporate servers. The ability to selectively ban users from these services would give those companies enormous leverage.
Most concerningly, it seems that generative AI is taking the roles of those who create things. Visual art, music, and writing, the activities which most people consider to be what differentiates human from machine, are already being done in the free market by machines instead of people, since machines can create things very cheaply. Meanwhile, the most menial of jobs, such as fast food service and retail work, are still being done by humans because robots are expensive.
I don't think generative AI will leave the academic sphere anytime soon. The only methods of preventing its use in the classroom so far are blocking access and draconian surveillance. The former is difficult to enforce, as the usefulness of generative AI is enough for students to consider connecting through locally hosted VPNs or Tor bridges. The latter is a gross violation of ethical standards. If large language models can be locally trained and executed by tech-savvy individuals on their privately owned machines, similar to how image generation models can be at the moment, it'll become illogical to prevent their usage. Disallowing their use for computer science would be like disallowing the use of writing utensils for mathematics or telescopes for astronomy.
Generative AIs require a lot of data to train. Instructors and students who want to train a specialized model for a single class or subject may struggle to assemble a large enough dataset for training.
The nature of learning material will probably have to change as generative AIs get better at their function. Simple coding assignments can be done almost entirely without thought or effort, while problems complex enough to give generative AI trouble may result in a steep escalation of difficulty without getting the intended lesson across. Assignments might change to be more like those in a math class, with multiple standalone puzzles designed to teach a process.
I think the overall skill floor to get into the computer sciences will be lowered if well-trained large language models start being integrated into software. If the ability to diagnose and debug code gets better, people will be able to turn ideas into code more efficiently. This may result in courses that don't need to delve into specific implementations past the basics.