1995: Using a Language with a Garbage Collector Will Make You a Bad Programmer
2024: Using AI Generated Code Will Make You a Bad Programmer
I get where this is coming from, and it is true sometimes; my favorite example is Google Maps dulling people's sense of direction. But it's quite silly to assume this for all tools and all skill sets, especially with more creative and complex skills like programming.
Wise and experienced practitioners will stay grounded in the fundamentals while judiciously adding new tools to their kit. This requires experimentation and continual learning.
The people whose skills will be impacted the most are those who didn’t have strong fundamentals in the first place, and only know the craft through that tool.
Edit: forgive my frequent edits in the 10 minutes after initially posting
Both of these remain true today, which is why we always interview people at one layer below what the job requires, to make sure they know what they're doing.
Writing C/C++? Know what the compiled output looks like. Using GC-based languages? Know the cleanup cycle (if any).
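To make "one layer below" concrete, here's a minimal sketch (Python, purely as an illustration; the Node class is a toy): most CPython objects die the moment their refcount hits zero, but a reference cycle survives until the cyclic collector runs -- exactly the kind of cleanup-cycle detail worth knowing.

    # CPython frees most objects via reference counting, but objects
    # that reference each other form a cycle that only the cyclic
    # garbage collector can reclaim.
    import gc

    class Node:  # hypothetical toy class
        def __init__(self):
            self.other = None

    a, b = Node(), Node()
    a.other, b.other = b, a   # a <-> b: refcounts can never reach zero

    del a, b                  # the cycle is now unreachable but not freed
    print(gc.collect())       # the collector finds and reclaims it (positive count)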
I would wager the third also holds true.
Most of SWE (and much of engineering in general) is built on abstractions -- I use NumPy to do math for me, React to build a UI, or Moment to do date operations. All of these libraries offer abstractions that give me high leverage on a problem in a reliable way.
The issue with the current state of AI tools for code generation is that they don't offer a reliable abstraction; instead, the abstraction is the prompt/context, and its reliability can vary quite a bit.
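A rough sketch of that contrast (the llm_client and its complete() method are hypothetical stand-ins, not any real API): the library call has a fixed, documented contract, while the prompt's "contract" is natural language whose reliability varies per call.

    import numpy as np

    def mean_with_library(xs):
        # Deterministic abstraction: same input, same output, with
        # behavior documented by the library.
        return np.mean(xs)

    def mean_with_llm(xs, llm_client):
        # The "abstraction" here is the prompt plus whatever context the
        # model has; llm_client.complete() is a hypothetical call, and
        # its output still has to be parsed and verified.
        prompt = f"Compute the arithmetic mean of {list(xs)} and reply with only the number."
        return float(llm_client.complete(prompt))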
I would feel like I had one hand tied behind my back without LLM tools (I use both Copilot and Gemini daily); however, the amount of code I allow these tools to write _for_ me is quite limited. I use them to automate small snippets (Copilot) or to help me ideate (Gemini). I wouldn't trust them to write more than a contained function, as I don't trust that the result will do what I intend.
So while I think these tools are amazing for increasing productivity, I'm still skeptical of using them at scale to write reliable software, and I'm not sure if the path we are on with them is the right one to get there.
- using a debugger will make you a bad programmer
- using an IDE will make you a bad programmer
- using Google will make you a bad programmer
- using StackOverflow will make you a bad programmer
Hint: It's not the tools, it's how you use them.
If garbage collectors only did the correct thing 90% of the time, and non-deterministically did something stupid the other 10%, then, er, yeah, it very much would!
There's a reason that conservative GCs for C didn't _really_ catch on... (It would be unfair to say they're as broken as an LLM, but they certainly have their... downsides.)
I can trust that a garbage collector will allocate and clean up memory correctly.
I cannot trust that an AI will generate quality code; I have to review its output.

As someone who has been stuck doing nothing but reviewing other people's code for the last few months, I can confidently say it would take me less time to code the solution myself than to read, digest, provide feedback on, and review changes to someone else's code. If I cannot write the code myself, I cannot accurately review its output. If I can write the code myself, it would be faster (and more fulfilling) to do that than to review output from an AI.
Yeah, they are generally probabilistic, but that has nothing to do with abstraction. There are good abstractions built on top of probabilistic concepts: RNGs, crypto libraries, etc.
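For example (standard library, nothing hypothetical here): the output is random, but the contract -- so many cryptographically strong random bytes per call -- holds on every call.

    import secrets

    # The underlying source is probabilistic, yet the interface is a
    # reliable abstraction: every call returns exactly 16 random bytes
    # rendered as 32 hex characters.
    token = secrets.token_hex(16)
    assert len(token) == 32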