53 points cmpit | 2 comments
artemsokolov ◴[] No.41918225[source]
1972: Using Anything Other Than Assembly Will Make You a Bad Programmer

1995: Using a Language with a Garbage Collector Will Make You a Bad Programmer

2024: Using AI Generated Code Will Make You a Bad Programmer

replies(14): >>41919060 #>>41919523 #>>41919644 #>>41919894 #>>41920479 #>>41920712 #>>41920753 #>>41920815 #>>41920819 #>>41920944 #>>41922549 #>>41923314 #>>41929277 #>>41934480 #
colincooke ◴[] No.41920815[source]
To me the issue with AI-generated code, and what is different from prior innovations in software development, is that it is the wrong abstraction (or, one could argue, not an abstraction at all).

Most of SWE (and much of engineering in general) is built on abstractions -- I use Numpy to do math for me, React to build a UI, or Moment to do date operations. All of these libraries offer abstractions that give me high leverage on a problem in a reliable way.

The issue with the current state of AI tools for code generation is that they don't offer a reliable abstraction; instead, the abstraction is the prompt/context, and its reliability can vary quite a bit.
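
A rough sketch of the difference (the llm_generate call below is purely hypothetical, standing in for whatever code-generation API is in play):

    import numpy as np

    # Library abstraction: a fixed, documented contract. Same inputs,
    # same specified result, every time.
    a = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([5.0, 10.0])
    x = np.linalg.solve(a, b)  # deterministic: solves a @ x == b

    # Prompt "abstraction": the contract is the prompt/context itself,
    # and the generated code can differ between runs or models.
    prompt = "Write a Python function that solves a 2x2 linear system"
    # generated_code = llm_generate(prompt)  # hypothetical call; output still needs review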

I would feel like one hand is tied behind my back without LLM tools (I use both co-pilot and Gemini daily); however, the amount of code I allow these tools to write _for_ me is quite limited. I use these tools to automate small snippets (co-pilot) or to help me ideate (Gemini). I wouldn't trust them to write more than a contained function, as I don't trust that it'll do what I intend.

So while I think these tools are amazing for increasing productivity, I'm still skeptical of using them at scale to write reliable software, and I'm not sure if the path we are on with them is the right one to get there.

replies(1): >>41921633 #
danielmarkbruce ◴[] No.41921633[source]
It isn't an abstraction. Not everything is an abstraction. There is a long history of tools which are not abstractions. Linters. Static code analysis. Debuggers. Profiling tools. Autocomplete. IDEs.
replies(1): >>41930457 #
1. SaucyWrong ◴[] No.41930457[source]
I can’t tell if this is an argument against the parent or just a semantic correction. Assuming the former, I’ll point out that every tool classification you’ve mentioned has expected correct and incorrect behavior, and LLM tools…don’t. When LLMs produce incorrect or unexpected results, the refrain is, inevitably, “LLMs just be that way sometimes.” Which doesn’t invalidate them as a tool, but they are in a class of their own in that regard.
replies(1): >>41931211 #
2. danielmarkbruce ◴[] No.41931211[source]
It's not a semantic issue.

Yeah, they are generally probabilistic. That has nothing to do with abstraction. There are good abstractions built on top of probabilistic concepts, like RNGs, crypto libraries, etc.
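
For instance (a minimal sketch using Python's standard library):

    import random
    import secrets

    # The underlying values are probabilistic, but the abstraction's contract is
    # reliable: type, range, and strength are always what the docs promise.
    token = secrets.token_hex(16)   # always 32 hex chars, from a CSPRNG
    roll = random.randint(1, 6)     # always an int in [1, 6]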