
371 points ulrischa | 5 comments
notepad0x90 No.43236385
My fear is that LLM-generated code will look great to me, and it will work, but I won't understand it fully. Since I didn't author it, I won't be good at finding its bugs or logical flaws. Especially if you consider coding as piecing things together instead of implementing a well-designed plan: lots of pieces make up the whole picture, but many of those pieces are now put there by an algorithm making educated guesses.

Perhaps I'm just not that great a coder, but I have lots of code where, if someone took a look at it, it might look crazy, but it really is the best solution I could find. I'm concerned LLMs won't do that: they won't take the risks a human would, or understand the implications of a block of code beyond its application in that specific context.

Other times, I feel like I'm pretty good at figuring things out, struggling in a time-efficient way before arriving at a solution. LLM-generated code is neat, but I still spend a similar amount of time on it, except now I'm doing more QA and cleanup instead of debugging and figuring out new solutions, which isn't fun at all.

replies(13): >>43236847 #>>43237043 #>>43237101 #>>43237162 #>>43237387 #>>43237808 #>>43237956 #>>43238722 #>>43238763 #>>43238978 #>>43239372 #>>43239665 #>>43241112 #
1. JimDabell No.43239665
> My fear is that LLM-generated code will look great to me, and it will work, but I won't understand it fully.

If you don’t understand it, ask the LLM to explain it. If you can't get an explanation that clarifies things, write the code yourself. Don’t blindly accept code you don’t understand.

This is part of what the author was getting at when they said that LLMs surface existing problems rather than introducing new ones. Have you been approving PRs from human developers without understanding them? You shouldn’t be doing that. If an LLM subsequently comes along and you accept its code without understanding it either, that’s not a new problem the LLM introduced.

replies(2): >>43240848 #>>43241055 #
2. sarchertech No.43240848
No one takes the time to fully understand all the PRs they approve. And even when you do take the time to “fully understand” the code, it’s very easy for your brain to trick you into believing you understand it.

At least when a human wrote it, someone understood the reasoning.

replies(1): >>43241744 #
3. np- No.43241055
Code review with a human is a two-way street. When I find code that is ambiguous, I can ask the developer to clarify, and they can either justify it or fix it before the code is approved. I don't have to write it myself, and if the developer is simply talking in circles, I can escalate or reject the change. That failure case is far less likely with a trusted human than with an LLM. "Write the code yourself" at that point is not viable for any non-trivial team project; people have their own contexts to maintain and their own commitments and projects to deliver. The hard part isn't typing out the code (typing fast is the only real benefit an LLM offers); it's fully understanding the problem space. Working with a trusted human is very different from working with an LLM.
4. sgarland No.43241744
> No one takes the time to fully understand all the PRs they approve.

I was appalled when I was effusively thanked for catching some bugs in PRs. “No one really reads these,” is what I was told. Then why the hell do we have a required review?!

replies(1): >>43242162 #
5. sarchertech No.43242162
Cargo culting.