
371 points ulrischa | 1 comments
notepad0x90 (No.43236385)
My fear is that LLM-generated code will look great to me: I won't understand it fully, but it will work. And since I didn't author it, I won't be great at finding bugs or logical flaws in it. Especially if you consider coding as piecing things together rather than implementing a well-designed plan: lots of pieces make up the whole picture, but many of those pieces are now placed by an algorithm making educated guesses.

Perhaps I'm just not that great a coder, but I have lots of code where, if someone took a look at it, it might look crazy, yet it really is the best solution I could find. I'm concerned LLMs won't do that; they won't take the risks a human would, or understand the implications of a block of code beyond its application in that specific context.

Other times, I feel like I'm pretty good at figuring things out, struggling in a time-efficient way before arriving at a solution. LLM-generated code is neat, but I still spend similar amounts of time, except now I'm doing more QA and cleanup work instead of debugging and figuring out new solutions, which isn't fun at all.

tokioyoyo (No.43237101)
The big argument against it is that, at some point, there's a chance you won't really need to understand what the code does. The LLM writes code, the LLM writes tests, you find bugs, the LLM fixes the code, the LLM adds test cases for the found bug. Rinse and repeat.
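The loop described above can be sketched in a few lines. This is a minimal illustration, not a real integration: `ask_llm` is a hypothetical stand-in for any code-generation API, mocked here so the example runs on its own.

```python
# Sketch of the generate -> test -> fix loop. `ask_llm` is a hypothetical
# stand-in for an LLM call, mocked so the example is self-contained:
# the first attempt has an off-by-one bug; the "fix" round corrects it.

def ask_llm(prompt: str, previous: str = "") -> str:
    if "fix" in prompt:
        return "def count_words(s):\n    return len(s.split())\n"
    return "def count_words(s):\n    return len(s.split()) - 1\n"

def run_tests(source: str) -> bool:
    # Execute the generated source and check it against a known case.
    ns = {}
    exec(source, ns)
    return ns["count_words"]("one two three") == 3

code = ask_llm("write count_words")
for _ in range(3):  # rinse and repeat, with a bounded retry budget
    if run_tests(code):
        break
    code = ask_llm("fix count_words", previous=code)

assert run_tests(code)  # the loop converged on passing code
```

In a real setup the test suite would itself be LLM-generated and the loop would feed failing test output back into the prompt; the structure stays the same.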
SamPatt (No.43237342)
For fairly simple projects built from scratch, we're already there.

Claude Code has been doing all of this for me on my latest project. It's remarkable.

It seems inevitable it'll get there for larger and more complex code bases, but who knows how far away that is.