
371 points | ulrischa | 1 comment
notepad0x90 | No.43236385
My fear is that LLM-generated code will look great to me and it will work, but I won't understand it fully. And since I didn't author it, I won't be great at finding bugs or logical flaws in it. Especially if you consider coding as piecing things together instead of implementing a well-designed plan: lots of pieces make up the whole picture, but many of those pieces are now put there by an algorithm making educated guesses.

Perhaps I'm just not that great of a coder, but I do have lots of code where, if someone took a look at it, it might look crazy, yet it really is the best solution I could find. I'm concerned LLMs won't do that: they won't take the risks a human would, or understand the implications of a block of code beyond its application in that specific context.

Other times, I feel like I'm pretty good at figuring things out and struggling in a time-efficient way before arriving at a solution. LLM-generated code is neat, but I still have to spend a similar amount of time, except now I'm doing more QA and cleanup work instead of debugging and figuring out new solutions, which isn't fun at all.

1. madeofpalk | No.43241112
Do you not review code from your peers? Do you not search online and try to grok code from StackOverflow or documentation examples?

All of these can vary wildly in quality. Maybe it's because I mostly use coding LLMs as either a research tool or to write reasonably small, easy-to-follow chunks of code, but I find it no different from all the other reading and understanding of other people's code I already have to do.