
371 points ulrischa | 1 comment
notepad0x90 No.43236385
My fear is that LLM-generated code will look great to me: I won't understand it fully, but it will work. And since I didn't author it, I won't be great at finding bugs or logical flaws in it. Especially if you think of coding as piecing things together instead of implementing a well-designed plan: lots of pieces make up the whole picture, but a lot of those pieces are now put there by an algorithm making educated guesses.

Perhaps I'm just not that great of a coder, but I do have lots of code where, if someone took a look at it, it might look crazy, but it really is the best solution I could find. I'm concerned LLMs won't do that: they won't take the risks a human would, or understand the implications of a block of code beyond its application in that specific context.

Other times, I feel like I'm pretty good at figuring things out and struggling in a time-efficient way before arriving at a solution. LLM-generated code is neat, but I still have to spend a similar amount of time, except now I'm doing more QA and cleanup work instead of debugging and figuring out new solutions, which isn't fun at all.

replies(13): >>43236847 #>>43237043 #>>43237101 #>>43237162 #>>43237387 #>>43237808 #>>43237956 #>>43238722 #>>43238763 #>>43238978 #>>43239372 #>>43239665 #>>43241112 #
sunami-ai No.43237808
Worst part is that the patterns of implementation won't be consistent across the pieces. So debugging a whole codebase authored with LLM-generated code is like having to debug a codebase where every function was written by a different developer and no one followed any standards. I guess you can specify the coding standards in the prompt and ask it to use FP-style programming only, but I'm not sure how well it can follow them.
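
For what it's worth, pinning the conventions up front looks roughly like this; a minimal sketch assuming the OpenAI Python SDK, with the model name and the style rules themselves as placeholders:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # The "coding standard" lives entirely in the system prompt; nothing enforces it.
    STYLE_GUIDE = (
        "Follow PEP 8. Use descriptive, domain-specific names. "
        "Prefer a functional style: pure functions, no mutation of arguments, "
        "no global state. Raise exceptions for errors instead of returning None."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": STYLE_GUIDE},
            {"role": "user", "content": "Write a function that totals an order's line items."},
        ],
    )
    print(response.choices[0].message.content)

Even then, nothing checks the output against the guide, so consistency across many generations, or across a whole codebase, is not guaranteed.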
replies(1): >>43238272 #
QuiDortDine No.43238272
Not well, at least for ChatGPT. It can't follow my custom instructions, which can be summed up as "follow PEP-8 and don't leave trailing whitespace".
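
Those two rules in particular don't really need the model's cooperation; they can be enforced after generation with a formatter, or with a few lines of Python like the sketch below (the command-line handling is illustrative only):

    import pathlib
    import sys

    def strip_trailing_whitespace(path: pathlib.Path) -> None:
        """Rewrite a file with trailing spaces and tabs removed from every line."""
        text = path.read_text()
        cleaned = "\n".join(line.rstrip() for line in text.splitlines())
        if text.endswith("\n"):
            cleaned += "\n"  # keep the trailing newline if the file had one
        path.write_text(cleaned)

    if __name__ == "__main__":
        # Usage: python strip_ws.py file1.py file2.py ...
        for name in sys.argv[1:]:
            strip_trailing_whitespace(pathlib.Path(name))

A formatter such as black (or a pre-commit hook) covers the PEP-8 side the same way, which sidesteps the model's inconsistency for formatting, if not for the structural issues discussed above.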
replies(1): >>43239996 #
jampekka No.43239996
I don't think they meant formatting details.
replies(2): >>43240256 #>>43241447 #
6r17 No.43241447
Formatting is just the dot on the i; there are 200 other small details that are completely off-putting to me: naming conventions (AIs are lazy and tend to use generic names with no meaning, such as "Glass" instead of "GlassProduct"), error-handling conventions, and so on.
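
To make the naming complaint concrete, the difference is roughly the following; the attributes are invented for the example:

    # The generic name an LLM tends to reach for: it says nothing about the domain.
    class Glass:
        pass

    # A name tied to what the object actually represents (hypothetical attributes).
    class GlassProduct:
        """A glass item as sold in a catalogue."""

        def __init__(self, sku: str, width_mm: int, height_mm: int, unit_price: float):
            self.sku = sku
            self.width_mm = width_mm
            self.height_mm = height_mm
            self.unit_price = unit_price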

But the most troublesome thing to me is that it just "pisses" out code with no afterthought about the problem it is solving or the person it is talking to.

The number of times I have to repeat myself just to get a stubborn answer with no discussion is alarming. It does not benefit my well-being and is annoying to work with, except for a handful of exploratory cases.

I believe LLMs are actually the biggest data heist ever organized. We believe those models will get better at doing their jobs, but the reality is that we are just giving away code, knowledge, and ideas at scale, correcting the model for free, and paying to be allowed to do so. And when we look at the 37% minimum hallucination rate, we can more easily understand that the actual thought comes from the human using it.

I'm not comfortable having to argue with a machine and explain to it what I'm doing, how, and why, just to get it to spam me with things I have to correct afterwards anyway.

The worst part is that all that data is the best insight into everything. How many people ask for X? How much time did they spend trying to do X? What were they trying to achieve? Who are their customers? And so on.