
258 points by signa11 | 3 comments
kirubakaran No.42732804
> A major project will discover that it has merged a lot of AI-generated code

My friend works at a well-known tech company in San Francisco. He was reviewing his junior team member's pull request. When asked what a chunk of code did, the team member matter-of-factly replied "I don't know, chatgpt wrote that"

gunian No.42733790
The saddest part is that if I wrote the code myself it would be worse, lol. GPT is coding at an intern level, and as a dumb human being I feel sad that I've been replaced, but it's not as catastrophic as they made it seem.

It's interesting to see the underlying anxiety among devs, though. I think in the back of their minds they know the models will get better and better and could someday reach staff engineer level.

nozzlegear No.42733866
I don't think that's the concern at all. The concern (imo) is that you should at least understand what the code is doing before you accept it verbatim and add it to your company's codebase. Its potential to introduce bugs or security flaws is too great to accept it without understanding it.
dataviz1000 No.42734129
I've been busy with a personal coding project, and working through problems with an LLM (which I haven't used professionally yet) has been great. Countless times in the past I've spent hours poring over Stack Overflow and GitHub repository code looking for solutions. Quite often I had to solve the problem myself, and I would always post the answer a day or two later below my own question on Stack Overflow. A big milestone for a software engineer is reaching the point where a difficult problem can't be solved by internet search, by asking colleagues, or by asking on Stack Overflow no matter how well written and detailed the question is, because the problems are esoteric -- the edge of innovation is solitude.

Today I give the input to the LLM, tell it what the output should be, and magically a minute later it's solved. I was thinking today about how long it has been since I was stuck and stressed on a problem. With this personal project I'm prototyping and doing a lot of experimentation, so having an LLM saves a ton of time and keeps the momentum at a fast pace. The iteration process is a little different: frequently stop, refactor, clean up, make the code consistent, and log the input and output to the console to verify.
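The verify step looks roughly like this; a minimal TypeScript sketch where `solve` stands in for whatever the LLM generated, and the helper and sample cases are made up for illustration:

```typescript
// Check LLM-generated code against known input/output pairs,
// logging each case to the console as described above.
type Case<I, O> = { input: I; expected: O };

function verify<I, O>(solve: (input: I) => O, cases: Case<I, O>[]): boolean {
  let allPassed = true;
  for (const { input, expected } of cases) {
    const actual = solve(input);
    const passed = JSON.stringify(actual) === JSON.stringify(expected);
    console.log({ input, expected, actual, passed });
    if (!passed) allPassed = false;
  }
  return allPassed;
}

// Example: checking a (hypothetical) LLM-written parser against two cases.
const cases: Case<string, number>[] = [
  { input: "21", expected: 21 },
  { input: " 42 ", expected: 42 },
];
verify((s) => parseInt(s.trim(), 10), cases);
```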

Perhaps take the intern's LLM code and have the LLM do the code review. Keep reviewing the code with the LLM until the intern gets it right.
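A rough sketch of what that review loop could look like, assuming an OpenAI-style chat completions endpoint; the model name, the "LGTM" convention, and the revise() callback (the intern fixing the code between rounds) are all placeholders, not a real workflow:

```typescript
// Run a diff through an LLM reviewer until it replies "LGTM" or a round
// limit is hit. Endpoint shape follows OpenAI's chat completions API.
async function reviewUntilClean(
  diff: string,
  revise: (diff: string, review: string) => Promise<string>,
  maxRounds = 3,
): Promise<string> {
  let current = diff;
  for (let round = 1; round <= maxRounds; round++) {
    const res = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: JSON.stringify({
        model: "gpt-4o", // placeholder model name
        messages: [
          {
            role: "system",
            content:
              "You are a strict code reviewer. Reply LGTM if the diff is " +
              "acceptable; otherwise list the blocking issues.",
          },
          { role: "user", content: current },
        ],
      }),
    });
    const data = await res.json();
    const review: string = data.choices[0].message.content;
    console.log(`Round ${round}:\n${review}`);
    if (review.trim().startsWith("LGTM")) break;
    current = await revise(current, review); // the human fixes the diff here
  }
  return current;
}
```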

nozzlegear No.42734549
My experience with LLMs and code generation is usually the opposite, even using ChatGPT and the fancy o1 model. Maybe it's because I write a lot of F#, and the training data for it is probably sparse. When I'm not writing F#, I like to write functional-style code. Either way, nine times out of ten I'm only using LLMs for "rubber ducking," as the code they give me usually falls flat on its face with obvious compiler errors.

I do agree that I feel much more productive with LLMs, though. Just being able to rubber duck my ideas with an AI and talk about code is extremely helpful, especially because I'm a solo dev/freelancer and don't usually have anyone else to do that with. And even though I don't typically use the code they give me, it's still helpful to see what the AI is thinking and explore that.

dataviz1000 No.42734796
I have had similar experiences with less popular libraries. My favorite state machine library released a new version a year ago, and the LLMs, regardless of prompts telling them not to, will always use the old API. I find LLMs worthless for organizing ideas across multiple files. And, worst of all, they are not by their nature capable of consistency.

On the other hand, I use d3.js for data visualization, which has had a stable API for years, likely hundreds of thousands of small examples contained in a single file, and many blog posts, O'Reilly books, and tutorials. The LLMs create perfect, high-quality data visualizations. Any request to change one, such as adding dynamic sliders or styling tooltips, is handled without errors or bugs. People who do data visualization will likely be the first to go. :(
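For example, this is the kind of small, single-file d3 snippet the models have seen thousands of times; a minimal sketch assuming d3 v6+ (the #chart container and the data are made up):

```typescript
import * as d3 from "d3";

// A bar chart with a styled hover tooltip, all in one file.
const data = [4, 8, 15, 16, 23, 42];
const width = 420;
const barHeight = 20;

// Tooltip div, hidden until hover; styled inline as LLM examples tend to do.
const tooltip = d3
  .select("body")
  .append("div")
  .style("position", "absolute")
  .style("padding", "4px 8px")
  .style("background", "#333")
  .style("color", "#fff")
  .style("border-radius", "4px")
  .style("visibility", "hidden");

const x = d3.scaleLinear().domain([0, d3.max(data)!]).range([0, width]);

d3.select("#chart")
  .append("svg")
  .attr("width", width)
  .attr("height", barHeight * data.length)
  .selectAll("rect")
  .data(data)
  .join("rect")
  .attr("y", (_, i) => i * barHeight)
  .attr("width", (d) => x(d))
  .attr("height", barHeight - 1)
  .attr("fill", "steelblue")
  .on("mouseover", (event, d) =>
    tooltip
      .style("visibility", "visible")
      .style("left", `${event.pageX + 10}px`)
      .style("top", `${event.pageY}px`)
      .text(`value: ${d}`),
  )
  .on("mouseout", () => tooltip.style("visibility", "hidden"));
```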

I am concerned that new libraries will not gain traction because the LLMs haven't been trained to implement them. We will be able to implement all the popular libraries, languages, and techniques quickly; however, innovation might stall if we rely on these machines stuck in the past.