378 points by todsacerdoti | 5 comments

xnorswap
I won't say too much, but I recently had an experience where it was clear that, when talking with a colleague, I was getting back ChatGPT output. I felt sick; this just isn't how it should be. I'd rather have been ignored.

It didn't help that the LLM was confidently incorrect.

The smallest things can throw off an LLM, such as a difference in naming between configuration and implementation.

In the human world, with legacy stuff you can get into a situation where "everyone knows" that the foo setting is actually the setting for Frob, but an LLM will happily try to configure Frob anyway, or worse, try to implement Foo from scratch.
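
Something like this, just to illustrate the shape of it (a made-up Python sketch, not real code, reusing the foo/Frob placeholders):

    # Hypothetical sketch: "everyone knows" the legacy key "foo" is actually
    # Frob's timeout, but nothing in the code says so.
    CONFIG = {
        "foo": 30,  # tribal knowledge: this really means frob_timeout_seconds
    }

    class Frob:
        def __init__(self, config: dict):
            # A human who knows the history reaches for "foo" here; an LLM
            # reading only this file will go hunting for a "frob_timeout" key,
            # or worse, start building out a whole new Foo feature instead.
            self.timeout_seconds = config.get("foo", 10)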

I'd always rather deal with bad human code than bad LLM code, because you can get into the mind of the person who wrote the bad human code. You can try to understand their misunderstanding. You can reason through their faulty reasoning.

With bad LLM code, you're dealing with a soul-crushing machine that cannot (yet) and will not (yet) learn from its mistakes, because it does not believe it makes mistakes (no matter how apologetic it gets).

1. jjice
It's so upsetting to see people take the powerful tool that is an LLM and pretend it's a solution for everything. It's not. LLMs are awesome at a lot of things, but they need a user with the context and knowledge to know when to apply them and when to steer them in a different direction.

The amount of absolutely shit LLM code I've reviewed at work is so sad, especially because I know the LLM could've written much better code if the prompter had done a better job. The user needs to know whether a given task is even viable for an LLM, and will often need to make some manual changes anyway. When we pretend an LLM can do it all, we create slop.

I just had a coworker a few weeks ago produce a simple function that wrapped a DB query (normal so far), but with 250 lines of tests for it. All the code was clearly LLM-generated (the comments explaining the most mundane code were the biggest giveaway). The tests tested nothing: they mocked the ORM and then asserted on the return value of the mock. Were we testing that the mocking framework works? I told him I didn't think the tests added much value since the function was so simple, and that we could remove them. He said he thought they provided value, gave no explanation, and merged the code.
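
To give a sense of the shape of it (a hypothetical Python reconstruction, not his actual code or names):

    # Hypothetical reconstruction of the anti-pattern, not the real code.
    from unittest.mock import MagicMock

    def get_user(session, user_id):
        # The one-line wrapper that was under test.
        return session.query("User").get(user_id)

    def test_get_user_returns_user():
        session = MagicMock()
        session.query.return_value.get.return_value = {"id": 1, "name": "Ada"}

        result = get_user(session, 1)

        # This only proves that MagicMock hands back what we configured it
        # to hand back; no real query, schema, or mapping is exercised.
        assert result == {"id": 1, "name": "Ada"}

Multiply that by 250 lines and you get a wall of green checkmarks that verify nothing.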

Now fast forward to the other day: I ran into the rest of that code again, and it's sinking in how bad the rest of the LLM code was. Not that it's wrong, but it's poorly designed and full of bloat.

I have no issue with LLMs themselves - they can do some incredible things and they're a powerful tool in the tool belt, but they need to be used in conjunction with a human who knows what they're doing (at least in the context of programming).

Kind of a rant, but I absolutely see a future where some code bases are well maintained and properly built, while others have tacked on years of vibe-coded trash that now only an LLM can even understand. And the thing that will decide which direction a code base goes in will be the engineers involved.

2. SoftTalker
> I absolutely see a future where some code bases are well maintained and properly built, while others have tacked on years of vibe-coded trash

Technical debt at a payday loan interest rate.

3. pmarreck
This is why Windsurf's name (formerly Codeium) is so genius.

Windsurfing (the real activity) requires two separate skills:

1) How to sail in the first place

2) How to balance on the windsurfer while the wind is blowing on you

If you can do both of those things, you can go VERY fast and it is VERY fun.

The first maps to "understanding software engineering" (to some extent). The second maps to "understanding good prompting while the heat of a deadline is on you". Without both, you are just creating slop (falling in the water repeatedly and NOT going faster than either surfing or sailing alone). Junior devs who lean too hard on LLM assistance right off the bat are basically falling in the water repeatedly (and worse, without realizing it).

I would at minimum have a policy of "if you do not completely understand the code written by an LLM, you will not commit it." (This would come right after "you will not commit code without it being tested and the tests all passing.")

4. siva7
That's why some teams have a rule that the PR author isn't allowed to merge; only one of the approvers can.
5. chasd00
> Kind of a rant, but I absolutely see a future where some code bases are well maintained and properly built, while others have tacked on years of vibe-coded trash that now only an LLM can even understand.

This is offshoring all over again. At first, every dev in the US was going to be out of a job because of how expensive they were compared to offshore devs. Then the results started coming back: there was some very good work done offshore, but there were tons and tons of things that had to be unwound and fixed by onshore teams. Entire companies and careers were built on just fixing stuff coming back from offshore dev teams. In the end, it took a mix of both to realize more value per dev $.