
270 points imasl42 | 6 comments
strix_varius ◴[] No.45659881[source]
To me, the most salient point was this:

> Code reviewing coworkers are rapidly losing their minds as they come to the crushing realization that they are now the first layer of quality control instead of one of the last. Asked to review; forced to pick apart. Calling out freshly added functions that are never called, hallucinated library additions, and obvious runtime or compilation errors. All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”

LLMs have made Brandolini's law ("The amount of energy needed to refute bullshit is an order of magnitude larger than to produce it") look like an understatement. When an inexperienced or just inexpert developer can generate thousands of lines of code in minutes, the responsibility for keeping a system correct & sane gets offloaded onto the reviewers who still know how to reason with human intelligence.

As a litmus test, look at a PR's added/removed LoC delta. LLM-written ones are almost entirely additive, whereas good senior engineers often remove as much code as they add.
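
For the curious, that litmus test is easy to automate. Here is a minimal sketch in Python — assuming a local checkout with `git` on PATH and a `main` base branch (both assumptions; adjust to taste), and treating the ratio as a flag for closer review rather than a verdict:

    #!/usr/bin/env python3
    """Rough litmus test: additions vs. deletions for a branch.

    A heuristic, not a verdict: a heavily additive diff just flags
    a PR for closer human review.
    """
    import subprocess

    def loc_delta(base="main", branch="HEAD"):
        """Sum added/removed lines via `git diff --numstat`."""
        out = subprocess.check_output(
            ["git", "diff", "--numstat", f"{base}...{branch}"], text=True
        )
        added = removed = 0
        for line in out.splitlines():
            a, r, _path = line.split("\t", 2)
            if a != "-":  # numstat prints "-" for binary files; skip them
                added += int(a)
                removed += int(r)
        return added, removed

    if __name__ == "__main__":
        a, r = loc_delta()
        print(f"+{a} / -{r} (additive ratio {a / max(r, 1):.1f})")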

replies(14): >>45660176 #>>45660177 #>>45660521 #>>45661077 #>>45661716 #>>45661920 #>>45662128 #>>45662216 #>>45662752 #>>45663314 #>>45664245 #>>45672060 #>>45679145 #>>45683742 #
Etheryte ◴[] No.45660521[source]
In my opinion this is another case where people look at it as a technical problem when it's actually a people problem. If someone does it once, they get a stern message about it. If it happens twice, it gets rejected and sent to their manager. Regardless of how you authored a pull request, you are signing off on it with your name. If it's garbage, then you're responsible.
replies(8): >>45660554 #>>45661363 #>>45661709 #>>45661887 #>>45662382 #>>45662723 #>>45663123 #>>45664880 #
tyleo ◴[] No.45661363[source]
I agree and I’m surprised more people don’t get this. Bad behaviors aren’t suddenly okay because AI makes them easy.

If you are wasting time, you may be value-negative to a business. If you are value-negative over the long run, you should be let go.

We’re ultimately here to make money, not just pump out characters into text files.

replies(2): >>45664752 #>>45673570 #
1. jackblemming ◴[] No.45664752[source]
How do you know the net value add isn’t greater with the AI, even if it requires more code review comments (and angrier coworkers)?
replies(4): >>45666403 #>>45667495 #>>45671416 #>>45682115 #
2. y0eswddl ◴[] No.45666403[source]
All the recent studies (the ones constantly posted here) say so.
replies(1): >>45667013 #
3. CuriouslyC ◴[] No.45667013[source]
The Stanford study showed mixed results, and you can stratify the data to show that AI failures are driven by process differences as much as by circumstantial ones.

The MIT study just has a whole host of problems, but ultimately it boils down to this: giving your engineers Cursor and telling them to be 10x doesn't work. Beyond each individual engineer being skilled at using AI, you have to adjust your process for it. Code review is a perfect example; until you optimize the review process to reduce human friction, AI tools are going to be massively bottlenecked.

4. tyleo ◴[] No.45667495[source]
Because we know what the value is without AI. I’ve been in the industry for about ten years, and others have been in it longer than I have. Folks have enough experience to know what good looks like and what bad looks like.
5. dmurvihill ◴[] No.45671416[source]
You have it exactly backwards. If you are consuming my time with slop, it’s on you to prove there’s still a net benefit.
6. nucleardog ◴[] No.45682115[source]
In a scenario where I describe and assign work to someone, they paste that into an LLM, they send the LLM's changes to me to review, I review the LLM output, they paste my feedback back into the LLM, and they send the results back for me to review...

What value is that person adding? I can fire up Claude Code/Cursor/whatever myself and get the same result with less overhead. It's not a matter of "is AI valuable", it's a matter of "is this person adding value to the process". In the above case... no, none at all.