
270 points imasl42 | 4 comments
strix_varius ◴[] No.45659881[source]
To me, the most salient point was this:

> Code reviewing coworkers are rapidly losing their minds as they come to the crushing realization that they are now the first layer of quality control instead of one of the last. Asked to review; forced to pick apart. Calling out freshly added functions that are never called, hallucinated library additions, and obvious runtime or compilation errors. All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”

LLMs may have turned Brandolini's law ("The amount of energy needed to refute bullshit is an order of magnitude larger than to produce it") into an understatement. When an inexperienced, or just inexpert, developer can generate thousands of lines of code in minutes, the responsibility for keeping the system correct and sane is offloaded onto the reviewers who still reason with human intelligence.

As a litmus test, look at a PR's added/removed LoC delta. LLM-written ones are almost entirely additive, whereas good senior engineers often remove as much code as they add.
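A rough sketch of that litmus test, assuming a plain git checkout: the snippet below sums insertions and deletions from `git diff --numstat` between a branch and its merge base. The branch names ("main", "HEAD") and the 0.9 "almost entirely additive" cut-off are illustrative assumptions, not anything standard.

    import subprocess

    def loc_delta(base: str = "main", head: str = "HEAD") -> tuple[int, int]:
        """Return (lines added, lines removed) for head relative to base."""
        out = subprocess.run(
            ["git", "diff", "--numstat", f"{base}...{head}"],
            capture_output=True, text=True, check=True,
        ).stdout
        added = removed = 0
        for line in out.splitlines():
            a, r, _path = line.split("\t", 2)
            if a != "-":          # "-" marks binary files; skip them
                added += int(a)
                removed += int(r)
        return added, removed

    if __name__ == "__main__":
        added, removed = loc_delta("main", "HEAD")
        ratio = added / max(added + removed, 1)
        print(f"+{added} / -{removed}  ({ratio:.0%} additive)")
        if ratio > 0.9:           # arbitrary threshold for "almost entirely additive"
            print("Heads up: this change is almost purely additive.")

It's a heuristic, not a rule: a greenfield feature will legitimately skew additive, but on a mature codebase a long run of 95%+-additive PRs is worth a closer look.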

replies(14): >>45660176 #>>45660177 #>>45660521 #>>45661077 #>>45661716 #>>45661920 #>>45662128 #>>45662216 #>>45662752 #>>45663314 #>>45664245 #>>45672060 #>>45679145 #>>45683742 #
Etheryte ◴[] No.45660521[source]
In my opinion this is another case where people treat it as a technical problem when it's actually a people problem. If someone submits a PR like that once, they get a stern message about it. If it happens twice, the PR gets rejected and their manager gets looped in. Regardless of how you authored a pull request, you are signing off on it with your name. If it's garbage, you're responsible.
replies(8): >>45660554 #>>45661363 #>>45661709 #>>45661887 #>>45662382 #>>45662723 #>>45663123 #>>45664880 #
1. crazygringo ◴[] No.45662723[source]
This, a million times. If you do this three times, that's grounds for firing. You're literally not doing your job and lying about it.

It's bizarre to me that people want to blame LLMs instead of the employees themselves.

(With open source projects and slop pull requests, it's another story of course.)

replies(3): >>45670823 #>>45671522 #>>45687393 #
2. ryandrake ◴[] No.45670823[source]
Seriously. What companies are these where “Oops, Claude wrote that and I have no idea what it does” is even remotely acceptable? None that I have ever worked at. This kind of behavior would be corrected immediately, at any level, from the eng TL up to the director if it had to go that far.
3. dmurvihill ◴[] No.45671522[source]
Maybe that’s true of each individual case, but we still have the systemic problem of how these AI tools interact with human psychology. They tend to turn off the user’s brain, even in people who know better.
4. int_19h ◴[] No.45687393[source]
What happens in practice is that teams are made "leaner" by laying people off (and, as seen in the Microsoft layoffs, the people targeted are often veterans, because of their large salaries, which means the team loses a lot of deep knowledge of the code in the process). The remaining developers are then told to handle the same amount of work as before, since "AI makes you more productive". I can't blame them for saying, "fuck you then, I'm going to do the bare minimum". The blame is entirely on management, and every single big tech company is complicit.