
270 points imasl42 | 1 comment | source
strix_varius ◴[] No.45659881[source]
To me, the most salient point was this:

> Code reviewing coworkers are rapidly losing their minds as they come to the crushing realization that they are now the first layer of quality control instead of one of the last. Asked to review; forced to pick apart. Calling out freshly added functions that are never called, hallucinated library additions, and obvious runtime or compilation errors. All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”

LLMs have made Brandolini's law ("The amount of energy needed to refute bullshit is an order of magnitude larger than to produce it") look like an understatement. When an inexperienced or just inexpert developer can generate thousands of lines of code in minutes, the responsibility for keeping the system correct and sane gets offloaded onto the reviewers who still know how to reason with human intelligence.

As a litmus test, look at a PR's added/removed LoC delta. LLM-written ones are almost entirely additive, whereas good senior engineers often remove as much code as they add.
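
A rough sketch of that litmus test in Python, driven by plain `git diff --numstat` output. The base branch name `main` and the 0.9 "almost entirely additive" cutoff are placeholders, not anything the tooling prescribes:

    import subprocess

    def additive_ratio(base="main", head="HEAD"):
        """Share of a diff's churn that is added lines, per git numstat."""
        out = subprocess.run(
            ["git", "diff", "--numstat", f"{base}...{head}"],
            capture_output=True, text=True, check=True,
        ).stdout
        added = removed = 0
        for line in out.splitlines():
            a, r, _path = line.split("\t", 2)
            if a == "-":  # binary files report "-", skip them
                continue
            added += int(a)
            removed += int(r)
        total = added + removed
        return added / total if total else 0.0

    if __name__ == "__main__":
        ratio = additive_ratio()
        print(f"additive ratio: {ratio:.0%}")
        if ratio > 0.9:  # arbitrary threshold for "almost entirely additive"
            print("smells like generated code; review extra carefully")

Renames and moved code also show up as adds plus removes, so treat the number as a heuristic, not a verdict.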

replies(14): >>45660176 #>>45660177 #>>45660521 #>>45661077 #>>45661716 #>>45661920 #>>45662128 #>>45662216 #>>45662752 #>>45663314 #>>45664245 #>>45672060 #>>45679145 #>>45683742 #
1. CjHuber ◴[] No.45660176[source]
I'd say it depends on how coding assistants are used. When they're on autopilot I'd agree, as they don't really take the time to reflect on the work they've done before moving on to the next feature in the spec. In a collaborative process it's of course different, since you're pointing out things you want implemented another way. But I get your point: most PRs you'd flag as AI-generated slop are the ones where someone just ran the assistant on autopilot, was somewhat satisfied with the outcome, and treated the resulting code as a black box.