
192 points by imasl42 | 7 comments
rsynnott No.45311963
This idea that you can get good results from a bad process as long as you have good quality control seems… dubious, to say the least. “Sure, it’ll produce endless broken nonsense, but as long as someone is checking, it’s fine.” This, generally, doesn’t really work. You see people _try_ it in industry a bit: run a process that produces a high rate of failures, catch the failures in QA, and rework them (the US car industry used to be notorious for this). I don’t know of any case where it has really worked out.

Imagine that your boss came to you, the tech lead of a small team, and said “okay, instead of having five competent people, your team will now have 25 complete idiots. We expect that their random flailing will sometimes produce stuff that kinda works, and it will be your job to review it all.” Now, you would, of course, think that your boss had gone crazy. No-one would expect this to produce good results. But somehow, stick ‘AI’ on this scenario, and a lot of people start to think “hey, maybe that could work.”

HarHarVeryFunny No.45313048
Right, this is the exact opposite of the best practices that W. Edwards Deming helped develop in Japan and later brought to the West.

Quality needs to come from the process, not the people.

Choosing to use a process known to be flawed, then hoping that people will catch the mistakes, doesn't seem like a great idea if the goal is quality.

The trouble is that LLMs can be used in many ways, but only some of those ways play to their strengths. Management have fantasies of using AI for everything, having either failed to understand what it is good for, or failed to learn the lessons of Japan/Deming.

thunky No.45313660
> Choosing to use a process known to be flawed, then hoping that people will catch the mistakes, doesn't seem like a great idea if the goal is quality.

You're also describing the software development process prior to LLMs. Otherwise code reviews wouldn't exist.

HarHarVeryFunny No.45313772
Sure - software development is complex, but there seems to be a general attempt over time to improve the process and develop languages, frameworks and practices that remove the sources of human error.

Use of AI seems to be a regression in this regard, at least as currently used - "look ma, no hands! I've just vibe coded an autopilot". The current focus seems to be on productivity - how many more lines of code or vibe-coded projects you can churn out - maybe because AI is still basically a novelty that people are learning how to use.

If AI is to be used productively towards achieving business goals then the focus is going to need to mature and change to things like quality, safety, etc.

rsynnott No.45314727
Code reviews are useful, but I think everyone would admit that they are not _perfect_.
Jensson No.45316771
People have built complex, working, mostly bug-free products without code reviews, so humans are not that flawed.

With humans and code reviews, two humans have looked at the code. With an LLM writing the code and a human reviewing the output, only one human has looked at it, so it's not the same. LLMs are still far from being as reliable as humans, or you could just tell the LLM to do the code reviews as well and it would build the entire complex product itself.

CuriouslyC No.45318276
People have built complex, bug-free software without __formal__ code review. It's very rare to write complex, bug-free software without at least __informal__ code review, and when it happens it's luck, not skill.
overfeed No.45329438
You can't have a code review if you're coding solo[0], unless we redefine "code review" to the point of meaninglessness by including going over one's own code.

0. At the dawn of video games, many titles had a single person responsible for all the programming. This remains the case for many indie games and small software apps and services. Working solo is a skill that requires expertise and/or dedication.