rsynnott ◴[] No.45311963[source]
This idea that you can get good results from a bad process as long as you have good quality control seems… dubious, to say the least. “Sure, it’ll produce endless broken nonsense, but as long as someone is checking, it’s fine.” This, generally, doesn’t really work. You see people _try_ it in industry a bit: run a process which produces a high rate of failures, catch them in QA, rework (the US car industry used to be notorious for this). I don’t know of any case where it has really worked out.

Imagine that your boss came to you, the tech lead of a small team, and said “okay, instead of having five competent people, your team will now have 25 complete idiots. We expect that their random flailing will sometimes produce stuff that kinda works, and it will be your job to review it all.” Now, you would, of course, think that your boss had gone crazy. No-one would expect this to produce good results. But somehow, stick ‘AI’ on this scenario, and a lot of people start to think “hey, maybe that could work.”

replies(21): >>45312004 #>>45312107 #>>45312114 #>>45312162 #>>45312253 #>>45312382 #>>45312761 #>>45312937 #>>45313024 #>>45313048 #>>45313151 #>>45313284 #>>45313721 #>>45316157 #>>45317467 #>>45317732 #>>45319692 #>>45321588 #>>45322932 #>>45326919 #>>45329123 #
1. HarHarVeryFunny ◴[] No.45313048[source]
Right, this is the exact opposite of the best practices that W. Edwards Deming helped develop in Japan, then brought to the West.

Quality needs to come from the process, not the people.

Choosing to use a process known to be flawed, then hoping that people will catch the mistakes, doesn't seem like a great idea if the goal is quality.

The trouble is that LLMs can be used in many ways, but only some of those ways play to their strengths. Management have fantasies of using AI for everything, having either failed to understand what it is good for, or failed to learn the lessons of Japan/Deming.

replies(5): >>45313660 #>>45314264 #>>45317274 #>>45322084 #>>45329363 #
2. thunky ◴[] No.45313660[source]
> Choosing to use a process known to be flawed, then hoping that people will catch the mistakes, doesn't seem like a great idea if the goal is quality.

You're also describing the software development process prior to LLMs. Otherwise code reviews wouldn't exist.

replies(4): >>45313741 #>>45313772 #>>45314727 #>>45316771 #
3. ◴[] No.45313741[source]
4. HarHarVeryFunny ◴[] No.45313772[source]
Sure - software development is complex, but there seems to be a general attempt over time to improve the process and develop languages, frameworks and practices that remove the sources of human error.

Use of AI seems to be a regression in this regard, at least as currently used - "look ma, no hands! I've just vibe coded an autopilot". The current focus seems to be on productivity - how many more lines of code or vibe-coded projects you can churn out - maybe because AI is still basically a novelty that people are learning how to use.

If AI is to be used productively towards achieving business goals then the focus is going to need to mature and change to things like quality, safety, etc.

5. giovannibonetti ◴[] No.45314264[source]
> Quality needs to come from the process, not the people.

Not sure which Japanese school of management you're following, but I think Toyota-style goes against that. The process gives more autonomy to workers than, say, Ford-style, where each tiny part of the process is pre-defined.

I got the impression that Toyota-style was considered to bring better quality to the product, even though it gives people more autonomy.

replies(1): >>45314543 #
6. HarHarVeryFunny ◴[] No.45314543[source]
In an ideal world all employees would be top notch, on their game every day, never making mistakes, but the real world isn't like that. If you want repeatable quality then it needs to be baked into the process.

It's a bit like Warren Buffett saying he only wants to invest in companies that could be run by an idiot, because one day they will be.

W. Edwards Deming actually worked with both Toyota and Ford, perhaps more foundationally at Toyota, bringing his process-based quality ideas to both. Toyota's management style is built around continuous process improvement, combined with the employee empowerment that you refer to.

7. rsynnott ◴[] No.45314727[source]
Code reviews are useful, but I think everyone would admit that they are not _perfect_.
8. Jensson ◴[] No.45316771[source]
People have built complex, working, mostly bug-free products without code reviews, so humans are not that flawed.

With humans and code reviews, two humans looked at the code. With an LLM plus a human review of the LLM's output, only one human looked at it, so it's not the same. LLMs are still far from as reliable as humans - otherwise you could just tell the LLM to do the code reviews too, and it would build the entire complex product itself.

replies(1): >>45318276 #
9. GarnetFloride ◴[] No.45317274[source]
Oh man, that's what I've been smelling with all this. It's the Red Bead Experiment, all over again. https://www.youtube.com/watch?v=ckBfbvOXDvU
10. CuriouslyC ◴[] No.45318276{3}[source]
People have built complex bug-free software without __formal__ code review. It's very rare to write complex bug-free software without at least __informal__ code review, and when it happens it's luck, not skill.
replies(1): >>45329438 #
11. stockresearcher ◴[] No.45322084[source]
> Deming helped develop in Japan

Deming’s process was about how to operate a business in a capital-intensive industry when you don’t have a lot of capital (with market-acceptable speed and quality). That you could continue to push it and raise quality as you increased the amount of capital you had was a side-effect, and the various Japanese automakers demonstrated widely different commitments to it.

And I’m sure you know that he started formulating his ideas during the Great Depression and refined them while working on defense manufacturing in the US during WWII.

12. overfeed ◴[] No.45329363[source]
> Management have fantasies of using AI for everything, having either failed to understand what it is good for, or failed to learn the lessons of Japan/Deming.

Third option: they want to automate all jobs before the competition does. Think of it as AWS, but for labor.

13. overfeed ◴[] No.45329438{4}[source]
Can't have a code review if you're coding solo[0], unless we are redefining the meaning of "code review" to the point of meaninglessness by including going over one's own code.

0. The dawn of video games had many titles with one person responsible for all the programming. This remains the case for many indie games and small software apps and services. It's a skill that requires expertise and/or dedication.