
192 points imasl42 | 9 comments
rsynnott ◴[] No.45311963[source]
This idea that you can get good results from a bad process as long as you have good quality control seems… dubious, to say the least. “Sure, it’ll produce endless broken nonsense, but as long as someone is checking, it’s fine.” This, generally, doesn’t really work. You see people _try_ it in industry a bit; have a process which produces a high rate of failures, catch them in QA, rework (the US car industry used to be notorious for this). I don’t know of any case where it has really worked out.

Imagine that your boss came to you, the tech lead of a small team, and said “okay, instead of having five competent people, your team will now have 25 complete idiots. We expect that their random flailing will sometimes produce stuff that kinda works, and it will be your job to review it all.” Now, you would, of course, think that your boss had gone crazy. No-one would expect this to produce good results. But somehow, stick ‘AI’ on this scenario, and a lot of people start to think “hey, maybe that could work.”

replies(21): >>45312004 #>>45312107 #>>45312114 #>>45312162 #>>45312253 #>>45312382 #>>45312761 #>>45312937 #>>45313024 #>>45313048 #>>45313151 #>>45313284 #>>45313721 #>>45316157 #>>45317467 #>>45317732 #>>45319692 #>>45321588 #>>45322932 #>>45326919 #>>45329123 #
Manfred ◴[] No.45312253[source]
Reviewing code from less experienced or unmotivated people is also very taxing, both cognitively and emotionally. The result never reaches a really high level of quality, because you just give up after four rounds of review on the same feature.
replies(3): >>45319724 #>>45324333 #>>45327142 #
1. EdwardDiego ◴[] No.45319724[source]
Except humans learn from your PR comments and from other interactions with more experienced people, so inexperienced devs eventually become experienced devs. LLMs are not so trainable.
replies(4): >>45325252 #>>45325670 #>>45330044 #>>45333523 #
2. shepherdjerred ◴[] No.45325252[source]
LLMs can learn if you provide rules in your repo, and update those rules as you identify the common mistakes the LLM makes.
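As a rough illustration, such a rules file is usually just a plain-text or markdown document the coding agent reads at the start of a session. The filename conventions vary by tool (AGENTS.md, CLAUDE.md, and .cursorrules are common), and everything below is a hypothetical sketch, not the format of any particular product:

```markdown
# Project rules for the coding agent

## Conventions
- Use the repo's existing logger (`internal/log`); never add `print` calls.
- All new public functions need a doc comment and a unit test.

## Known mistakes to avoid (added after past reviews)
- Do not hand-roll retry loops; use the shared `retry` helper.
- Database access goes through the repository layer, never raw SQL in handlers.
```

The "known mistakes" section is the part the commenter is describing: each time a review catches a recurring error, a human adds a rule, so the correction persists across sessions even though the model's weights never change.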
3. org3 ◴[] No.45325670[source]
Some people say we're near the end of pre-training scaling, and RLHF etc is going to be more important in the future. I'm interested in trying out systems like https://github.com/OpenPipe/ART to be able to train agents to work on a particular codebase and learn from my development logs and previous interactions with agents.
4. 300hoogen ◴[] No.45330044[source]
retarded take
replies(3): >>45330467 #>>45344314 #>>45371215 #
5. dayjaby ◴[] No.45330467[source]
Can you elaborate, or do you call it a day after insulting someone?
6. krageon ◴[] No.45333523[source]
If they're unmotivated enough to not get there after four review rounds for a junior-appropriate feature, they're not going to get better. It's a little impolite to say, but if you spend any significant amount of time coaching juniors you'll encounter exactly what I'm talking about.
replies(1): >>45344319 #
7. EdwardDiego ◴[] No.45344314[source]
Thanks for the insightful reply that showed me where I went astray.
8. EdwardDiego ◴[] No.45344319[source]
I have spent plenty, rest assured.
9. player1234 ◴[] No.45371215[source]
Agreed