192 points by imasl42

rsynnott No.45311963
This idea that you can get good results from a bad process as long as you have good quality control seems… dubious, to say the least. “Sure, it’ll produce endless broken nonsense, but as long as someone is checking, it’s fine.” This generally doesn’t work. You see people _try_ it in industry a bit: run a process that produces a high rate of failures, catch the failures in QA, rework (the US car industry used to be notorious for this). I don’t know of any case where it has really worked out.
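(As a toy back-of-envelope model, with made-up numbers rather than anything measured: if every attempt is reviewed, and defective attempts are reworked and re-reviewed until they pass, the expected cost per shipped unit climbs fast with the defect rate.)

    # Toy model (all rates hypothetical): expected cost per good unit
    # for a build -> QA -> rework loop, where review is paid on every pass.
    def cost_per_good_unit(defect_rate, build=1.0, review=0.3, rework=0.8):
        # Failed rounds before a pass are geometric: d / (1 - d) on average.
        expected_failures = defect_rate / (1.0 - defect_rate)
        return build + review + expected_failures * (rework + review)

    for rate in (0.05, 0.30, 0.60):
        print(f"defect rate {rate:.0%}: cost per good unit ~ {cost_per_good_unit(rate):.2f}")

At a 5% defect rate the rework loop is noise; at 60% you pay more than double for every unit you ship, before counting whatever QA misses.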

Imagine that your boss came to you, the tech lead of a small team, and said “okay, instead of having five competent people, your team will now have 25 complete idiots. We expect that their random flailing will sometimes produce stuff that kinda works, and it will be your job to review it all.” Now, you would, of course, think that your boss had gone crazy. No-one would expect this to produce good results. But somehow, stick ‘AI’ on this scenario, and a lot of people start to think “hey, maybe that could work.”

jvanderbot No.45313151
What happens is a kind of feeling of having developed a meta-skill. It's tempting to believe the scope of what you can solve has expanded once you've assessed yourself as "good" with AI.

It's the same with any "general" tech. I've seen it since genetic algorithms were all the rage: everyone reaches for the most general tool, then assumes everything that tool might be used for is now a problem or domain they're an expert in, with zero context in that domain. AI is this a hundred times over, plus one layer more meta, since you can optimize over approaches with zero context.

CuriouslyC No.45318345
That's an oversimplification. AI can genuinely expand the scope of things you can do. How it does this is a bit particular though, and bears paying attention to.

Normally, if you want to achieve some goal, there is a whole pile of tasks you need to be able to complete to achieve it. If you don't have the ability to complete any one of those tasks, you will be unable to complete the goal, even if you're easily able to accomplish all the other tasks involved.

AI raises your capability floor. It isn't very effective at letting you accomplish things that are meaningfully outside your capability/comprehension, but if there are straightforward knowledge/process blockers that don't involve deeper intuition, it smooths those right out.
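(A minimal sketch of why the floor matters, with probabilities invented purely for illustration: if the goal requires succeeding at every prerequisite task, the overall odds behave like a product, so the weakest task dominates.)

    # Toy model (numbers made up): a goal succeeds only if every
    # prerequisite task succeeds, so the odds multiply and the
    # weakest link dominates.
    from math import prod

    skills = [0.95, 0.90, 0.95, 0.20]   # one task far outside your wheelhouse
    print(f"unaided:       {prod(skills):.2f}")

    floored = [max(p, 0.70) for p in skills]  # AI smoothing routine blockers
    print(f"floor at 0.70: {prod(floored):.2f}")

Raising the one weak task from 0.20 to 0.70 more than triples the odds of finishing, which is the sense in which raising the floor beats raising the ceiling.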

monkeyelite No.45321604
> If you don't have the ability to complete any one of those tasks, you will be unable to complete the goal

Nothing has changed. Few projects start with you knowing all the answers. Just as AI can help you learn, you can learn from books, colleagues, and trial and error for the tasks you don't yet know.

CuriouslyC No.45322154
I can say from first-hand experience that something has absolutely changed.

Before AI, if I had the knowledge/skill to do something at the large scale, but there were a bunch of minute/mundane details I had to figure out before solving the hard problems, I'd just lose steam from the boredom of it and go do something else. Now I delegate that stuff to AI. It isn't that I couldn't have learned how to do it; it's that I wouldn't have, because it wouldn't be rewarding enough.

monkeyelite No.45322483
That’s great: you personally have found a tool that helps you overcome unfamiliar problems. Other people have other methods for doing that. Maybe AI makes that more accessible in general.