
Waymos crash less than human drivers

(www.understandingai.org)
345 points | rbanffy | 1 comment
1. PaulRobinson No.43491798
One of the things I come back to time and again with people who push back on AI is this argument that computer systems need to be "flawless" and "perfect".

I normally ask what the error rate of the humans doing that task is. It's never 0%. So the questions become: can we beat that number? Can we iterate on it? Can we improve it in a logical way?
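To make "can we beat that number" concrete, here's a minimal sketch of the comparison: a one-sided two-proportion z-test on observed error counts. Every figure and name in it is invented for illustration, not drawn from any real deployment.

    from math import sqrt, erf

    def beats_baseline(system_errors, system_trials,
                       human_errors, human_trials, alpha=0.05):
        """One-sided two-proportion z-test: is the system's error
        rate credibly lower than the human baseline?"""
        p_sys = system_errors / system_trials
        p_hum = human_errors / human_trials
        # Pooled error rate under the null hypothesis that both are equal.
        pooled = (system_errors + human_errors) / (system_trials + human_trials)
        se = sqrt(pooled * (1 - pooled) * (1 / system_trials + 1 / human_trials))
        z = (p_hum - p_sys) / se
        # One-sided p-value via the standard normal CDF.
        p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))
        return p_value < alpha

    # Invented figures: humans err 40 times in 10,000 tasks, the system 20.
    print(beats_baseline(20, 10_000, 40, 10_000))  # True at these volumes;
    # with 10x fewer trials the same rates would not reach significance.

One practical point the sketch surfaces: at realistically low error rates you need a lot of trials before "better than the human" is statistically distinguishable from noise, which is exactly why this has to be a measured, iterated process rather than a one-off judgment.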

Humans have the benefit of learning by example: once something is explained, the error rate falls (for a while, until they forget). Can we show the same mechanism in AI?

Quite often people will look for "a system" to do a role. I talk about "a process". I'm interested in how we continuously improve that process, which is much easier with a computer that can be trained than with a human workforce.

This takes adjustment from leaders in organisations, because they have to realise that introducing an AI into a role isn't like Office or Photoshop, where you buy the thing, you have a license, and you're up and running. It's an investment in a sort of very cheap member of staff whose job is to help a very expensive member of staff improve performance, accuracy and consistency. Once you've proven that it's working at or above the bar set by the human, you get to scale for less money than scaling the humans.

In a lot of my meetings I use the framing "we're not trying to build a robot that automates everything, we're trying to give your highly skilled workforce better tools to get more done, more safely, more accurately, and for less money". Some people get it, some don't.

Waymo is doing this, but with a much higher level of automation, by removing drivers entirely: if it can beat humans on safety and consistency, and reduce the workforce to remote monitors (each watching even just two cars, and perhaps many more), it has massively changed the economics of private hire transport.
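A rough sketch of why that monitor ratio matters so much. The wages and ratios below are assumptions made up purely for illustration; the point is only that labour cost per vehicle-hour falls inversely with how many vehicles one worker can cover.

    # Back-of-the-envelope labour economics of monitoring vs driving.
    # All wages and ratios are invented for illustration.
    def labour_cost_per_vehicle_hour(hourly_wage, vehicles_per_worker):
        return hourly_wage / vehicles_per_worker

    print(labour_cost_per_vehicle_hour(25.0, 1))   # driver, 1 car each:    25.0
    print(labour_cost_per_vehicle_hour(30.0, 2))   # monitor, 2 cars each:  15.0
    print(labour_cost_per_vehicle_hour(30.0, 20))  # monitor, 20 cars each:  1.5

Even paying monitors more than drivers, covering two cars nearly halves the labour cost per vehicle-hour, and at twenty cars it's close to a rounding error.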

In the same way that I don't need to outrun the bear that's chasing us (I only need to outrun you), I don't need an AI system that is flawless; I just need one that's better than you.

And, as per some other comments here, I find it interesting that the "flaws" people demonstrate in these systems (walls painted to look like a road) would fool humans too.

Years ago, someone asked what would happen if they hailed a self-driving car to pick them up and then got on a train, to see if the car would try to follow them. They suggested this was a "hack" in the logic. But I wonder: would you do this to a human driver? What would you expect them to do? Follow you? Not follow you? Why are you trolling cars and their drivers?

The whole debate around this stuff just needs to grow up, frankly.