Waymos crash less than human drivers

(www.understandingai.org)
345 points | rbanffy
mjburgess
Waymos choose the routes, right?

The issue with self-driving is (1) how it generalises across novel environments without "highly-available route data" and provider-chosen routes; (2) how failures are correlated across machines.

In safe human driving, failures are uncorrelated and safety procedures generalise. We do not yet know whether deploying self-driving very widely will create conditions in which a few correlated incidents kill more people than were ever hypothetically saved.

Here, without any confidence intervals, we're told ~70 airbag-deployment incidents were avoided over 20 million miles. A bad update pushed to the whole fleet could easily eclipse that impact.
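
To make the correlation point concrete, here's a toy sketch with entirely made-up rates (none of this is Waymo's data). Both models below have the same expected incident count, but the correlated one concentrates its harm into rare, catastrophic windows.

    import numpy as np

    rng = np.random.default_rng(0)
    PERIODS = 100_000        # simulated deployment periods
    MEAN_INCIDENTS = 10      # made-up expected incidents per period

    # Model A: uncorrelated failures -> counts ~ Poisson(10).
    # You will essentially never see 100+ incidents in one period.
    indep = rng.poisson(MEAN_INCIDENTS, size=PERIODS)

    # Model B: correlated failures with the *same mean*: a bad fleet-wide
    # update ships 1% of the time and causes 1,000 incidents (0.01 * 1000 = 10).
    bad_update = rng.random(PERIODS) < 0.01
    corr = np.where(bad_update, 1_000, 0)

    print("means:", indep.mean(), corr.mean())         # both ~10
    print("worst period, uncorrelated:", indep.max())  # ~25
    print("worst period, correlated:  ", corr.max())   # 1000

Same average safety record, wildly different tail risk: that's the sense in which per-mile statistics understate the danger of correlated fleet failures.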

arghwhat
> The issue with self-driving is (1) how it generalises across novel environments

That's also an issue with humans though. I'd argue that traffic usually appears to flow because most of the drivers have taken a specific route daily for ages - i.e., they are not in a novel environment.

When someone drives a route for the first time, they'll be confused, make last-minute lane changes, slow down to try to make a turn, slow down more than others because they're not 100% clear on where they're supposed to go, line up for (and almost make) illegal turns, try to park in impossible places, etc.

Even when someone has driven a route a handful of times, they won't know or be ready for the problem spots and the places where other people might surprise them; they'll just know the overall direction.

(And when it is finally carved into their bones, to the point where they're placing themselves perfectly in traffic according to the flow and anticipating all the usual choke points and hazards, they'll get complacent.)

mjburgess
People have eyes, ears, a voice, hands, etc.

You've a very narrow definition of novel, which is based solely on incidental features of the environment.

For animals, a novel situation is one in which their learnt skills for adapting to the environment fail and they have to acquire new ones. In this sense, drivers are rarely in novel environments.

For statistical systems, novelty can be defined much more narrowly: simply the case where sensory data fails a similar-distribution test against historical data. This is vastly more common, since the statistical profile of historical cases, as measured in data, is narrow, whilst the range of situations a skill applies to is wide.
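
As a minimal sketch of what such a similar-distribution test might look like (a simple Mahalanobis-distance check over assumed sensor features; the Gaussian data and the threshold are illustrative, not anything a real system is known to use):

    import numpy as np

    rng = np.random.default_rng(1)

    # "Historical cases, as measured, in data": feature vectors from past drives.
    historical = rng.normal(loc=0.0, scale=1.0, size=(10_000, 8))
    mu = historical.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(historical, rowvar=False))

    def is_novel(x, threshold=5.0):
        """Flag a scene as novel if its Mahalanobis distance from the
        historical feature distribution exceeds a hand-picked threshold."""
        d = x - mu
        return float(np.sqrt(d @ cov_inv @ d)) > threshold

    print(is_novel(rng.normal(0.0, 1.0, size=8)))  # False: statistically familiar
    print(is_novel(rng.normal(4.0, 1.0, size=8)))  # True: fails the similarity test

Note how little the test knows: anything outside the measured feature distribution counts as "novel", even if a driver's existing skills would cover it trivially.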

An example definition of narrow/wide here: the number of situations needed to acquire safety across a class of similar environments is exponential for narrow systems and sublinear for wide ones. I.e., a person can adapt a skill from a single scenario, whereas a statistical system will require exponentially more data across the measures of that class of novel scenarios.
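
With invented constants, just to show the shape of that claim (the growth laws below are assumptions standing in for "narrow" and "wide", not derived from anything):

    # Narrow system: must see ~every combination of k variants per scenario
    # dimension, so required experience grows exponentially in d.
    def samples_narrow(d, k=10):
        return k ** d

    # Wide system, per the parent's claim: adapts an existing skill, needing
    # only a handful of exposures however many dimensions vary (sublinear).
    def samples_wide(d):
        return max(1, round(d ** 0.5))

    for d in (1, 2, 4, 8):
        print(d, samples_narrow(d), samples_wide(d))
    # 1 10 1 / 2 100 1 / 4 10000 2 / 8 100000000 3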

arghwhat
I have a very wide definition of novel - any exact environment you have not yet traversed. First time taking that right turn? Novel route.

Our eyes, ears, voice and hands are quite useless when operated consciously.