
Waymos crash less than human drivers

(www.understandingai.org)
345 points by rbanffy | 1 comment
mjburgess
Waymos choose the routes, right?

The issue with self-driving is (1) how it generalises across novel environments without "highly-available route data" and provider-chosen routes; (2) how failures are correlated across machines.

In safe human driving, failures are uncorrelated and safety procedures generalise. We do not yet know whether deploying self-driving very widely will create conditions in which a few incidents kill more people than the technology had ever hypothetically saved.

Here, without any confidence intervals, we're told ~70 airbag-deployment incidents were avoided over 20 million miles. A bad update pushed to the fleet could easily eclipse that benefit.

jrussino
I wonder if you can decrease the impact of (2) with a policy of phased rollouts for updates, i.e. you never update the whole fleet simultaneously; you update a small percentage first and confirm no significant anomalies are observed before distributing the update more widely.
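A minimal sketch of that staged-rollout idea, assuming a hypothetical fleet model and a stand-in `healthy` telemetry check (this is generic canary-deployment logic, not Waymo's actual process):

```python
def phased_rollout(fleet, new_version, stages=(0.01, 0.10, 0.50, 1.0),
                   healthy=lambda cars: True):
    """Update growing fractions of the fleet; if any stage looks
    anomalous, roll every updated vehicle back and stop."""
    updated = []
    for frac in stages:
        target = int(len(fleet) * frac)
        for car in fleet[len(updated):target]:
            car["prev"] = car["version"]     # remember version for rollback
            car["version"] = new_version
            updated.append(car)
        if not healthy(updated):             # e.g. compare disengagement rates
            for car in updated:
                car["version"] = car["prev"]
            return False
    return True

fleet = [{"version": "1.0"} for _ in range(1000)]
phased_rollout(fleet, "1.1")   # all 1000 cars end up on "1.1"
```

The key property is that a defect surfaces while only 1% of the fleet carries it, bounding how correlated the failure can become.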
timschmidt
Ideally you'd selectively enable the updated policy on unoccupied trips (on the way to pick someone up, or returning after a drop-off), so that errors, and the resulting crashes, can be caught while the car is empty.
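That gating rule can be sketched in a few lines; the leg names and version labels below are hypothetical, purely to illustrate routing the candidate build onto passenger-free legs only:

```python
def policy_for_leg(leg, stable="v1", candidate="v2"):
    """Choose which driving policy runs a given trip leg.
    The candidate (updated) build is exercised only while the cabin is
    empty: deadheading to a pickup, returning after a drop-off, or
    repositioning. Occupied legs stay on the proven build."""
    unoccupied_legs = {"to_pickup", "return_after_dropoff", "reposition"}
    return candidate if leg in unoccupied_legs else stable

policy_for_leg("to_pickup")       # "v2": empty car, safe to test
policy_for_leg("with_passenger")  # "v1": rider aboard, proven build only
```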