
Waymos crash less than human drivers

(www.understandingai.org)
345 points | rbanffy | 1 comment
mjburgess No.43487426
Waymos choose the routes, right?

The issues with self-driving are (1) how it generalises across novel environments without "highly-available route data" and provider-chosen routes, and (2) how failures are correlated across machines.

In safe human driving, failures are uncorrelated and safety procedures generalise. We do not yet know whether deploying self-driving very widely will lead to conditions in which, in a few incidents, more people are killed than were ever hypothetically saved.
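To make that difference concrete, here is a toy simulation (every number is invented, none of it is Waymo data): two failure processes with the same mean, one independent per vehicle-day and one driven by a rare fleet-wide fault, have very different tails.

# Toy illustration only: all rates and fleet sizes are invented, not Waymo data.
import numpy as np

rng = np.random.default_rng(0)
FLEET, DAYS, TRIALS = 1_000, 365, 10_000

# Independent failures: each vehicle-day fails with probability 1e-4.
indep = rng.binomial(FLEET * DAYS, 1e-4, size=TRIALS)

# Correlated failures: on any given day a fleet-wide fault (e.g. a bad update)
# occurs with probability 1e-3; when it does, 10% of the fleet fails that day.
fault_days = rng.binomial(DAYS, 1e-3, size=TRIALS)
corr = rng.binomial(fault_days * FLEET, 0.10)

for name, x in (("independent", indep), ("correlated", corr)):
    print(f"{name:>11}: mean={x.mean():5.1f}  p99={np.percentile(x, 99):5.0f}  max={x.max()}")

Both processes average roughly the same number of incidents per simulated year, but the correlated one concentrates them into rare, large clusters, which is exactly the failure mode uncorrelated human error does not have.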

Here, without any confidence intervals, we're told ~70 airbag-deployment crashes were avoided over 20 million miles. A bad update to the fleet will easily eclipse that impact.
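For scale, a back-of-envelope version of that comparison (the 70 and 20 million figures are the ones quoted above; the fleet size, time-to-rollback and defect rate of the hypothetical bad update are my own assumptions):

# Back-of-envelope only; the "bad update" figures are pure assumptions.
import math

avoided, miles = 70, 20_000_000                      # figures quoted above
print(f"avoided: ~1 airbag-deployment crash per {miles / avoided:,.0f} miles")

# Crude 95% interval, treating the count as Poisson (normal approximation).
half = 1.96 * math.sqrt(avoided)
print(f"approx. 95% CI on the count: {avoided - half:.0f} .. {avoided + half:.0f}")

# Hypothetical bad update: 2,000 vehicles drive 200 miles each before rollback,
# and the defect causes 1 serious incident per 4,000 miles while it is live.
bad_miles = 2_000 * 200
print(f"incidents from one such update: {bad_miles / 4_000:.0f}")

Whether a single bad update actually eclipses the avoided crashes depends entirely on the assumed fleet size, defect severity and time to rollback; the arithmetic above just shows how quickly the two quantities become comparable.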

jrussino No.43487477
I wonder if you can decrease the impact of (2) with a policy of phased rollouts for updates, i.e. you never update the whole fleet simultaneously; you update a small percentage first and confirm no significant anomalies are observed before distributing the update more widely.
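A sketch of what such a staged (canary-style) policy could look like; the stage fractions, soak period, anomaly threshold, and the Vehicle/telemetry stubs are all placeholders, not any vendor's actual process.

# Hypothetical staged-rollout policy; every threshold and stub here is made up.
import random

class Vehicle:
    def __init__(self, vid):
        self.vid, self.version = vid, "v1"
    def install(self, version):
        self.version = version
    def rollback(self):
        self.version = "v1"

def incident_rate(vehicles, days):
    # Stand-in for real telemetry: observed incidents per mile over the soak window.
    return random.uniform(0.8, 1.1) * 1e-6

def staged_rollout(version, fleet, baseline_rate,
                   stages=(0.01, 0.05, 0.25, 1.0), soak_days=7):
    deployed = set()
    for fraction in stages:
        # Widen the rollout to the next fraction of the fleet.
        for v in fleet[: int(len(fleet) * fraction)]:
            if v.vid not in deployed:
                v.install(version)
                deployed.add(v.vid)
        # Observe the updated vehicles before going any wider.
        observed = incident_rate([v for v in fleet if v.vid in deployed], soak_days)
        if observed > 1.2 * baseline_rate:      # arbitrary anomaly threshold
            for v in fleet:
                if v.vid in deployed:
                    v.rollback()
            return f"rolled back at the {fraction:.0%} stage"
    return "fully deployed"

fleet = [Vehicle(i) for i in range(2_000)]
print(staged_rollout("v2", fleet, baseline_rate=1e-6))

The point of the structure is simply that the blast radius of a defective update is bounded by the current stage rather than by the whole fleet.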
mjburgess No.43487632
One measure of robustness could be something like: the ability to resist correlation of failure states under environmental/internal shift. A measure of danger: the integral, over the relevant time horizon, of injury to the things we care about. And then "safety" is the combination: the system resists correlating failure states so as to keep the expected value of that injury low.
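Roughly, in notation invented for this sketch (I(t) is injury to the things we care about at time t, F_i the failure state of machine i, T the horizon):

% informal formalisation; the notation is invented here, nothing standard
\text{danger over } [0,T]:\quad D_T = \int_0^T I(t)\,\mathrm{d}t
\text{robustness}:\quad \operatorname{Corr}(F_i, F_j)\ \text{stays low under environmental/internal shift}
\text{safety}:\quad \text{robustness holds, and therefore } \mathbb{E}[D_T]\ \text{stays low}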

The problem with machines-following-rules is that they are trivially susceptible to violations of this kind of safety. No doubt there are mitigations and strategies for minimising risk, but it's not avoidable.

The danger in our risk assessment of machine systems is that we test them under non-adversarial conditions and observe safety; that observation is misleading, because once conditions shift they can quickly cause more injury than they have ever prevented.

This is why we worry, of course, about "fluoride in the water" (or vaccines, etc.) and other such population-wide systems... it is the same situation: a mass public-health programme has the same risk profile.