
410 points by jjulius | 1 comment
AlchemistCamp No.41889077
The interesting question is how good self-driving has to be before people tolerate it.

It's clear that having half the casualty rate per distance traveled of the median human driver isn't acceptable. How about a quarter? Or a tenth? Accidents caused by human drivers are one of the largest causes of injury and death, but they're not newsworthy the way an accident involving automated driving is. It's all too easy to see a potential future where many people die needlessly because technology that could save lives is regulated into a greatly reduced role.
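As a rough back-of-the-envelope (illustrative figures only: roughly 40,000 US road deaths per year is a commonly cited number, not data from this thread), here is what those thresholds would mean in absolute terms if all driving shifted to an automated system:

    # Illustrative sketch: annual deaths if automation ran at a given
    # fraction of the human casualty rate. Baseline is a rough,
    # commonly cited US figure, used only to show the scale at stake.
    HUMAN_DEATHS_PER_YEAR = 40_000
    for fraction in (0.5, 0.25, 0.1):
        automated = HUMAN_DEATHS_PER_YEAR * fraction
        print(f"{fraction:>4.0%} of the human rate -> ~{automated:,.0f} deaths/yr "
              f"(~{HUMAN_DEATHS_PER_YEAR - automated:,.0f} averted)")

Even the "unacceptable" halving corresponds to tens of thousands of deaths averted per year at that scale, which is the crux of the regulatory question.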

replies(20): >>41889114 #>>41889120 #>>41889122 #>>41889128 #>>41889176 #>>41889205 #>>41889210 #>>41889249 #>>41889307 #>>41889331 #>>41889686 #>>41889898 #>>41890057 #>>41890101 #>>41890451 #>>41893035 #>>41894281 #>>41894476 #>>41895039 #>>41900280 #
Terr_ No.41889898
> It's clear that having half the casualty rate per distance traveled of the median human driver isn't acceptable.

Even if we optimistically assume no "gotchas" in the statistics [0], distilling performance down to a casualty/injury/accident rate can still be dangerously reductive when the systems have a different distribution of failure modes, ones which do or don't mesh with our other systems and defenses.

A quick thought experiment to prove the point: imagine a system which, compared to human drivers, has only half the rate of accidents... but many of those happen because it unpredictably decides to jump the sidewalk curb and kill a pedestrian.

The raw numbers are encouraging, but they represent a risk profile that clashes horribly with our other systems of road design and car design, and with the kinds of incidents humans expect and are capable of preventing or recovering from.
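A minimal sketch of that thought experiment, with entirely made-up rates and severity weights, just to show how a lower headline crash rate can still carry more expected harm once you weight by failure mode:

    # Toy numbers only: each entry is (crashes per mile, harm per crash).
    # The "robot" has half the crash rate but concentrates its failures
    # in a severe, unpredictable mode (the curb-jump from the example).
    human = {"fender_bender": (4.0e-6, 0.01), "serious": (1.0e-6, 1.0)}
    robot = {"fender_bender": (1.5e-6, 0.01), "curb_jump": (1.0e-6, 5.0)}

    def expected_harm(profile):
        return sum(rate * harm for rate, harm in profile.values())

    print("crashes/mile  human:", sum(r for r, _ in human.values()))
    print("crashes/mile  robot:", sum(r for r, _ in robot.values()))
    print("expected harm human:", expected_harm(human))
    print("expected harm robot:", expected_harm(robot))

With these invented weights the robot has half the crashes per mile but several times the expected harm per mile, which is exactly the gap a single headline rate hides.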

[0] Ex: Automation is only being used on certain subsets of all travel, ones with "easier" miles or circumstances than the whole gamut a human would handle.
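A small sketch of that gotcha, with invented per-road-type rates: if automation is engaged mostly on easy highway miles, its raw crashes-per-mile can look far better than humans' even when it is no better on any single road type.

    # Invented rates (crashes per million miles) and mileage shares.
    crash_rate = {
        "highway": {"human": 1.0, "automated": 1.0},
        "city":    {"human": 5.0, "automated": 5.0},
    }
    miles = {
        "human":     {"highway": 0.5, "city": 0.5},
        "automated": {"highway": 0.9, "city": 0.1},
    }

    for who in ("human", "automated"):
        overall = sum(miles[who][road] * crash_rate[road][who]
                      for road in crash_rate)
        print(who, "overall crashes per million miles:", overall)

Here the automated fleet reports 1.4 versus the human 3.0 purely because of where it drives, not how well it drives.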

replies(1): >>41894403 #
1. kelnos No.41894403
Re: gotchas: an even easier one is that the Tesla FSD statistics don't count the cases where the car does something unsafe and the driver intervenes and takes control, averting a crash.

How often does that happen? We have no idea. Tesla can certainly tell when a driver intervenes, but they can't count every occurrence as safety-related, because a driver might take control for all sorts of reasons.

This is why we can make stronger statements about the safety of Waymo. Their software was tested only by people trained and paid to test it, who also recorded every time they had to intervene for safety reasons, even when there was no crash. That's a metric they could track and improve.
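A minimal sketch of what such a metric looks like, assuming a hypothetical intervention log where each takeover is labeled with a reason (the schema and labels here are invented, not Waymo's or Tesla's actual data format):

    # Miles per safety-critical intervention, computed from a labeled log.
    interventions = [
        {"mile": 1_200, "reason": "safety"},   # driver averted a bad maneuver
        {"mile": 4_800, "reason": "comfort"},  # e.g. hesitant merge, no danger
        {"mile": 9_500, "reason": "safety"},
    ]
    total_miles = 10_000

    safety_critical = [i for i in interventions if i["reason"] == "safety"]
    print("miles per safety-critical intervention:",
          total_miles / len(safety_critical))

    # Without the reason labels (the Tesla case above), all you can
    # compute is miles per takeover, which mixes genuine danger with
    # driver preference.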