AlchemistCamp:
The interesting question is how good self-driving has to be before people tolerate it.

It's clear that a casualty rate per distance traveled that's half the median human driver's isn't acceptable. How about a quarter? Or a tenth? Accidents caused by human drivers are among the largest causes of injury and death, yet they're not newsworthy the way an accident involving automated driving is. It's all too easy to see a future where many people die needlessly because technology that could save lives has been regulated into a greatly reduced role.
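
To make the stakes concrete, here's a rough back-of-the-envelope sketch in Python. The baseline rate and annual mileage are ballpark, illustrative figures (very roughly US-scale), not official statistics:

    # Illustrative only: placeholder numbers, not official crash statistics.
    HUMAN_FATALITIES_PER_MILE = 1.3e-8   # assumed baseline fatality rate
    TOTAL_MILES_PER_YEAR = 3.2e12        # assumed annual vehicle-miles

    for fraction in (0.5, 0.25, 0.1):
        automated_rate = HUMAN_FATALITIES_PER_MILE * fraction
        avoided = (HUMAN_FATALITIES_PER_MILE - automated_rate) * TOTAL_MILES_PER_YEAR
        print(f"at {fraction:.0%} of the human rate: ~{avoided:,.0f} fewer deaths per year")

Even at "only" half the human rate, the needless deaths from keeping such a system off the road would add up quickly.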

kelnos:
If Tesla's FSD were actually self-driving, maybe half the casualty rate of the median human driver would be fine.

But it's not. It requires constant supervision, and drivers sometimes have to take control (without the system disengaging on its own) to stop it from doing something unsafe.

If we had stats for what the casualty rate would be if drivers never took over unless the car signaled it was going to disengage, I suspect that rate would be much worse than the median human driver's. But we don't have those stats, so we shouldn't trust the system until we do.
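
Here's the kind of estimate I mean, as a minimal sketch. Every number in it is a placeholder (miles, intervention counts, and the assumed fraction of interventions that would have become crashes), since the real data isn't public:

    # Hypothetical sketch: estimate an unsupervised casualty rate from
    # supervised-driving data. All inputs below are placeholders.
    miles_driven = 1_000_000         # supervised FSD miles (assumed)
    observed_casualties = 0          # crashes despite supervision (assumed)
    safety_interventions = 400       # driver takeovers to avoid danger (assumed)
    p_crash_without_takeover = 0.05  # assumed share that would have crashed

    estimated_casualties = observed_casualties + safety_interventions * p_crash_without_takeover
    print(f"estimated unsupervised rate: {estimated_casualties / miles_driven:.2e} per mile")

The point isn't the particular numbers; it's that without intervention data like this, a published crash rate tells you about driver-plus-system, not about the system alone.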

This is why Waymo is safe and tolerated and Tesla FSD is not. Waymo test drivers record every time they have to take over control of the car for safety reasons. That was a metric they had to track and improve, or it would have been impossible to offer people rides without someone in the driver's seat.
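
The metric itself is trivial to compute once you log interventions. A minimal sketch with made-up log entries (the field names and values are mine, not Waymo's):

    # Miles per safety-critical takeover, from hypothetical drive logs.
    drives = [
        {"miles": 120.0, "safety_takeovers": 1},
        {"miles": 340.5, "safety_takeovers": 0},
        {"miles": 87.2,  "safety_takeovers": 2},
    ]

    total_miles = sum(d["miles"] for d in drives)
    total_takeovers = sum(d["safety_takeovers"] for d in drives)
    print(f"{total_miles / max(total_takeovers, 1):.1f} miles per safety takeover")

The hard part was never the arithmetic; it was committing to collect the data and treat the number as a gate for removing the safety driver.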