
410 points jjulius | 5 comments
graeme ◴[] No.41884966[source]
Will the review assess overall mortality of the vehicles compared to similar cars, and overall mortality while FSD is in use?
replies(7): >>41884993 #>>41885028 #>>41885048 #>>41885090 #>>41885159 #>>41885312 #>>41885407 #
1. bbor ◴[] No.41885159[source]
I get where you’re coming from and would also be interested to see, but based on the clips I’ve seen that wouldn’t be enough in this case. Of course the bias is inherent in what people choose to post (not normal and not terrible/litigable), but I think there’s enough at this point to perceive a stable pattern.

Long story short, my argument is this: it doesn’t matter if you reduce serious crashes from 100PPM to 50PPM if 25PPM of those are new crash sources, speaking from a psychological and sociological perspective. Everyone should know that driving drunk, driving distracted, driving in bad weather, and in rural areas at dawn or dusk is dangerous, and takes appropriate precautions. But what do you do if your car might crash because someone ahead flashed their high beams, or because the sun was reflecting off another car in an unusual way? Could you really load up your kids and take your hands off the wheel knowing that at any moment you might hit an unexpected edge condition?

Self driving cars are (presumably!) hard enough to trust already, since you’re giving away so much control. There’s a reason planes have to be way more than “better, statistically speaking” — we expect them to be nearly flawless, safety-wise.

replies(1): >>41885169 #
2. dragonwriter ◴[] No.41885169[source]
> But what do you do if your car might crash because someone ahead flashed their high beams, or because the sun was reflecting off another car in an unusual way?

These are -- like drunk driving, driving distracted, and driving in bad weather -- things that actually do cause accidents with human drivers.

replies(3): >>41885289 #>>41885311 #>>41885316 #
3. hunter-gatherer ◴[] No.41885289[source]
The point is the taking-precautions part of the quote that you left out. The other day I was taking my kid to school, and when we turned east the sun was in my eyes and I couldn't see anything, so I pulled over as fast as I could and changed my route. Had I chosen to press forward and been in an accident, it would have been explainable (albeit still unfortunate and often unnecessary!). However, if I'm under the impression that my robot car can handle such circumstances because it does most of the time, and then it glitches, that is harder to explain.
4. dfxm12 ◴[] No.41885311[source]
This language is a bit of a sticking point for me. If you're drunk driving or driving distracted, there's no "accident". You're intentionally doing something wrong and committing a crime.
5. paulryanrogers ◴[] No.41885316[source]
Indeed, yet humans can anticipate such things and rely on their experience to reason about what's happening and how to react. Like slowing down, shifting lanes, or just moving one's head for a different perspective. A Tesla with only two cameras ("because that's all humans need") is unlikely to provably match that performance for a long time.

Tesla could also change its software without telling the driver at any point.