
410 points jjulius | 92 comments
1. AlchemistCamp ◴[] No.41889077[source]
The interesting question is how good self-driving has to be before people tolerate it.

It's clear that having half the casualty rate per distance traveled of the median human driver isn't acceptable. How about a quarter? Or a tenth? Accidents caused by human drivers are one of the largest causes of injury and death, but they're not newsworthy the way an accident involving automated driving is. It's all too easy to see a potential future where many people die needlessly because technology that could save lives is regulated into a greatly reduced role.

replies(20): >>41889114 #>>41889120 #>>41889122 #>>41889128 #>>41889176 #>>41889205 #>>41889210 #>>41889249 #>>41889307 #>>41889331 #>>41889686 #>>41889898 #>>41890057 #>>41890101 #>>41890451 #>>41893035 #>>41894281 #>>41894476 #>>41895039 #>>41900280 #
2. iovrthoughtthis ◴[] No.41889114[source]
at least 10x better than a human
replies(1): >>41889127 #
3. triyambakam ◴[] No.41889120[source]
Hesitation around self-driving technology is not just about the raw accident rate, but the nature of the accidents. Self-driving failures often involve highly visible, preventable mistakes that seem avoidable by a human (e.g., failing to stop for an obvious obstacle). Humans find such incidents harder to tolerate because they can seem fundamentally different from human error.
replies(1): >>41889173 #
4. becquerel ◴[] No.41889122[source]
My dream is of a future where humans are banned from driving without special licenses.
replies(2): >>41889187 #>>41889200 #
5. becquerel ◴[] No.41889127[source]
I believe Waymo has already beaten this metric.
replies(1): >>41889189 #
6. Arainach ◴[] No.41889128[source]
This is about lying to the public and stoking false expectations for years.

If it's "fully self driving" Tesla should be liable for when its vehicles kill people. If it's not fully self driving and Tesla keeps using that name in all its marketing, regardless of any fine print, then Tesla should be liable for people acting as though their cars could FULLY self drive and be sued accordingly.

You don't get to lie just because you're allegedly safer than a human.

replies(4): >>41889149 #>>41889881 #>>41890885 #>>41893587 #
7. jeremyjh ◴[] No.41889149[source]
I think this is the answer: the company takes on full liability. If a Tesla is Fully Self Driving then Tesla is driving it. The insurance market will ensure that dodgy software/hardware developers exit the industry.
replies(4): >>41889184 #>>41890181 #>>41890189 #>>41890241 #
8. crazygringo ◴[] No.41889173[source]
Exactly -- it's not just the overall accident rate, but the rate per accident type.

Imagine if self-driving is 10x safer on freeways, but on the other hand is 3x more likely to run over your dog in the driveway.

Or it's 5x safer on city streets overall, but actually 2x worse in rain and ice.

We're fundamentally wired for loss aversion. So I'd say it's less about what the total improvement rate is, and more about whether it has categorizable scenarios where it's still worse than a human.

9. gambiting ◴[] No.41889176[source]
>> How about a quarter? Or a tenth?

The answer is zero. An airplane autopilot has increased the overall safety of airplanes by several orders of magnitude compared to human pilots, but literally no errors in its operation are tolerated, whether they are deadly or not. The exact same standard has to apply to cars or any automated machine for that matter. If there is any issue discovered in any car with this tech then it should be disabled worldwide until the root cause is found and eliminated.

>> It's all too easy to see a potential future where many people die needlessly because technology that could save lives is regulated into a greatly reduced role.

I really don't like this argument, because we could already prevent literally all automotive deaths tomorrow through existing technology and legislation and yet we are choosing not to do this for economic and social reasons.

replies(6): >>41889247 #>>41889255 #>>41890925 #>>41891202 #>>41891217 #>>41893571 #
10. blagie ◴[] No.41889184{3}[source]
This is very much what I would like to see.

The price of insurance is baked into the price of a car. If the car is as safe as I am, I pay the same price in the end. If it's safer, I pay less.

From my perspective:

1) I would *much* rather have Honda kill someone than kill someone myself. If I killed someone, the psychological impact on me would be horrible. In the city I live in, I dread ageing; as my reflexes get slower, I'm more and more likely to kill someone.

2) As a pedestrian, most of the risk seems to come from outliers -- people who drive hyper-aggressively. Replacing all cars with a median driver would make me much safer (and traffic, much more predictable).

If we want safer cars, we can simply raise insurance payouts, and vice-versa. The market works everything else out.

Either way, my stress levels go way down, whether in a car, on a bike, or on foot.

replies(1): >>41889228 #
11. gambiting ◴[] No.41889187[source]
So.........like right now you mean? You need a special licence to drive on a public road right now.
replies(3): >>41889262 #>>41889312 #>>41894441 #
12. szundi ◴[] No.41889189{3}[source]
Waymo is limited to cities that its engineers have to map, and those maps have to be maintained.

You cannot put a Waymo in a new city before that. With Tesla, what you get is universal.

replies(4): >>41889238 #>>41889466 #>>41894346 #>>41894466 #
13. FireBeyond ◴[] No.41889200[source]
And yet Tesla's FSD never passed a driving test.
replies(1): >>41894570 #
14. croes ◴[] No.41889205[source]
> It's clear that having half the casualty rate per distance traveled of the median human driver isn't acceptable.

Were the Teslas driving in all weather conditions and at any location, like humans do, or is the data cherry-picked from easy driving conditions?

15. jakelazaroff ◴[] No.41889210[source]
I think we should not be satisfied with merely “better than a human”. Flying is so safe precisely because we treat any casualty as unacceptable. We should aspire to make automobiles at least that safe.
replies(4): >>41890066 #>>41894364 #>>41894433 #>>41895416 #
16. gambiting ◴[] No.41889228{4}[source]
>> I would much rather have Honda kill someone than myself. If I killed someone, the psychological impact on myself would be horrible.

Except that we know that it doesn't work like that. Train drivers are racked with extreme guilt every time "their" train runs over someone, even though they know that logically there was absolutely nothing they could have done to prevent it. Don't see why it would be any different here.

>>If we want safer cars, we can simply raise insurance payouts, and vice-versa

In what way? In the EU the minimum covered amount for any car insurance is 5 million euro, and it has had no impact on the safety of cars. And of course the recent increase in payouts (due to the general increase in labour and parts costs) has led to a dramatic increase in insurance premiums, which in turn has led to a drastic increase in the number of people driving without insurance. So now that needs increased policing and enforcement, which we pay for through taxes. So no, the market doesn't "work everything out".

replies(2): >>41890554 #>>41894294 #
17. ◴[] No.41889238{4}[source]
18. esaym ◴[] No.41889247[source]
You can't equate airplane safety with automotive safety. I worked at an aircraft repair facility doing government contracts for a number of years. In one instance, somebody lost the toilet paper holder for one of the aircraft. This holder was simply a piece of 10 gauge wire that was bent in a way to hold it and supported by wire clamps screwed to the wall. Making a new one was easy but since it was a new part going on the aircraft we had to send it to a lab to be certified to hold a roll of toilet paper to 9 g's. In case the airplane crashed you wouldn't want a roll of toilet paper flying around I guess. And that cost $1,200.
replies(1): >>41889341 #
19. akira2501 ◴[] No.41889249[source]
> traveled of the median human driver isn't acceptable.

It's completely acceptable. In fact the numbers are lower than they have ever been since we started driving.

> Accidents caused by human drivers

Are there any other types of drivers?

> are one of the largest causes of injury and death

More than half the fatalities on the road are actually caused by the use of drugs and alcohol. The statistics are very clear on this. Impaired people cannot drive well. Non-impaired people drive orders of magnitude better.

> technology that could save lives

There is absolutely zero evidence this is true. Everyone is basing this off of a total misunderstanding of the source of fatalities and a willful misapprehension of the technology.

replies(2): >>41889370 #>>41894388 #
20. travem ◴[] No.41889255[source]
> The answer is zero

If autopilot is 10x safer then preventing its use would lead to more preventable deaths and injuries than allowing it.

I agree that it should be regulated and incidents thoroughly investigated, however letting perfect be the enemy of good leads to stagnation and lack of practical improvement and greater injury to the population as a whole.

replies(2): >>41889357 #>>41889900 #
21. seizethecheese ◴[] No.41889262{3}[source]
Geez, clearly they mean like a CDL
22. aithrowawaycomm ◴[] No.41889307[source]
Many people don't (and shouldn't) take the "half the casualty rate" at face value. My biggest concern is that Waymo and Tesla are juking the stats to make self-driving cars seem safer than they really are. I believe this is largely an unintentional consequence of bad actuarial science coming from bad qualitative statistics; the worst kind of lying with numbers is lying to yourself.

The biggest gap in these studies: I have yet to see a comparison with human drivers that filters out DUIs, reckless speeding, or mechanical failures. Without doing this it is simply not a fair comparison, because:

1) Self-driving cars won't end drunk driving unless it's made mandatory by outlawing manual driving or ignition is tied to a breathalyzer. Many people will continue to make the dumb decision to drive themselves home because they are drunk and driving is fun. This needs regulation, not technology. And DUIs need to be filtered from the crash statistics when comparing with Waymo.

2) A self-driving car which speeds and runs red lights might well be more dangerous than a similar human, but the data says nothing about this since Waymo is currently on their best behavior. Yet Tesla's own behavior and customers prove that there is demand for reckless self-driving cars, and manufacturers will meet the demand unless the law steps in. Imagine a Waymo competitor that promises Uber-level ETAs for people in a hurry. Technology could in theory solve this but in practice the market could make things worse for several decades until the next research breakthrough. Human accidents coming from distraction are a fair comparison to Waymo, but speeding or aggressiveness should be filtered out. The difficulty of doing so is one of the many reasons I am so skeptical of these stats.

3) Mechanical failures are a hornets' nest of ML edge cases that might work in the lab but fail miserably on the road. Currently it's not a big deal because the cars are shiny and new. Eventually we'll have self-driving clunkers owned by drivers who don't want to pay for the maintenance.

And that's not even mentioning that Waymos are not self-driving; they rely on close remote oversight to guide the AI through the many billions of common-sense problems that computers will not be able to solve for at least the next decade, probably much longer. True self-driving cars will continue to make inexplicably stupid decisions: these machines are still much dumber than lizards. Stories like "the Tesla slammed into an overturned tractor trailer because the AI wasn't trained on overturned trucks" are a huge problem, and society will not let Tesla try to launder it away with statistics.

Self-driving cars might end up saving lives. But would they save more lives than adding mandatory breathalyzers and GPS-based speed limits? And if market competition overtakes business ethics, would they cost more lives than they save? The stats say very little about this.
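
To make the filtering point concrete, here's a rough back-of-the-envelope sketch in Python with entirely made-up numbers (none of them from NHTSA, Waymo, or Tesla), just to show how much the comparison baseline can move once impaired and reckless miles are taken out:

    # All figures below are hypothetical placeholders, not real statistics.
    human_miles = 3.2e12          # total vehicle-miles per year (made up)
    human_fatal_crashes = 40_000  # fatal crashes per year (made up)

    # Assume impaired/reckless drivers cause 45% of fatal crashes
    # while accounting for only 5% of miles driven (both made up).
    impaired_crash_share = 0.45
    impaired_mile_share = 0.05

    all_rate = human_fatal_crashes / human_miles
    sober_rate = (human_fatal_crashes * (1 - impaired_crash_share)) / \
                 (human_miles * (1 - impaired_mile_share))

    print(f"all human drivers: {all_rate * 1e8:.2f} fatal crashes per 100M miles")
    print(f"sober and careful: {sober_rate * 1e8:.2f} fatal crashes per 100M miles")

    # A fleet that claims "half the casualty rate of the average driver"
    # may only be on par with the sober, attentive drivers it would replace.
    av_rate = all_rate / 2
    print(f"hypothetical AV:   {av_rate * 1e8:.2f} fatal crashes per 100M miles")

With these made-up shares, "half the average human rate" ends up barely better than the non-impaired baseline, which is exactly why the filtering matters.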

replies(1): >>41894458 #
23. nkrisc ◴[] No.41889312{3}[source]
The problem is it’s obviously too easy to get one and keep one, based on some of the drivers I see on the road.
replies(1): >>41889365 #
24. smitty1110 ◴[] No.41889331[source]
There are two things going on here with the average person that you need to overcome: that when Tesla dodges responsibility, all anyone sees is a liar, and that people amalgamate all the FSD crashes and treat the system like a dangerous local driver that nobody can get off the road.

Tesla markets FSD like it’s a silver bullet, and the name is truly misleading. The fine print says you need to pay attention and all that. But again, people read “Full Self Driving” and all the marketing copy and think the system is assuming responsibility for the outcomes. Then a crash happens, Tesla throws the driver under the bus, and everyone gets a bit more skeptical of the system. Plus, doing that to a person rubs people the wrong way, and is in some respects a barrier to sales.

Which leads to the other point: people are tallying up all the accidents and treating the system like a person, and wondering why this dangerous driver is still on the road. Most accidents with a dead pedestrian start with someone doing something stupid, which is when they assume all responsibility, legally speaking. Drunk, speeding, etc. Normal drivers in poor conditions slow down and drive carefully. People see this accident and treat FSD like a serial drunk driver. It’s to the point that I know people who openly say they treat Teslas on the road like erratic drivers just for existing.

Until Elon figures out how to fix his perception problem, the calls for investigations and to keep his robotaxis off the road will only grow.

25. gambiting ◴[] No.41889341{3}[source]
No, I'm pretty sure I can in this regard - any automotive "autopilot" has to be held to the same standard. It's either zero accidents or nothing.
replies(1): >>41891911 #
26. gambiting ◴[] No.41889357{3}[source]
>>If autopilot is 10x safer then preventing its use would lead to more preventable deaths and injuries than allowing it.

And yet whenever there is a problem with any plane autopilot it's preemptively disabled fleet wide and pilots have to fly manually even though we absolutely beyond a shadow of a doubt know that it's less safe.

If an automated system makes a wrong decision and it contributes to harm/death then it cannot be allowed on public roads full stop, no matter how many lives it saves otherwise.

replies(3): >>41889557 #>>41891095 #>>41891568 #
27. gambiting ◴[] No.41889365{4}[source]
That sounds like a legislative problem where you live. Sure, it can be fixed by overbearing technology, but we already have all the tools we need to fix it; we are just choosing not to for some reason.
28. blargey ◴[] No.41889370[source]
> Non impaired people drive orders of magnitude better.

That raises the question - how many impaired driver-miles are being baked into the collision statistics for "median human" driver-miles? Shouldn't we demand non-impaired driving as the standard for automation, rather than "averaged with drunk / phone-fiddling /senile" driving? We don't give people N-mile allowances for drunk driving based on the size of the drunk driver population, after all.

replies(2): >>41892049 #>>41894391 #
29. RivieraKid ◴[] No.41889466{4}[source]
Waymo is robust to removing the map / lidars / radars / cameras or adding inaccuracies to any of these 4 inputs.

(Not sure if this is true for the production system or the one they're still working on.)

30. exe34 ◴[] No.41889557{4}[source]
> And yet whenever there is a problem with any plane autopilot it's preemptively disabled fleet wide and pilots have to fly manually even though we absolutely beyond a shadow of a doubt know that it's less safe.

just because we do something dumb in one scenario isn't a very persuasive reason to do the same in another.

> then it cannot be allowed on public roads full stop, no matter how many lives it saves otherwise.

ambulances sometimes get into accidents - we should ban all ambulances, no matter how many lives they save otherwise.

replies(1): >>41894362 #
31. danans ◴[] No.41889686[source]
> The interesting question is how good self-driving has to be before people tolerate it.

It's pretty simple: as good as it can be given available technologies and techniques, without sacrificing safety for cost or style.

With AVs, function and safety should obviate concerns of style, cost, and marketing. If that doesn't work with your business model, well tough luck.

Airplanes are far safer than cars yet we subject their manufacturers to rigorous standards, or seemingly did until recently, as the 737 max saga has revealed. Even still the rigor is very high compared to road vehicles.

And AVs do have to be way better than people at driving because they are machines that have no sense of human judgement, though they operate in a human physical context.

Machines run by corporations are less accountable than human drivers, not least because of the wealth and legal armies of those corporations, who may have interests other than making the safest possible AV.

replies(1): >>41889699 #
32. mavhc ◴[] No.41889699[source]
Surely the number of cars that can do it, and the price, also matter, unless you're going to ban private cars
replies(1): >>41890064 #
33. SoftTalker ◴[] No.41889881[source]
It’s your car, so ultimately the liability is yours. That’s why you have insurance. If Tesla retains ownership, and just lets you drive it, then they have (more) liability.
replies(1): >>41894328 #
34. Terr_ ◴[] No.41889898[source]
> It's clear that having half the casualty rate per distance traveled of the median human driver isn't acceptable.

Even if we optimistically assume no "gotchas" in the statistics [0], distilling performance down to a casualty/injury/accident rate can still be dangerously reductive when the systems have a different distribution of failure modes which do/don't mesh with our other systems and defenses.

A quick thought experiment to prove the point: Imagine a system which compared to human drivers had only half the rate of accidents... But many of those are because it unpredictably decides to jump the sidewalk curb and kill a targeted pedestrian.

The raw numbers are encouraging, but that represents a risk profile that clashes horribly with our other systems of road design, car design, and what incidents humans are expecting and capable of preventing or recovering from.

[0] Ex: Automation is only being used on certain subsets of all travel which are the "easier" miles or circumstances than the whole gamut a human would handle.

replies(1): >>41894403 #
35. penjelly ◴[] No.41889900{3}[source]
I'd challenge the legitimacy of the claim that it's 10x safer, or even safer at all. The safety data provided isn't compelling to me; it can be gamed or misrepresented in various ways, as pointed out by others.
replies(1): >>41890184 #
36. __loam ◴[] No.41890057[source]
The problem is that Tesla is way behind industry standards here and is misrepresenting how good its tech is.
37. danans ◴[] No.41890064{3}[source]
> Surely the number of cars than can do it, and the price, also matters, unless you're going to ban private cars

Indeed, like this: the more cars sold that claim fully autonomous capability, and the more affordable they get, the higher the standards should be compared to their AV predecessors, even if they have long eclipsed human drivers' safety record.

If this is unpalatable, then let's assign 100% liability with steep monetary penalties to the AV manufacturer for any crash that happens under autonomous driving mode.

38. aantix ◴[] No.41890066[source]
Before FSD is allowed on public roads?

It’s a net positive, saving lives right now.

39. alkonaut ◴[] No.41890101[source]
> How about a quarter? Or a tenth?

Probably closer to the latter. The "skin in the game" (physically) argument makes me more willing to accept drunk drivers than greedy manufacturers when it comes to making mistakes or being negligent.

40. tensor ◴[] No.41890181{3}[source]
I’m for this as long as the company also takes on liability for human errors they could prevent. I’d want to see cars enforcing speed limits and similar things. Humans are too dangerous to drive.
41. yCombLinks ◴[] No.41890184{4}[source]
That claim wasn't made. It was a hypothetical: if it were 10x safer, would people tolerate it?
replies(1): >>41896727 #
42. stormfather ◴[] No.41890189{3}[source]
That would be good because it would incentivize all FSD cars communicating with each other. Imagine how safe driving would be if they are all broadcasting their speed and position to each other. And each vehicle sending/receiving gets cheaper insurance.
replies(2): >>41890733 #>>41899496 #
43. KoolKat23 ◴[] No.41890241{3}[source]
That's just reducing the value of a life to a number. It can be gamed to a situation where it's just more profitable to mow down people.

What's an acceptable number/financial cost is also just an indirect approximated way of implementing a more direct/scientific regulation. Not everything needs to be reduced to money.

replies(1): >>41890691 #
44. sebzim4500 ◴[] No.41890451[source]
>It's clear that having half the casualty rate per distance traveled of the median human driver isn't acceptable.

Are you sure? Right now FSD is active with no one actually knowing its casualty rate, and for the most part the only people upset about it are terminally online people on Twitter or luddites on HN.

45. blagie ◴[] No.41890554{5}[source]
> Except that we know that it doesn't work like that. Train drivers are ridden with extreme guilt every time "their" train runs over someone, even though they know that logically there was absolutely nothing they could have done to prevent it. Don't see why it would be any different here.

It's not binary. Someone dying -- even with no involvement -- can be traumatic. I've been in a position where I could have taken actions to prevent someone from being harmed. Rationally not my fault, but in retrospect, I can describe the exact set of steps needed to prevent it. I feel guilty about it, even though I know rationally it's not my fault (there's no way I could have known ahead of time).

However, it's a manageable guilt. I don't think it would be if I knew rationally that it was my fault.

> So no, market doesn't "work everything out".

Whether or not a market works things out depends on issues like transparency and information. Parties will offload costs wherever possible. In the model you gave, there is no direct cost to a car maker making less safe cars or vice-versa. It assumes the car buyer will even look at insurance premiums, and a whole chain of events beyond that.

That's different if it's the same party making cars, paying money, and doing so at scale.

If Tesla pays for everyone damaged in any accident a Tesla car has, then Tesla has a very, very strong incentive to make safe cars to whatever optimum is set by the damages. Scales are big enough -- millions of cars and billions of dollars -- where Tesla can afford to hire actuaries and a team of analysts to make sure they're at the optimum.

As an individual car buyer, I have no chance of doing that.

Ergo, in one case, the market will work it out. In the other, it won't.

46. jeremyjh ◴[] No.41890691{4}[source]
There is no way to game it successfully; if your insurance costs are much higher than your competitors you will lose in the long run. That doesn’t mean there can’t be other penalties when there is gross negligence.
replies(1): >>41891750 #
47. Terr_ ◴[] No.41890733{4}[source]
It gets kinda dystopian if access to the network becomes a monopolistic barrier.
replies(1): >>41896451 #
48. mrpippy ◴[] No.41890885[source]
Tesla officially renamed it to “Full Self Driving (supervised)” a few months ago, previously it was “Full Self Driving (beta)”

Both names are ridiculous, for different reasons. Nothing called a “beta” should be tested on public roads without a trained employee supervising it (i.e. being paid to pay attention). And of course it was not “full”, it always required supervision.

And “Full Self Driving (supervised)” is an absurd oxymoron. Given the deaths and crashes that we’ve already seen, I’m skeptical of the entire concept of a system that works 98% of the time, but also needs to be closely supervised for the 2% of the time when it tries to kill you or others (with no alerts).

It’s an abdication of duty that NHTSA has let this continue for so long. They’ve picked up the pace recently, and I wouldn’t be surprised if they come down hard on Tesla (unless Trump wins, in which case Elon will be put in charge of NHTSA, the SEC, and the FAA).

replies(1): >>41892629 #
49. V99 ◴[] No.41890925[source]
Airplane autopilots follow a lateral & sometimes vertical path through the sky prescribed by the pilot(s). They are good at doing that. This does increase safety, because it frees up the pilot(s) from having to carefully maintain a straight 3d line through the sky for hours at a time.

But they do not listen to ATC. They do not know where other planes are. They do not keep themselves away from other planes. Or the ground. Or a flock of birds. They do not handle emergencies. They make only the most basic control-loop decisions about the control-surface and power changes (if even autothrottle-equipped; otherwise power is still the meatbag's job) needed to follow the magenta line drawn by the pilot, given a very small set of input data (position, airspeed, current control positions, etc).

The next nearest airplane is typically at least 3 miles laterally and/or 500' vertically away, because the errors allowed with all these components are measured in hundreds of feet.

None of this is even remotely comparable to a car using a dozen cameras (or lidar) to make real-time decisions to drive itself around imperfect public streets full of erratic drivers and other pedestrians a few feet away.

What it is a lot like is what Tesla actually sells (despite the marketing name). Yes it's "flying" the plane, but you're still responsible for making sure it's doing the right thing, the right way, and not going to hit anything or kill anybody.
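
For a sense of how little "decision making" that involves, here is a toy sketch (illustrative only, not any real avionics code, with made-up gains) of the kind of control-loop logic that follows the magenta line: cross-track error in, bank-angle command out, with no awareness of traffic, terrain, or ATC anywhere in the loop.

    def roll_command(cross_track_error_nm: float, track_error_deg: float) -> float:
        """Proportional guidance: steer back toward the programmed course."""
        KP_XTK = 15.0    # degrees of bank per NM off course (made-up gain)
        KP_TRK = 0.5     # degrees of bank per degree of track error (made-up gain)
        MAX_BANK = 25.0  # typical bank-angle limit

        cmd = -(KP_XTK * cross_track_error_nm + KP_TRK * track_error_deg)
        return max(-MAX_BANK, min(MAX_BANK, cmd))

    # Half a mile right of course, tracking 10 degrees right of the desired track:
    print(roll_command(0.5, 10.0))  # -12.5, i.e. bank about 12 degrees left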

replies(2): >>41894377 #>>41895396 #
50. CrimsonRain ◴[] No.41891095{4}[source]
So your only concern is, when something goes wrong, need someone to blame. Who cares about lives saved. Vaccines can cause adverse effects. Let's ban all of them.

If people like you were in charge of anything, we'd still be hitting rocks for fire in caves.

replies(1): >>41899449 #
51. Aloisius ◴[] No.41891202[source]
Autopilots aren't held to a zero error standard let alone a zero accident standard.
52. peterdsharpe ◴[] No.41891217[source]
> literally no errors in its operation are tolerated

Aircraft designer here, this is not true. We typically certify to <1 catastrophic failure per 1e9 flight hours. Not zero.

53. Aloisius ◴[] No.41891568{4}[source]
Depends on what one considers a "problem." As long as the autopilot's failures conditions and mitigation procedures are documented, the burden is largely shifted to the operator.

Autopilot didn't prevent slamming into a mountain? Not a problem as long as it wasn't designed to.

Crashed on landing? No problem, the manual says not to operate it below 500 feet.

Runaway pitch trim? The manual says you must constantly be monitoring the autopilot and disengage it when it's not operating as expected and to pull the autopilot and pitch trim circuit breakers. Clearly insufficient operator training is to blame.

54. KoolKat23 ◴[] No.41891750{5}[source]
Who said management and shareholders are in it for the long run? There are plenty of examples of businesses being run purely for the short term. Bonuses and stock pumps.
55. murderfs ◴[] No.41891911{4}[source]
This only works for aerospace because everything and everyone is held to that standard. It's stupid to hold automotive autopilots to the same standard as a plane's autopilot when a third of fatalities in cars are caused by the pilots being drunk.
replies(1): >>41894352 #
56. akira2501 ◴[] No.41892049{3}[source]
Motorcycles account for a further 15% of all fatalities in a typical year. Weather is often a factor. Road design is sometimes a factor; I remember several rollover crashes that ended in a body of water with no one in the vehicle surviving. Likewise, ejections due to lack of seatbelt use are a noticeable factor in fatalities.

Once you dig into the data you see that almost every crash, at this point in history, is really a mini-story detailing the confluence of several factors that turned a basic accident into something fatal.

Also, and I only saw this once, but if you literally have a heart attack behind the wheel, you are technically a roadway fatality. The driver was 99. He just died while sitting in slow moving traffic.

Which brings me to my final point: the rear seats in automobiles are less safe than the front seats. This is true for almost every vehicle on the road. You see _a lot_ of accidents where two 40 to 50 year old passengers are up front and two 70 to 80 year old passengers are in back. The ones up front survive. One or both passengers in the back typically die.

57. ilyagr ◴[] No.41892629{3}[source]
I hope they soon rename it into "Fully Supervised Driving".
58. moogly ◴[] No.41893035[source]
> Accidents caused by human drivers are one of the largest causes of injury and death

In some parts of the world. Perhaps some countries should look deeper into why and why self-driving cars might not be the No. 1 answer to reduce traffic accidents.

59. AlchemistCamp ◴[] No.41893571[source]
> ”The answer is zero…”

> ”If there is any issue discovered in any car with this tech then it should be disabled worldwide until the root cause is found and eliminated.”

This would cause literally millions of needless deaths in a situation where AI drivers had 1/10th the accident injury rate of human drivers.

60. awongh ◴[] No.41893587[source]
Also force other auto makers to be liable when their over-tall SUVs cause more deaths than sedan type cars.
61. kelnos ◴[] No.41894281[source]
If Tesla's FSD was actually self-driving, maybe half the casualty rate of the median human driver would be fine.

But it's not. It requires constant supervision, and drivers sometimes have to take control (without the system disengaging on its own) in order to correct it from doing something unsafe.

If we had stats for what the casualty rate would be if every driver using it never took control back unless the car signaled it was going to disengage, I suspect that casualty rate would be much worse than the median human driver. But we don't have those stats, so we shouldn't trust it until we do.

This is why Waymo is safe and tolerated and Tesla FSD is not. Waymo test drivers record every time they have to take over control of the car for safety reasons. That was a metric they had to track and improve, or it would have been impossible to offer people rides without someone in the driver's seat.

62. kelnos ◴[] No.41894294{5}[source]
Being in a vehicle that collides with someone and kills them is going to be traumatic regardless of whether or not you're driving.

But it's almost certainly going to be more traumatic and more guilt-inducing if you are driving.

If I only had two choices, I would much rather my car kill someone than I kill someone with my car. I'm gonna feel bad about it either way, but one is much worse than the other.

63. kelnos ◴[] No.41894328{3}[source]
> It’s your car, so ultimately the liability is yours

No, that's not how it works. The driver and the driver's insurer are on the hook when something bad happens. The owner is not, except when the owner is also the one driving, or if the owner has been negligent with maintenance, and the crash was caused by mechanical failure related to that negligence.

If someone else is driving my car and I'm a passenger, and they hurt someone with it, the driver is liable, not me. If that "someone else" is a piece of software, and that piece of software has been licensed/certified/whatever to drive a car, why should I be liable for its failures? That piece of software needs to be insured, certainly. It doesn't matter if I'm required to insure it, or if the manufacturer is required to insure it.

Tesla FSD doesn't fit into this scenario because it's not the driver. You are still the driver when you engage FSD, because despite its name, FSD is not capable of filling that role.

replies(1): >>41903503 #
64. dageshi ◴[] No.41894346{4}[source]
I think the Waymo approach is the one that will actually deliver some measure of self driving cars that people will be comfortable to use.

It won't operate everywhere, but it will gradually expand to cover large areas and it will keep expanding till it's near ubiquitous.

I'm dubious that the Tesla approach will actually ever work.

65. kelnos ◴[] No.41894352{5}[source]
I don't think that's a useful argument.

I think we should start allowing autonomous driving when the "driver" is at least as safe as the median driver when the software is unsupervised. (Teslas may or may not be that safe when supervised, but they absolutely are not when unsupervised.)

But once we get to that point, we should absolutely ratchet those standards so automobile safety over time becomes just as safe as airline safety. Safer, if possible.

> It's stupid to hold automotive autopilots to the same standard as a plane's autopilot when a third of fatalities in cars are caused by the pilots being drunk.

That's a weird argument, because both pilots and drivers get thrown in jail if they fly/drive drunk. The standard is the same.

66. ◴[] No.41894362{5}[source]
67. cubefox ◴[] No.41894364[source]
> I think we should not be satisfied with merely “better than a human”.

The question is whether you want to outlaw automatic driving just because the system is, say, "only" 50% safer than us.

68. kelnos ◴[] No.41894377{3}[source]
Thank you for this. The number of people conflating Tesla's Autopilot with an airliner's autopilot, and expecting that use and policies and situations surrounding the two should be directly comparable, is staggering. You'd think people would be better at critical thinking with this, but... here we are.
replies(1): >>41894817 #
69. kelnos ◴[] No.41894388[source]
> Are there any other types of drivers [than human drivers]?

Waymo says yes, there are.

70. kelnos ◴[] No.41894391{3}[source]
No, that makes no sense, because we can't ensure that human drivers aren't impaired. We test and compare against the reality, not the ideal we'd prefer.
replies(1): >>41896967 #
71. kelnos ◴[] No.41894403[source]
Re: gotchas: an even easier one is that the Tesla FSD statistics don't include when the car does something unsafe and the driver intervenes and takes control, averting a crash.

How often does that happen? We have no idea. Tesla can certainly tell when a driver intervenes, but they can't count every occurrence as safety-related, because a driver might take control for all sorts of reasons.

This is why we can make stronger statements about the safety of Waymo. Their software was only tested by people trained and paid to test it, who were also recording every time they had to intervene because of safety, even if there was no crash. That's a metric they could track and improve.
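
A minimal sketch of the metric being described (field names and numbers are hypothetical, not Waymo's or Tesla's actual reporting format): miles per safety-critical takeover only exists as a metric if safety takeovers are labeled separately from ordinary ones, which is what trained test drivers do and consumer FSD drivers don't.

    from dataclasses import dataclass

    @dataclass
    class DriveLog:
        miles: float
        safety_takeovers: int  # driver took over to avoid something unsafe
        other_takeovers: int   # route changes, comfort stops, etc. (excluded)

    logs = [
        DriveLog(miles=1200.0, safety_takeovers=2, other_takeovers=15),
        DriveLog(miles=950.0, safety_takeovers=0, other_takeovers=9),
        DriveLog(miles=1430.0, safety_takeovers=1, other_takeovers=21),
    ]

    total_miles = sum(log.miles for log in logs)
    total_safety = sum(log.safety_takeovers for log in logs)

    if total_safety:
        print(f"{total_miles / total_safety:,.0f} miles per safety takeover")
    else:
        print(f"no safety takeovers in {total_miles:,.0f} miles")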

72. kelnos ◴[] No.41894433[source]
I don't think the question was what we should be satisfied with or what we should aspire to. I absolutely agree with you that we should strive to make autonomous driving as safe as airline travel.

But the question was when should we allow autonomous driving on our public roads. And I think "when it's at least as safe as the median human driver" is a reasonable threshold.

(The thing about Tesla FSD is that it -- unsupervised -- would probably fall super short of that metric. FSD needs to be supervised to be safer than the median human driver, assuming that's even currently the case, and not every driver is going to be equally good at supervising it.)

73. kelnos ◴[] No.41894441{3}[source]
No, you need an entirely common, unspecial license to drive on a public road right now.
74. kelnos ◴[] No.41894458[source]
> My biggest concern is that Waymo and Tesla are juking the stats to make self-driving cars seem safer than they really are

Even intentional juking aside, you can't really compare the two.

Waymo cars drive completely autonomously, without a supervising driver in the car. If it does something unsafe, there's no one there to correct it, and it may get into a crash, in the same way a human driver doing that same unsafe thing might.

With Tesla FSD, we have no idea how good it really is. We know that a human is supervising it, and despite all the reports we see of people doing super irresponsible things while "driving" a Tesla (like taking a nap), I imagine most Tesla FSD users are actually attentively supervising for the most part. If all FSD users stopped supervising and started taking naps, I suspect the crash rate and fatality rate would start looking like the rate for the worst drivers on the road... or even worse than that.

So it's not that they're juking their stats (although they may be), it's that they don't actually have all the stats that matter. Waymo has and had those stats, because their trained human test drivers were reporting when the car did something unsafe and they had to take over. Tesla FSD users don't report when they have to do that. The data is just not there.

75. kelnos ◴[] No.41894466{4}[source]
Waymo is safe where they've mapped and trained and tested, because they track when their test drivers have to take control.

Tesla FSD is just everywhere, without any accountability or trained testing on all the roads people use them on. We have no idea how often Tesla FSD users have to take control from FSD due to a safety issue.

Waymo is objectively safer, and their entire approach is objectively safer, and is actually measurable, whereas Tesla FSD's safety cannot actually be accurately measured.

76. fma ◴[] No.41894476[source]
Flying is safer than driving but Boeing isn't getting a free pass on quality issues. Why would Tesla?
77. grecy ◴[] No.41894570{3}[source]
And it can’t legally drive a vehicle
78. Animats ◴[] No.41894817{4}[source]
Ah. Few people realize how dumb aircraft autopilots really are. Even the fanciest ones just follow a series of waypoints.

There is one exception - Garmin Safe Return. That's strictly an emergency system. If it activates, the plane is squawking emergency to ATC and demanding that airspace and a runway be cleared for it.[1] This has been available since 2019 and does not seem to have yet been activated in an emergency.

[1] https://youtu.be/PiGkzgfR_c0?t=87

replies(1): >>41897922 #
79. jillesvangurp ◴[] No.41895039[source]
The key here is insurers. Because they pick up the bill when things go wrong. As soon as self driving becomes clearly better than humans, they'll be insisting we stop risking their money by driving ourselves whenever that is feasible. And they'll do that with price incentives. They'll happily insure you if you want to drive yourself. But you'll pay a premium. And a discount if you are happy to let the car do the driving.

Eventually, manual driving should come with a lot more scrutiny. Because once it becomes a choice rather than an economic necessity, other people on the road will want to be sure that you are not needlessly endangering them. So, stricter requirements for getting a drivers license with more training and fitness/health requirements. This too will be driven by insurers. They'll want to make sure you are fit to drive.

And of course when manual driving people get into trouble, taking away their driving license is always a possibility. The main argument against doing that right now is that a lot of people depend economically on being able to drive. But if that argument goes away, there's no reason to not be a lot stricter for e.g. driving under influence, or routinely breaking laws for speeding and other traffic violations. Think higher fines and driving license suspensions.

80. josephcsible ◴[] No.41895396{3}[source]
> They do not know where other planes are.

Yes they do. It's called TCAS.

> Or the ground.

Yes they do. It's called Auto-GCAS.

replies(1): >>41897516 #
81. josephcsible ◴[] No.41895416[source]
Aspire to, yes. But if we say "we're going to ban FSD until it's perfect, even though it already saves lives relative to the average human driver", you're making automobiles less safe.
82. tmtvl ◴[] No.41896451{5}[source]
Not to mention the possibility of requiring pedestrians and cyclists to also be connected to the same network. Anyone with access to the automotive network could track any pedestrian who passes by the vicinity of a road.
replies(1): >>41899155 #
83. penjelly ◴[] No.41896727{5}[source]
Yes, people would, if we had a reliable metric for the safety of these systems besides engaged/disengaged. We don't, and 10x safer with the current metrics is not satisfactory.
84. akira2501 ◴[] No.41896967{4}[source]
We can sample rate of impairment. We do this quite often actually. It turns out the rate depends on the time of day.
85. V99 ◴[] No.41897516{4}[source]
Yes those are optional systems that exist, but they are unrelated to the autopilot (in at least the vast majority of avionics).

They are warning systems that humans respond to. For a TCAS RA the first thing you're doing is disengaging the autopilot.

If you tell the autopilot to fly straight into the path of a mountain, it will happily comply and kill you while the ground proximity warnings blare.

Humans make the decisions in planes. Autopilots are a useful but very basic tool, much more akin to cruise control in a 1998 Civic than a self-driving Tesla/Waymo/erc.

86. V99 ◴[] No.41897922{5}[source]
It does do that and it's pretty neat, if you have one of the very few modern turboprops or small jets that have G3000s & auto throttle to support it.

Airliners don't have this, but they have a 2nd pilot. A real-world activation needs a single-pilot operation where they're incapacitated, in one of the maybe few hundred nice-but-not-too-nice private planes it's equipped in, and a passenger is there to push it.

But this is all still largely using the current magenta line AP system, and that's how it's verifiable and certifiable. There's still no cameras or vision or AI deciding things, there are a few new bits of relatively simple standalone steps combined to get a good result.

- Pick a new magenta line to an airport (like pressing NRST Enter Enter if you have filtering set to only suitable fields)

- Pick a vertical path that intersects with the runway (Load a straight-in visual approach from the database)

- Ensure that line doesn't hit anything in the terrain/obstacle database. (Terrain warning system has all this info, not sure how it changes the plan if there is a conflict. This is probably the hardest part, with an actual decision to make).

- Look up the tower frequency in DB and broadcast messages. As you said it's telling and not asking/listening.

- Other humans know to get out of the way because this IS what's going to happen. This is normal, an emergency aircraft gets whatever it wants.

- Standard AP and autothrottle flies the newly prescribed path.

- The radio altimeter lets it know when to flare.

- Wheel weight sensors let it know to apply the brakes.

- The airport helps people out and tows the plane away, because it doesn't know how to taxi.

There's also "auto glide" on the more accessible G3x suite for planes that aren't necessarily $3m+. That will do most of the same stuff and get you almost, but not all the way, to the ground in front of a runway automatically.

replies(1): >>41899097 #
87. Animats ◴[] No.41899097{6}[source]
> and a passenger is there to push it.

I think it will also activate if the pilot is unconscious, for solo flights. It has something like a driver alertness detection system that will alarm if the pilot does nothing for too long. The pilot can reset the alarm, but if they do nothing, the auto return system takes over and lands the plane someplace.

88. Terr_ ◴[] No.41899155{6}[source]
It's hard to think of a good blend of traffic safety, privacy guarantees, and resistance to bad-actors. Having/avoiding persistent identification is certainly a factor.

Perhaps one approach would be to declare that automated systems are responsible for determining the position/speed of everything around them using regular sensors, but may elect to take hints from anonymous "notice me" marks or beacons.

89. gambiting ◴[] No.41899449{5}[source]
Ok, consider this for a second. You're a director of a hospital that owns a Therac radiotherapy machine for treating cancer. The machine is without any shadow of a doubt saving lives. People without access to it would die or have their prognosis worsen. Yet one day you get a report saying that the machine might sometimes, extremely rarely, accidentally deliver a lethal dose of radiation instead of the therapeutic one.

Do you decide to keep using the machine, or do you order it turned off until that defect can be fixed? Why yes or why not? Why does the same argument apply/not apply in the discussion about self driving cars?

(And in case you haven't heard about it - the Therac radiotherapy machine fault was a real thing. It's used as a cautionary tale for software development, but I sometimes wonder if it should be used in philosophy classes too.)

90. iknowstuff ◴[] No.41899496{4}[source]
no need.
91. mvdtnz ◴[] No.41900280[source]
How about fewer accidents per distance of equivalent driving?
92. SoftTalker ◴[] No.41903503{4}[source]
Incorrect. Or at least, it varies by state. I was visiting my mother and borrowed her car, had a minor accident with it. Her insurance paid, not mine.

This is why you are required to have insurance for the cars you own. You may from time to time be driving cars you do not own, and the owners of those cars are required to have insurance for those cars, not you.