410 points jjulius | 25 comments

AlchemistCamp ◴[] No.41889077[source]
The interesting question is how good self-driving has to be before people tolerate it.

It's clear that having half the casualty rate per distance traveled of the median human driver isn't acceptable. How about a quarter? Or a tenth? Accidents caused by human drivers are one of the largest causes of injury and death, but they're not newsworthy the way an accident involving automated driving is. It's all too easy to see a potential future where many people die needlessly because technology that could save lives is regulated into a greatly reduced role.

replies(20): >>41889114 #>>41889120 #>>41889122 #>>41889128 #>>41889176 #>>41889205 #>>41889210 #>>41889249 #>>41889307 #>>41889331 #>>41889686 #>>41889898 #>>41890057 #>>41890101 #>>41890451 #>>41893035 #>>41894281 #>>41894476 #>>41895039 #>>41900280 #
1. gambiting ◴[] No.41889176[source]
>> How about a quarter? Or a tenth?

The answer is zero. An airplane autopilot has increased the overall safety of airplanes by several orders of magnitude compared to human pilots, but literally no errors in its operation are tolerated, whether they are deadly or not. The exact same standard has to apply to cars, or any automated machine for that matter. If any issue is discovered in any car with this tech, it should be disabled worldwide until the root cause is found and eliminated.

>> It's all too easy to see a potential future where many people die needlessly because technology that could save lives is regulated into a greatly reduced role.

I really don't like this argument, because we could already prevent literally all automotive deaths tomorrow with existing technology and legislation, and yet we choose not to for economic and social reasons.

replies(6): >>41889247 #>>41889255 #>>41890925 #>>41891202 #>>41891217 #>>41893571 #
2. esaym ◴[] No.41889247[source]
You can't equate airplane safety with automotive safety. I worked at an aircraft repair facility doing government contracts for a number of years. In one instance, somebody lost the toilet paper holder for one of the aircraft. This holder was simply a piece of 10-gauge wire bent to hold the roll, supported by wire clamps screwed to the wall. Making a new one was easy, but since it was a new part going on the aircraft, we had to send it to a lab to be certified to hold a roll of toilet paper to 9 g's. In case the airplane crashed, you wouldn't want a roll of toilet paper flying around, I guess. And that cost $1,200.
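
For scale, here's a back-of-the-envelope check of what that 9 g certification actually amounts to (the ~200 g roll mass is my assumption, not from the repair paperwork):

    # Rough load on the holder at 9 g; the roll mass is an assumed figure.
    g = 9.81         # m/s^2
    roll_mass = 0.2  # kg, assumed
    load_n = roll_mass * 9 * g
    print(f"{load_n:.1f} N")  # ~17.7 N, i.e. about 1.8 kgf

$1,200 to certify a bent wire against a ~18 N load is the point of the story.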
replies(1): >>41889341 #
3. travem ◴[] No.41889255[source]
> The answer is zero

If autopilot is 10x safer, then preventing its use would lead to more preventable deaths and injuries than allowing it.

I agree that it should be regulated and incidents thoroughly investigated, however letting perfect be the enemy of good leads to stagnation and lack of practical improvement and greater injury to the population as a whole.
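
A quick sanity check of that trade-off, using round US numbers (the ~40,000 annual road-death figure is approximate; the 10x factor is the thread's hypothetical):

    # Annual US road deaths under the 10x-safer hypothetical.
    human_deaths = 40_000                 # per year, approximate US figure
    autopilot_deaths = human_deaths / 10  # the hypothetical 10x improvement
    print(f"{human_deaths - autopilot_deaths:,.0f} deaths avoided per year")  # 36,000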

replies(2): >>41889357 #>>41889900 #
4. gambiting ◴[] No.41889341[source]
No, I'm pretty sure I can in this regard - any automotive "autopilot" has to be held to the same standard: zero accidents, or it doesn't operate at all.
replies(1): >>41891911 #
5. gambiting ◴[] No.41889357[source]
>>If autopilot is 10x safer then preventing its use would lead to more preventable deaths and injuries than allowing it.

And yet whenever there is a problem with any plane autopilot, it's preemptively disabled fleet-wide and pilots have to fly manually, even though we know beyond a shadow of a doubt that it's less safe.

If an automated system makes a wrong decision and it contributes to harm or death, then it cannot be allowed on public roads, full stop, no matter how many lives it saves otherwise.

replies(3): >>41889557 #>>41891095 #>>41891568 #
6. exe34 ◴[] No.41889557{3}[source]
> And yet whenever there is a problem with any plane autopilot it's preemptively disabled fleet wide and pilots have to fly manually even though we absolutely beyond a shadow of a doubt know that it's less safe.

just because we do something dumb in one scenario isn't a very persuasive reason to do the same in another.

> then it cannot be allowed on public roads full stop, no matter how many lives it saves otherwise.

ambulances sometimes get into accidents - we should ban all ambulances, no matter how many lives they save otherwise.

replies(1): >>41894362 #
7. penjelly ◴[] No.41889900[source]
I'd challenge the legitimacy of the claim that it's 10x safer, or even safer at all. The safety data provided isn't compelling to me; it can be gamed or misrepresented in various ways, as pointed out by others.
replies(1): >>41890184 #
8. yCombLinks ◴[] No.41890184{3}[source]
That claim wasn't made. It was a hypothetical: if it were 10x safer, would people tolerate it then?
replies(1): >>41896727 #
9. V99 ◴[] No.41890925[source]
Airplane autopilots follow a lateral & sometimes vertical path through the sky prescribed by the pilot(s). They are good at doing that. This does increase safety, because it frees up the pilot(s) from having to carefully maintain a straight 3d line through the sky for hours at a time.

But they do not listen to ATC. They do not know where other planes are. They do not keep themselves away from other planes. Or the ground. Or a flock of birds. They do not handle emergencies. They make only the most basic control-loop decisions about the control-surface and power changes (power only if autothrottle-equipped; otherwise that's still the meatbag's job) needed to follow the magenta line drawn by the pilot, given a very small set of input data (position, airspeed, current control positions, etc.).
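
A minimal sketch of the kind of control loop being described: a proportional heading hold chasing a pilot-set target (the gains and structure are illustrative only, not any real avionics):

    # Toy heading-hold step: the autopilot only reduces heading error.
    # It has no concept of traffic, terrain, or birds.
    def heading_hold_step(target_hdg, current_hdg, kp=0.8, max_bank=25.0):
        error = (target_hdg - current_hdg + 180) % 360 - 180  # shortest-turn error, degrees
        return max(-max_bank, min(max_bank, kp * error))      # clamped bank command

    print(heading_hold_step(90.0, 120.0))  # -24.0: bank left toward 090, nothing more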

The next nearest airplane is typically at least 3 miles laterally and/or 500' vertically away, because the errors allowed with all these components are measured in hundreds of feet.

None of this is even remotely comparable to a car using a dozen cameras (or lidar) to make real-time decisions to drive itself around imperfect public streets full of erratic drivers and other pedestrians a few feet away.

What it is a lot like is what Tesla actually sells (despite the marketing name). Yes, it's "flying" the plane, but you're still responsible for making sure it's doing the right thing, the right way, and not going to hit anything or kill anybody.

replies(2): >>41894377 #>>41895396 #
10. CrimsonRain ◴[] No.41891095{3}[source]
So your only concern is that when something goes wrong, there's someone to blame. Who cares about lives saved. Vaccines can cause adverse effects. Let's ban all of them.

If people like you were in charge of anything, we'd still be hitting rocks for fire in caves.

replies(1): >>41899449 #
11. Aloisius ◴[] No.41891202[source]
Autopilots aren't held to a zero-error standard, let alone a zero-accident standard.
12. peterdsharpe ◴[] No.41891217[source]
> literally no errors in its operation are tolerated

Aircraft designer here, this is not true. We typically certify to <1 catastrophic failure per 1e9 flight hours. Not zero.
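
To put that budget in perspective, here's the expected event count for a large fleet operating exactly at the 1-per-1e9-hour ceiling (the fleet size and utilization are assumed round numbers, not certification data):

    # Expected catastrophic failures at the certification ceiling.
    rate = 1 / 1e9          # failures per flight hour (certification budget)
    fleet = 25_000          # aircraft, assumed
    hours_per_year = 3_000  # flight hours per aircraft per year, assumed
    print(f"{rate * fleet * hours_per_year:.3f} events/year")  # 0.075, ~1 per 13 years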

13. Aloisius ◴[] No.41891568{3}[source]
Depends on what one considers a "problem." As long as the autopilot's failure conditions and mitigation procedures are documented, the burden is largely shifted to the operator.

Autopilot didn't prevent slamming into a mountain? Not a problem as long as it wasn't designed to.

Crashed on landing? No problem, the manual says not to operate it below 500 feet.

Runaway pitch trim? The manual says you must constantly monitor the autopilot, disengage it when it's not operating as expected, and pull the autopilot and pitch-trim circuit breakers. Clearly insufficient operator training is to blame.

14. murderfs ◴[] No.41891911{3}[source]
This only works for aerospace because everything and everyone is held to that standard. It's stupid to hold automotive autopilots to the same standard as a plane's autopilot when a third of fatalities in cars are caused by the pilots being drunk.
replies(1): >>41894352 #
15. AlchemistCamp ◴[] No.41893571[source]
> ”The answer is zero…”

> ”If there is any issue discovered in any car with this tech then it should be disabled worldwide until the root cause is found and eliminated.”

This would literally mean millions of needless deaths in a situation where AI drivers had 1/10th the accident injury rate of human drivers.
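
The arithmetic behind "millions" is simple if you take the hypothetical at face value (the ~1.19M/year global road-death figure is the WHO's; universal adoption is assumed for illustration):

    # Deaths avoided per year if AI drivers had 1/10th the human rate.
    deaths_per_year = 1_190_000            # global road deaths, roughly (WHO)
    ai_ratio = 0.1                         # the thread's hypothetical
    avoided = deaths_per_year * (1 - ai_ratio)
    print(f"{avoided:,.0f} avoided/year")          # 1,071,000
    print(f"{avoided * 3:,.0f} in three years")    # 3,213,000: millions, quickly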

16. kelnos ◴[] No.41894352{4}[source]
I don't think that's a useful argument.

I think we should start allowing autonomous driving once the software, operating unsupervised, is at least as safe as the median human driver. (Teslas may or may not be that safe when supervised, but they absolutely are not when unsupervised.)

But once we get to that point, we should absolutely ratchet those standards up so that over time driving becomes just as safe as flying. Safer, if possible.

> It's stupid to hold automotive autopilots to the same standard as a plane's autopilot when a third of fatalities in cars are caused by the pilots being drunk.

That's a weird argument, because both pilots and drivers get thrown in jail if they fly/drive drunk. The standard is the same.

17. ◴[] No.41894362{4}[source]
18. kelnos ◴[] No.41894377[source]
Thank you for this. The number of people conflating Tesla's Autopilot with an airliner's autopilot, and expecting the use, policies, and situations surrounding the two to be directly comparable, is staggering. You'd think people would be better at critical thinking with this, but... here we are.
replies(1): >>41894817 #
19. Animats ◴[] No.41894817{3}[source]
Ah. Few people realize how dumb aircraft autopilots really are. Even the fanciest ones just follow a series of waypoints.

There is one exception - Garmin Safe Return. That's strictly an emergency system. If it activates, the plane squawks emergency to ATC and demands that airspace and a runway be cleared for it.[1] This has been available since 2019 and does not seem to have yet been activated in an emergency.

[1] https://youtu.be/PiGkzgfR_c0?t=87

replies(1): >>41897922 #
20. josephcsible ◴[] No.41895396[source]
> They do not know where other planes are.

Yes they do. It's called TCAS.

> Or the ground.

Yes they do. It's called Auto-GCAS.

replies(1): >>41897516 #
21. penjelly ◴[] No.41896727{4}[source]
Yes, people would, if we had a reliable metric for the safety of these systems besides engaged/disengaged mileage. We don't, and "10x safer" by the current metrics is not satisfactory.
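
One reason the engaged/disengaged split is weak: the system tends to be engaged on the easiest roads, so a naive crashes-per-mile comparison flatters it. A toy illustration (all numbers invented):

    # Selection bias in engaged-mileage safety stats, with made-up numbers.
    highway_miles, highway_crashes = 9_000, 1  # where autopilot is engaged
    city_miles, city_crashes = 1_000, 2        # where humans take over
    engaged_rate = highway_crashes / highway_miles
    overall_rate = (highway_crashes + city_crashes) / (highway_miles + city_miles)
    print(f"engaged-only: {engaged_rate:.5f} crashes/mile")  # 0.00011
    print(f"all driving:  {overall_rate:.5f} crashes/mile")  # 0.00030, 3x worse on paper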
22. V99 ◴[] No.41897516{3}[source]
Yes those are optional systems that exist, but they are unrelated to the autopilot (in at least the vast majority of avionics).

They are warning systems that humans respond to. For a TCAS RA the first thing you're doing is disengaging the autopilot.

If you tell the autopilot to fly straight into the path of a mountain, it will happily comply and kill you while the ground proximity warnings blare.

Humans make the decisions in planes. Autopilots are a useful but very basic tool, much more akin to cruise control in a 1998 Civic than a self-driving Tesla/Waymo/etc.

23. V99 ◴[] No.41897922{4}[source]
It does do that and it's pretty neat, if you have one of the very few modern turboprops or small jets that have G3000s & auto throttle to support it.

Airliners don't have this, but they have a 2nd pilot. A real-world activation needs a single-pilot operation where the pilot is incapacitated, in one of the maybe few hundred nice-but-not-too-nice private planes it's equipped in, with a passenger there to push it.

But this is all still largely using the current magenta-line AP system, and that's why it's verifiable and certifiable. There are still no cameras or vision or AI deciding things; it's a few relatively simple standalone steps combined to get a good result (sketched in code after the list below).

- Pick a new magenta line to an airport (like pressing NRST Enter Enter if you have filtering set to only suitable fields)

- Pick a vertical path that intersects with the runway (Load a straight-in visual approach from the database)

- Ensure that line doesn't hit anything in the terrain/obstacle database. (Terrain warning system has all this info, not sure how it changes the plan if there is a conflict. This is probably the hardest part, with an actual decision to make).

- Look up the tower frequency in DB and broadcast messages. As you said it's telling and not asking/listening.

- Other humans know to get out of the way because this IS what's going to happen. This is normal, an emergency aircraft gets whatever it wants.

- Standard AP and autothrottle flies the newly prescribed path.

- The radio altimeter lets it know when to flare.

- Wheel weight sensors let it know to apply the brakes.

- The airport helps people out and tows the plane away, because it doesn't know how to taxi.
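
Strung together, it reads more like a fixed checklist than open-ended decision-making. A sketch of the sequence (the step names are my paraphrase of the list above, not Garmin's API):

    # Safe Return as a strictly sequential checklist; no re-planning loop.
    SAFE_RETURN_STEPS = [
        "pick nearest suitable airport (filtered NRST selection)",
        "load straight-in visual approach from the database",
        "check the path against the terrain/obstacle database",
        "squawk emergency and broadcast intentions (telling, not asking)",
        "fly the new magenta line with standard AP + autothrottle",
        "flare on radio altimeter, brake on wheel-weight sensors",
    ]

    for step in SAFE_RETURN_STEPS:
        print("executing:", step)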

There's also "auto glide" on the more accessible G3x suite for planes that aren't necessarily $3m+. That will do most of the same stuff and get you almost, but not all the way, to the ground in front of a runway automatically.

replies(1): >>41899097 #
24. Animats ◴[] No.41899097{5}[source]
> and a passenger is there to push it.

I think it will also activate if the pilot is unconscious, for solo flights. It has something like a driver alertness detection system that will alarm if the pilot does nothing for too long. The pilot can reset the alarm, but if they do nothing, the auto return system takes over and lands the plane someplace.

25. gambiting ◴[] No.41899449{4}[source]
Ok, consider this for a second. You're a director of a hospital that owns a Therac radiotherapy machine for treating cancer. The machine is without any shadow of a doubt saving lives. People without access to it would die or have their prognosis worsen. Yet one day you get a report saying that the machine might sometimes, extremely rarely, accidentally deliver a lethal dose of radiation instead of the therapeutic one.

Do you decide to keep using the machine, or do you order it turned off until that defect can be fixed? Why yes or why not? Why does the same argument apply/not apply in the discussion about self driving cars?

(And in case you haven't heard about it - the Therac-25 radiotherapy machine fault was a real thing. It's used as a cautionary tale for software development, but I sometimes wonder if it should be used in philosophy classes too.)