
294 points by cjr | 9 comments
decimalenough ◴[] No.44536914[source]
> The aircraft achieved the maximum recorded airspeed of 180 Knots IAS at about 08:08:42 UTC and immediately thereafter, the Engine 1 and Engine 2 fuel cutoff switches transitioned from RUN to CUTOFF position one after another with a time gap of 01 sec. The Engine N1 and N2 began to decrease from their take-off values as the fuel supply to the engines was cut off.

So the fuel supply was cut off intentionally. The switches in question are also built so they cannot be triggered accidentally; they need to be unlocked first by pulling them out.

> In the cockpit voice recording, one of the pilots is heard asking the other why did he cutoff. The other pilot responded that he did not do so.

And both pilots deny doing it.

It's difficult to conclude anything other than murder-suicide.

replies(25): >>44536947 #>>44536950 #>>44536951 #>>44536962 #>>44536979 #>>44537027 #>>44537520 #>>44537554 #>>44538264 #>>44538281 #>>44538337 #>>44538692 #>>44538779 #>>44538814 #>>44538840 #>>44539178 #>>44539475 #>>44539507 #>>44539508 #>>44539530 #>>44539532 #>>44539749 #>>44539950 #>>44540178 #>>44541039 #
alephnerd ◴[] No.44536951[source]
> It's difficult to conclude anything other than murder-suicide.

Is it possible it could have been an accident or a mistake by one of the pilots? How well are the engine cutoff switches protected against unintentional activation?

replies(2): >>44537006 #>>44537365 #
xenadu02 ◴[] No.44537365[source]
It could be defective switch springs, a fatigue-induced muscle-memory error, or something else. The pilot who did it may not have realized what he did when he said he didn't. Under high workload, it's pretty common to flip the wrong switch or move a control the wrong way and believe you did what you intended, not what you actually did.

That said, Boeing could take a page out of the Garmin GI275's book. When power is removed, it pops up a "60s to shutdown" dialog that you can cancel. Even if you accidentally press SHUTDOWN, it only switches to a 10s countdown with a CANCEL button.
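As a rough illustration, that two-stage countdown could look something like the sketch below. Everything here is a hypothetical stand-in (poll_ui, the timings, the names), not Garmin's actual logic:

    import time

    def poll_ui():
        """Stub for button input; replace with real event handling."""
        return None  # could return "CANCEL" or "SHUTDOWN"

    def countdown_shutdown(initial_s=60, confirm_s=10):
        """Two-stage shutdown countdown in the spirit of the GI275
        behavior described above (hypothetical sketch)."""
        deadline = time.monotonic() + initial_s
        while time.monotonic() < deadline:
            event = poll_ui()
            if event == "CANCEL":
                return False  # user cancelled; stay powered on
            if event == "SHUTDOWN":
                # an explicit request only shortens the countdown
                deadline = min(deadline, time.monotonic() + confirm_s)
            time.sleep(0.1)
        return True  # countdown expired; shut down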

They could insert a delay when weight-on-wheels is off: the first engine shuts down immediately when commanded, but the second goes on a 60s delay with an EICAS warning countdown. Or just always insert a delay unless the fire handle is pulled.
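A toy sketch of that gating rule; the inputs and the 60s figure are just placeholders from the description above, not real FADEC/EICAS logic:

    def cutoff_delay_s(weight_on_wheels, fire_handle_pulled, running_engines):
        """Seconds to delay a commanded fuel cutoff, per the scheme above.
        Hypothetical sketch, not any real aircraft system's logic."""
        if fire_handle_pulled:
            return 0   # fire handle always cuts fuel immediately
        if weight_on_wheels:
            return 0   # on the ground: no delay
        if running_engines > 1:
            return 0   # first engine may shut down at once
        return 60      # last running engine in flight: delayed, with EICAS countdown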

Still... that has its own set of risks and failure modes to consider.

replies(4): >>44537836 #>>44538111 #>>44538204 #>>44541826 #
1. pixl97 ◴[] No.44538204[source]
When your engine catches fire or blows apart on takeoff, you want to cut fuel as fast as possible.
replies(3): >>44538267 #>>44538687 #>>44538730 #
2. OneMorePerson ◴[] No.44538267[source]
Was thinking this same thing. A minute feels like a long time to us (as in the Garmin example above), but a decent number of airplane accidents take only a couple of minutes end to end, from everything being fine to the crash. Building an insulation layer between the machine and the experts who are supposed to be flying it only makes it less safe by reducing control.
3. p1mrx ◴[] No.44538687[source]
Proposed algorithm: If the flight computer thinks the engine looks "normal", then blare an alarm for x seconds before cutting the fuel.

I wonder if there have been cases where a pilot had to cut fuel before the computer could detect anything abnormal? I do realize that defining "abnormal" is the hardest part of this algorithm.
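For illustration, a toy version of that "looks normal" predicate; every parameter and threshold here is made up, and real engine-health logic would be far more involved:

    def engine_looks_normal(fire_warning, n2_pct, egt_c, egt_limit_c=950):
        """Toy 'looks normal' check for the alarm-before-cutoff idea.
        Parameters and thresholds are illustrative, not real limits."""
        if fire_warning:
            return False   # fire detected: let the cutoff act immediately
        if n2_pct < 50:
            return False   # core spooling down / flamed out
        if egt_c > egt_limit_c:
            return False   # exhaust gas temperature over limit
        return True

    # If engine_looks_normal(...) holds, blare the alarm for x seconds
    # before honoring the cutoff; otherwise cut fuel at once.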

replies(3): >>44539457 #>>44539593 #>>44539663 #
4. SJC_Hacker ◴[] No.44538730[source]
If it's both engines shortly after takeoff, you're fucked anyway.

But I'm an advocate of KISS. At a certain point you have to trust that the pilot is not going to do something extremely stupid/suicidal. Making overly complex systems to try to protect pilots from themselves leads to even worse issues, such as the faulty MCAS software on the Boeing 737 MAX.

5. lxgr ◴[] No.44539457[source]
If the computer could tell perfectly whether the engine “looks normal” or not, there wouldn’t be any need for a switch. If it can’t, the switch most likely needs to work without delay in at least some situations.

In safety-critical engineering, you generally either automate things fully (i.e. to exceed human capabilities in all situations, not just most), or you keep them manual. Half-measures of automation kill people.

replies(2): >>44539682 #>>44541579 #
6. OneMorePerson ◴[] No.44539593[source]
The incident where Sully landed in the Hudson is an interesting one here. They had a dual bird strike and both engines were totally obliterated, with no thrust at all, but it came up later in the hearing that the computer data showed one engine still producing thrust because of a faulty sensor. So that type of sensor input can't really be trusted in a true emergency/edge case, especially if a sensor malfunctions while an engine is on fire or something.

As a software engineer myself, I find it interesting that we treat software as the true solution when we wouldn't accept that solution ourselves. Typically a company has code reviews and a gated release process, but also an exception process for quickly committing code or making changes when there's an outage. Could you imagine if the system said: "Hey, we aren't detecting an outage. Are you sure? Why don't you go take a walk and get a coffee; if you still think there's an outage 15 minutes from now, we'll let you make that critical change."

7. michaelmrose ◴[] No.44539663[source]
Roughly (Python-style sketch):

    def handle_cutoff(engine_normal, seconds_since_last_toggle, threshold_s):
        if engine_normal and seconds_since_last_toggle > threshold_s:
            warn_then_shut_off()      # brief warning before cutting fuel
        else:
            shut_off_immediately()    # abnormal engine or recent toggle: no delay

Override the warning period by toggling the switch again.

8. michaelmrose ◴[] No.44539682{3}[source]
If the warning period is short enough, is it possible it's always beneficial? Or is 2-3 seconds of additional fuel during an undetected fire more dangerous?
9. 7952 ◴[] No.44541579{3}[source]
But humans can't tell perfectly either and would be responding to much of the same data that automation would be.

I wonder if they could have buttons that are about the situation rather than the technical action: a "fire response" button, or a "shut down on the ground" button.
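A toy sketch of that idea, with situation-level buttons dispatching to bundles of technical actions (all names here are hypothetical):

    # Hypothetical situation-level buttons mapped to bundles of actions.
    ACTIONS = {
        "ENGINE_FIRE": ["cut_fuel", "fire_extinguisher_bottle", "alert_crew"],
        "GROUND_SHUTDOWN": ["check_weight_on_wheels", "cut_fuel", "power_down"],
    }

    def press(situation):
        for action in ACTIONS[situation]:
            print("executing:", action)  # stand-in for real system calls

    press("ENGINE_FIRE")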

But it does seem like half-measure automation could be a contributing factor in a lot of crashes. Handing control back to a pilot in a stressful situation is a risk, as is placing too much faith in individual sensors. And in a sense this problem applies both within the aircraft and to the whole air traffic system: it's a mess of expiring data being consumed and produced by a mix of humans and machines. Maybe the missing piece is good statistical modelling of that. If systems can make better predictions, they can be more cautious in their responses.