314 points cjr | 22 comments
decimalenough ◴[] No.44536914[source]
> The aircraft achieved the maximum recorded airspeed of 180 Knots IAS at about 08:08:42 UTC and immediately thereafter, the Engine 1 and Engine 2 fuel cutoff switches transitioned from RUN to CUTOFF position one after another with a time gap of 01 sec. The Engine N1 and N2 began to decrease from their take-off values as the fuel supply to the engines was cut off.

So the fuel supply was cut off intentionally. The switches in question are also built so they cannot be flipped accidentally: they first need to be unlocked by pulling them out.

> In the cockpit voice recording, one of the pilots is heard asking the other why did he cutoff. The other pilot responded that he did not do so.

And both pilots deny doing it.

It's difficult to conclude anything other than murder-suicide.

replies(25): >>44536947 #>>44536950 #>>44536951 #>>44536962 #>>44536979 #>>44537027 #>>44537520 #>>44537554 #>>44538264 #>>44538281 #>>44538337 #>>44538692 #>>44538779 #>>44538814 #>>44538840 #>>44539178 #>>44539475 #>>44539507 #>>44539508 #>>44539530 #>>44539532 #>>44539749 #>>44539950 #>>44540178 #>>44541039 #
alephnerd ◴[] No.44536951[source]
> It's difficult to conclude anything other than murder-suicide.

Is it possible it was an accident or a mistake by one of the pilots? How well are the engine cutoffs guarded against unintentional activation?

replies(2): >>44537006 #>>44537365 #
1. xenadu02 ◴[] No.44537365[source]
It could be defective switch springs, a fatigue-induced muscle-memory error, or something else. The pilot who flipped the switches may genuinely not have realized he did it. Under high workload, when you flip the wrong switch or move a control the wrong way, it's quite common to believe you did what you intended rather than what you actually did.

That said, Boeing could take a page out of the Garmin GI275. When power is removed it pops up a "60s to shutdown" dialog that you can cancel. Even if you accidentally press SHUTDOWN, it only switches to a 10s countdown with a CANCEL button.

They could insert a delay whenever weight-on-wheels is off: the first engine can shut down when commanded, but the second engine goes on a 60s delay with an EICAS warning countdown. Or just always insert a delay unless the fire handle is pulled.
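A minimal sketch of that interlock, assuming hypothetical names (weight_on_wheels, fire_handle_pulled) and the 60s figure above; this is an illustration only, not Boeing's actual logic:

    # Decide how to honor a fuel-cutoff command. Inputs and the returned
    # action strings are illustrative assumptions, not real avionics APIs.
    def cutoff_action(weight_on_wheels: bool,
                      fire_handle_pulled: bool,
                      engines_running: int) -> str:
        if weight_on_wheels or fire_handle_pulled:
            return "cut fuel immediately"       # ground ops or confirmed fire
        if engines_running <= 1:
            return "start 60s EICAS countdown"  # last running engine: delay
        return "cut fuel immediately"           # first engine: honor at once

In this sketch, cutting the first of two running engines in flight is immediate, while the second cutoff a second later would fall into the countdown branch.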

Still... that has its own set of risks and failure modes to consider.

replies(4): >>44537836 #>>44538111 #>>44538204 #>>44541826 #
2. rogerrogerr ◴[] No.44537836[source]
Delay is probably worse - now you're further disassociating the effect of the action from the action itself, breaking the usual rule: if you change something, and don't like the effect, change it back.
replies(1): >>44539803 #
3. aerospace83 ◴[] No.44538111[source]
Armchair safety/human factors engineering, gotta love HN.
replies(2): >>44538342 #>>44539173 #
4. pixl97 ◴[] No.44538204[source]
When your engine catches fire or blows apart on takeoff, you want to cut fuel as fast as possible.
replies(3): >>44538267 #>>44538687 #>>44538730 #
5. OneMorePerson ◴[] No.44538267[source]
Was thinking this same thing. A minute feels like a long time to us (per the Garmin example upthread), but in a decent number of airplane accidents only a couple of minutes pass between everything being fine and the crash. Building an insulation layer between the machine and the experts who are supposed to be flying it only makes things less safe by reducing control.
6. zahlman ◴[] No.44538342[source]
This is a place that puts "Hacker" in the name despite the stigma in the mainstream. Given the intended meaning of the term, I would naturally expect this to be a place where people can speculate and reason from first principles, on the information available to them, in search of some kind of insight, without being shamed for it.

You don't have to like that culture, and you don't have to participate in it either. Making a throwaway account to complain about it is not prosocial behaviour, however. If you know something to be wrong with someone else's reasoning, the expected response is to point out the flaw.

replies(3): >>44538912 #>>44538954 #>>44538958 #
7. p1mrx ◴[] No.44538687[source]
Proposed algorithm: if the flight computer thinks the engine looks "normal", blare an alarm for x seconds before cutting the fuel.

I wonder if there have been cases where a pilot had to cut fuel before the computer could detect anything abnormal? I do realize that defining "abnormal" is the hardest part of this algorithm.
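A rough sketch of that gate, assuming a hypothetical looks-normal predicate and an arbitrary value for x; defining that predicate is, as noted, the hard part:

    from dataclasses import dataclass

    ALARM_WINDOW_S = 5.0  # the "x seconds" above; the value is an assumption

    @dataclass
    class CutoffRequest:
        commanded_at: float        # time the switch moved to CUTOFF
        engine_looks_normal: bool  # the hard-to-define predicate

    # When does fuel actually stop flowing? The alarm window gives the crew
    # a chance to countermand an apparently unwarranted cutoff.
    def effective_at(req: CutoffRequest) -> float:
        if req.engine_looks_normal:
            return req.commanded_at + ALARM_WINDOW_S
        return req.commanded_at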

replies(3): >>44539457 #>>44539593 #>>44539663 #
8. SJC_Hacker ◴[] No.44538730[source]
If it's both engines, you're fucked anyway shortly after takeoff.

But I'm an advocate of KISS. At a certain point you have to trust that the pilot is not going to do something extremely stupid or suicidal. Making overly complex systems to protect pilots from themselves leads to even worse issues, such as the faulty MCAS software on the Boeing 737 MAX.

9. macintux ◴[] No.44538912{3}[source]
For me it's mainly about intent/unearned confidence.

If someone is speculating about how such a problem might be solved while not trying to conceal their lack of direct experience, I'm fine with it, but not everyone is.

If someone is accusing the designers of being idiots, with the fix "obvious" because reasons, well, yeah, that's unhelpful.

replies(1): >>44539691 #
10. sdgsdgssdg ◴[] No.44538954{3}[source]
(Different user here) Hacker News' "culture" is one of VC tech bros trying to identify monopolies to exploit, presumably so they can be buried with all their money when they die. There's less critical thinking here than you'd find in comments sections for major newspapers.
replies(1): >>44539565 #
11. aerospace83 ◴[] No.44538958{3}[source]
> That said Boeing could take a page out of the Garmin GI275

This is not "reasoning from first principles". In fact, I don't think there is any reasoning in the comment.

There is an implication that an obvious solution exists, and then a brief description of said solution.

I am all for speculation and reasoning outside of one's domain, but not low quality commentary like "ugh can't you just do what garmin did".

This is not a throwaway account; I'm a lurker who felt compelled to comment. IMHO HN is not the place for "throwaway" ad hominems.

replies(1): >>44540314 #
12. mitthrowaway2 ◴[] No.44539173[source]
Yeah, people shouldn't bat ideas around and read replies from other people about why those ideas wouldn't work. Somebody might learn something, and that would be bad.
13. lxgr ◴[] No.44539457{3}[source]
If the computer could tell perfectly whether the engine “looks normal” or not, there wouldn’t be any need for a switch. If it can’t, the switch most likely needs to work without delay in at least some situations.

In safety-critical engineering, you generally either automate things fully (i.e. to exceed human capabilities in all situations, not just most), or you keep them manual. Half-measures of automation kill people.

replies(2): >>44539682 #>>44541579 #
14. dale_huevo ◴[] No.44539565{4}[source]
If Boeing only had the foresight to hire an army of HN webshitters to design the cockpit, this disaster could have been averted.

All the controls would be on a giant touchscreen, with the fuel switches behind a hamburger button (that responded poorly and erratically to touch gestures). Even a suicidal pilot wouldn't be able to activate it.

15. OneMorePerson ◴[] No.44539593{3}[source]
The incident with Sully landing in the Hudson is an interesting one here. They had a dual bird strike and both engines were totally destroyed, with no thrust at all, but it came out later in the hearing that the computer data showed one engine still producing thrust because of a faulty sensor. That type of sensor input can't really be trusted in a true emergency or edge case, especially if a sensor malfunctions while an engine is on fire.

As a software engineer I find it interesting that we treat software as the true solution when we wouldn't accept that solution ourselves. A company typically has code reviews and a gated release process, but also an exception process for quickly committing code or making adjustments when there's an outage. Could you imagine if the system said "hey, we aren't detecting an outage, are you sure? Why don't you go take a walk and get a coffee, and if you still think there's an outage 15 minutes from now we'll let you make that critical change"?

16. michaelmrose ◴[] No.44539663{3}[source]
    if engine_status == NORMAL and time_since(last_activation) > THRESHOLD:
        warn_then_shut_off()
    else:
        shut_off_immediately()

Toggling the switch again overrides the warning time.

17. michaelmrose ◴[] No.44539682{4}[source]
If the warning period is short enough, is it always beneficial, or are 2-3 seconds of additional fuel during an undetected fire more dangerous?
18. michaelmrose ◴[] No.44539691{4}[source]
I don't think most commenters believe they know better; it's frankly fun to speculate, and this is a casual space rather than one of the serious bodies tasked with actually chewing over this problem in earnest.
19. Yokolos ◴[] No.44539803[source]
This makes me wonder: is there no audible alarm when the fuel is set to CUTOFF?
20. Mawr ◴[] No.44540314{4}[source]
> This is not "reasoning from first principles".

It literally is. Accidental/malicious activation can be catastrophic, therefore it must be guarded against. First principles.

The shutoff timer screen given as an example is a valid way of accomplishing it. Not directly applicable to aircraft, but that's not the point.

> "ugh can't you just do what garmin did"

That's your dishonest interpretation of a post that offers reasonable, relevant suggestions. Don't tell me I need to start quoting that post to prove so. It's right there.

21. 7952 ◴[] No.44541579{4}[source]
But humans can't tell perfectly either, and they would be responding to much of the same data that the automation would.

I wonder if they could have buttons that are about the situation rather than the technical action: a fire-response button, say, or a shut-down-on-the-ground button (a toy sketch follows below).

But it does seem like half measure automation could be a contributing factor in a lot of crashes. Reverting to a pilot in a stressful situation is a risk, as is placing too much faith in individual sensors. And in a sense this problem applies to planes internally or to the whole air traffic system. It is a mess of expiring data being consumed and produced by a mix of humans and machines. Maybe the missing part is good statistical modelling of that. If systems can make better predictions they can be more cautious in response.
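A toy sketch of the situation-button idea from above, with entirely made-up situations and action lists:

    # Map a crew-selected situation to the technical actions it implies.
    # Situations and actions here are illustrative assumptions only.
    SITUATION_ACTIONS: dict[str, list[str]] = {
        "ENGINE FIRE":     ["arm extinguisher", "cut fuel to affected engine"],
        "GROUND SHUTDOWN": ["cut fuel to all engines", "power down systems"],
    }

    def actions_for(situation: str) -> list[str]:
        # Unknown situation: no preset, revert to manual controls.
        return SITUATION_ACTIONS.get(situation, ["revert to manual controls"])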

22. yard2010 ◴[] No.44541826[source]
I do this all the time when rebasing commits or force-pushing to my branch. Sometimes I just click the wrong buttons and end up staying late to clean up the mess. It's a good thing I'm not a pilot; I'd be dead by now.