> > “Autopilot appears to automatically disengage a fraction of a second before the impact as the crash becomes inevitable,”
> This is probably core to their legal strategy. No matter how much data the cars collect, they can always safely destroy most of it, because this allows them to pretend the autonomous driving systems weren't involved in the crash.
I didn't have a reply to this until a couple of days ago, when it hit me: it's a deliberate safety choice. They never concealed that AP/FSD disengages at the last second; it has always been stated that this is "because there's nothing else it can do". While I agree with that, the choice says more about the possible consequences if AP/FSD remained active.
What would happen if AP/FSD were not disengaged before the crash? Suppose the cameras/sensors, the self-driving computer, or the actuators were damaged to some degree in the collision: not enough to make some or all of them inoperable, but enough to provoke uncontrolled behavior after the crash. You could end up with a vehicle accelerating without control, steering at random, repeatedly ramming a wall, and so on, making it impossible for rescuers to reach you.
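To make the idea concrete, here is a minimal sketch of the general "latch off before impact" fail-safe pattern I'm describing. To be clear, this is not Tesla's implementation and I have no knowledge of their actual code; every name, threshold, and interface below is invented purely for illustration. The point is only that once a collision is judged unavoidable, the autonomy layer disengages and stays disengaged, so a sensor or computer damaged in the crash can't keep commanding the actuators afterwards.

```python
# Hypothetical illustration of a "latch off before impact" fail-safe.
# Names, thresholds, and interfaces are invented; this is not Tesla's code.

from dataclasses import dataclass

TIME_TO_COLLISION_CUTOFF_S = 0.5  # assumed threshold for "crash is unavoidable"


@dataclass
class ActuatorCommand:
    steering: float = 0.0   # normalized steering request
    throttle: float = 0.0   # normalized throttle request


class Actuators:
    """Stand-in for a drive-by-wire interface."""

    def apply(self, cmd: ActuatorCommand) -> None:
        print(f"commanding steering={cmd.steering:+.2f} throttle={cmd.throttle:.2f}")

    def release(self) -> None:
        # Stop sending autonomy commands, so a possibly damaged stack can no
        # longer accelerate or steer the car after the impact.
        print("autonomy released actuators")


class AutonomyController:
    def __init__(self) -> None:
        self.engaged = True  # latches to False once a crash becomes unavoidable

    def step(self, time_to_collision_s: float, actuators: Actuators) -> None:
        if self.engaged and time_to_collision_s < TIME_TO_COLLISION_CUTOFF_S:
            # Crash deemed unavoidable: disengage and latch off permanently,
            # so damaged sensors/compute can't resume driving post-impact.
            self.engaged = False

        if not self.engaged:
            actuators.release()
            return

        # Normal operation: placeholder lane-keeping / speed-holding command.
        actuators.apply(ActuatorCommand(steering=0.0, throttle=0.3))


if __name__ == "__main__":
    controller = AutonomyController()
    actuators = Actuators()
    for ttc in (3.0, 1.2, 0.4, 0.1):  # simulated time-to-collision readings
        controller.step(ttc, actuators)
```

The design-relevant detail is the latch: after disengagement the controller never re-arms itself, which is exactly the property you want if the hardware downstream of it may now be misbehaving.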
In an imminent, unavoidable collision, the last second of travel is basically watching the accident unfold, so if AP/FSD were at fault it would be apparent regardless of the disengagement.