1. If the self-driving software detects an anomaly and chooses to disengage 60 seconds before the crash, so that the crash technically happens while not in self-driving mode, is that the fault of the software or of the human backup driver? This is especially a problem with Tesla, whose system will disengage and let the human take over.
2. When Waymo claims to have driven X million "rider only" miles, is that because the majority of those miles are highway miles that are easy to drive with cruise control? If only 1 mile of a trip covers the "hard parts" that require a human, getting in and out of tight city streets and parking lots, while 10 miles are on the highway, it is easy to rack up "rider only" miles even though over 90% of them are the easy ones (see the rough calculation after this list). Those trips are not representative of true end-to-end self-driving trips.
3. Selection bias. Waymo operates in only 3-4 cities and only in favorable weather conditions. It's easy to rack up impressive safety stats when you avoid places with harsh weather, poor signage, or complicated street patterns, but that's not representative of the real-world driving conditions most people encounter daily.
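To make the dilution in point 2 concrete, here is a rough back-of-the-envelope calculation in Python. Every number in it is made up purely for illustration; none of it is actual Waymo data.

```python
# Hypothetical numbers for illustration only -- not actual Waymo data.
highway_miles_per_trip = 10   # easy miles, cruise-control-grade driving
hard_miles_per_trip = 1       # tight city streets, parking lots

trips = 100_000
total_miles = trips * (highway_miles_per_trip + hard_miles_per_trip)
hard_miles = trips * hard_miles_per_trip

# Assume (again, purely hypothetically) that incidents happen almost
# entirely on the hard miles.
incidents_per_hard_mile = 1 / 50_000
incidents_per_highway_mile = 1 / 5_000_000

incidents = (hard_miles * incidents_per_hard_mile
             + trips * highway_miles_per_trip * incidents_per_highway_mile)

print(f"Share of miles that are 'hard': {hard_miles / total_miles:.1%}")
print(f"Headline rate:  1 incident per {total_miles / incidents:,.0f} miles")
print(f"Hard-mile rate: 1 incident per {1 / incidents_per_hard_mile:,.0f} miles")
```

With this trip mix the headline per-mile rate looks roughly ten times better than the rate on the difficult miles, simply because the easy highway miles dominate the denominator.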
The NTSB should force them to release all of the raw data so we can do our own analysis. I would compare only trips that were fully self-driven end to end, on days with good weather, in the 3-4 cities where Waymo operates, and then see how much better they actually fare than human drivers under the same conditions.
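If such raw data were ever released, the comparison I have in mind might look something like the sketch below. The file names, column names, weather label, and city list are all assumptions about a dataset that does not exist publicly; they are only there to show the shape of the analysis.

```python
import pandas as pd

# Hypothetical schema: one row per trip. These columns are illustrative,
# not from any real Waymo or NTSB release.
trips = pd.read_csv("raw_trips.csv")

# Keep only trips that were fully autonomous end to end (no disengagement,
# no human takeover), in good weather, in the cities Waymo serves.
comparable = trips[
    (trips["disengagements"] == 0)
    & (~trips["human_takeover"])
    & (trips["weather"] == "clear")
    & (trips["city"].isin(["Phoenix", "San Francisco", "Los Angeles", "Austin"]))
]

av_rate = comparable["incidents"].sum() / comparable["miles"].sum()

# The baseline would have to come from human-driven trips filtered the
# same way (same cities, same weather, same trip types).
humans = pd.read_csv("human_baseline_trips.csv")
human_rate = humans["incidents"].sum() / humans["miles"].sum()

print(f"AV incidents per million miles:    {av_rate * 1e6:.2f}")
print(f"Human incidents per million miles: {human_rate * 1e6:.2f}")
```

The point of filtering both sides identically is that the comparison only means something when the autonomous miles and the human baseline miles come from the same cities, weather, and trip types.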