The real reason I see for not running freeways until now is that the physical operational domain for street-level autonomous operations was not large enough to make validating highway driving to their current standard worthwhile.
Maybe my memory is failing me, but I seem to remember people here on HN saying the exact opposite when Tesla first announced/showed off their "self-driving but not really self-driving" features: that it would be very easy to get working on highways, and everything else would be the tricky stuff.
On highways the kinetic energy is much greater (Waymo's reaction time is superhuman, but the car can't brake any harder), and there isn't the option to fail safe (stop in place) like there is on normal roads.
The emergency braking system gives you a lot of room for error in the rest of the system.
Once you're going faster than 35 mph this approach no longer works. You have lots of objects on the pavement that are false positives for the emergency braking system, so you have to turn it off.
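The physics behind both points above can be sketched with a few lines of Python. This is a back-of-the-envelope illustration, not anyone's published spec: the 0.8 g deceleration figure is an assumed round number for good tires on dry pavement.

```python
def braking_distance_m(speed_mph: float, decel_g: float = 0.8) -> float:
    """Idealized stopping distance in meters at constant deceleration.

    decel_g ~ 0.8 g is an assumed figure for dry pavement, used only
    to illustrate scaling; real-world values vary widely.
    """
    v = speed_mph * 0.44704        # mph -> m/s
    a = decel_g * 9.81             # deceleration in m/s^2
    return v ** 2 / (2 * a)        # from v^2 = 2*a*d


def kinetic_energy_ratio(v1_mph: float, v2_mph: float) -> float:
    """Kinetic energy scales with v^2, independent of mass."""
    return (v2_mph / v1_mph) ** 2
```

At 35 mph the idealized stopping distance is about 16 m; at 65 mph it is about 54 m, and the car carries roughly 3.4x the kinetic energy. Same brakes, same reaction time, much worse outcome when something is missed.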
I think anyone back then would be totally shocked that urban and suburban driving launched to the public before freeway driving.
Really it’s a common difficulty with utilitarianism. Tesla says “we will kill a small number of people with our self driving beta, but it is impossible to develop a self driving car without killing a few people in accidents, because cars crash, and overall the program will save a much larger number of lives than the number lost.”
And then it comes out that the true statement is “it is slightly more expensive to develop a self driving car without killing a few people in accidents,” and the moral calculus tilts a bit.
construction workers, delivery vehicles, traffic cones.. nothing unreasonable for it to approach with caution, brake for, and move around.
the waymo usually gets about 2 feet away from a utility truck and then sits there confused for a while before it goes away.
it usually gets very close to these hazards before making that maneuver.
it seems like having a flashing utility strobe really messes with it and it gets extra cautious and weird around those. now, it should be respectful of emergency lights but-
i would see a problem here if it decided to do this on a freeway, five feet away from a pulled-over cop or someone changing a tire.
it sure does spazz out and sit there for a long time over the emergency lights before it decides what to do
i really wish there was a third party box we could wire into strobes (or the hazard light circuit) that would universally tell an autonomous car “hey i'm over here somewhere you may not be expecting me, signaling for attention.”
https://enewspaper.latimes.com/infinity/article_share.aspx?g...
So then they pivoted to full-time automation with a safe stop for exceptions. That's not a useful starting point for highway driving. There are some freeway-routed mass transit lines, but for the most part people don't want to be picked up and dropped off at the freeway. Along many stretches of freeway there's no good place to stop and wait for assistance, and automated driving will need more assistance than normal driving. So it made a lot of sense to reduce scope to surface street driving.
I am empathetic to the disappointment of older vehicle owners who have been promised this capability for years and still don't see it (because their hardware just can't -- and the hardware upgrade isn't coming either).
That said, the new Y with 14.1.x really does do as claimed.
The real public isn't an internet comment section. Having your PR people spew statements about "well, other people have an obligation to use safe following distances" is unlikely to get you off the hook.
- it's easier to get to human levels of safety on freeways than on streets
- it's much harder to get to an order of magnitude better than humans on freeways than it is on streets
Freeways are significantly safer than streets when humans are driving, so "as good as humans" may be acceptable there.
One thing that's hard about highways is that vehicles move faster: in a tenth of a second at 65 mph, a car has moved 9.5 feet. So if, say, a big rock fell off a truck onto the highway, detecting it early enough to proactively brake or change lanes demands a lot from the sensors (e.g. how many pixels/LIDAR returns do you get at 300+ feet on an object smaller than a car, and how many do you need to classify it as an obstruction?).
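The sensor-resolution point can be made concrete with a rough calculation. The 0.1° azimuth resolution below is an assumed figure for illustration, not a published Waymo spec; real sensors differ.

```python
import math

def lidar_returns_across(object_width_m: float, range_m: float,
                         az_res_deg: float = 0.1) -> float:
    """Approximate number of lidar beams that land across an object's
    width, for a scanner with the given azimuth resolution.

    az_res_deg = 0.1 degrees is an assumed value for illustration.
    """
    # Arc length between adjacent beams at this range.
    beam_spacing_m = range_m * math.radians(az_res_deg)
    return object_width_m / beam_spacing_m
```

A 0.3 m (roughly 1 ft) rock at 300 ft (~91 m) subtends only about two beams per scan line with this resolution, so a single frame gives very little signal to classify from, which is why long-range detection of small debris is hard.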
But those also happen quite infrequently, so a vehicle that doesn't handle road debris (or deer or rare obstructions) can work with supervision and appear to work autonomously, but one that's fully autonomous can't skip those scenarios.
It works brilliantly, 99.5% of the time. The issue is that the failure mode is catastrophic. Like getting confused by the lane markings and driving off the shoulder. And the complete inability to read construction zone signs (blasting through a 50 km/h zone at 100 km/h).
I'm deeply skeptical that the current sensor suite and hardware is going to have enough compute power to safely drive without supervision.
It will no doubt improve, but until Tesla steps up and assumes liability for any accident, it's just not "full self driving".
I don't really know the answers for sure here, but there's probably a gray area where humans struggle more than the Waymo.
That sounds outrageous if true. Very strange to acknowledge you don't actually have any specific knowledge about something and then make a grand claim anyway, not just confidently, but explicitly labeled as such.
They've been publishing some material on latency (https://waymo.com/search?q=latency), though I'm not finding any concrete numbers. I'd be very surprised if it was higher than the reaction time of a human, which seems to be around 400-600ms typically.
You're quite wrong. It tends to be more like 100–200 ms, which is generally significantly faster than a human's reaction.
People have lots of fears about self-driving cars, but their reaction time shouldn't be on the list.
Unlike humans, they can also sense what's behind the car and other spots not directly visible to a human. They can also measure distance very precisely thanks to lidar (and perhaps radar too?).
A human reacts to the brake lights when a car ahead brakes; without them it would take you much longer, relying on stereo vision, to realize that the car ahead was getting closer.
And I'm pretty sure that when the car detects certain obstacles fast approaching at certain distances, or a car ahead stopping suddenly, or a deer jumping out or whatever, it brakes directly. It doesn't need neural networks to process those; they're probably low-level failsafes that are very fast to compute and definitely faster than what a human could react to.
It was a common but bad hypothesis.
"If you had asked me in 2018, when I first started working in the AV industry, I would’ve bet that driverless trucks would be the first vehicle type to achieve a million-mile driverless deployment. Aurora even pivoted their entire company to trucking in 2020, believing it to be easier than city driving.
...
Stopping in lane becomes much more dangerous with the possibility of a rear-end collision at high speed. All stopping should be planned well in advance, ideally exiting at the next ramp, or at least driving to the closest shoulder with enough room to park.
This greatly increases the scope of edge cases that need to be handled autonomously and at freeway speeds.
...
The features that make freeways simpler — controlled access, no intersections, one-way traffic — also make ‘interesting’ events more rare. This is a double-edged sword. While the simpler environment reduces the number of software features to be developed, it also increases the iteration time and cost.
During development, ‘interesting’ events are needed to train data-hungry ML models. For validation, each new software version to be qualified for driverless operation needs to encounter a minimum number of ‘interesting’ events before comparisons to a human safety level can have statistical significance. Overall, iteration becomes more expensive when it takes more vehicle-hours to collect each event.”
https://kevinchen.co/blog/autonomous-trucking-harder-than-ri...
But yes, I'm sure any day now.
400-500ms is a fairly normal baseline for AV systems in my experience.
Indeed, my previously stated number was taken from here: https://news.mit.edu/2019/how-fast-humans-react-car-hazards-...
> MIT researchers have found an answer in a new study that shows humans need about 390 to 600 milliseconds to detect and react to road hazards, given only a single glance at the road — with younger drivers detecting hazards nearly twice as fast as older drivers.
But it'll be highly variable not just between individuals but state of mind, attentiveness and a whole lot of other things.
Of course the above needs about 6 times as many lanes as any city has. When you realize those massive freeways in Houston are what Des Moines needs you start to see how badly cars scale in cities.
Wait, so basically, "I don't know anything about this subject, but I'm confident regardless"?
Note, in July of this year, Musk predicted robotaxi service for half the country by the end of 2025. It's November now and they haven't even removed the safety monitors, in any city!
FSD 18 is out, 17 is garbage for babies, 18 is amazing! Wait, 19 just released, why are you still talking about 18, that shit was never gonna work, it's 19 that's nearly at unsupervised driving! Wait a second, 20 just came out...