
Waymos crash less than human drivers

(www.understandingai.org)
345 points | rbanffy
labrador No.43487628
I was initially skeptical about self-driving cars, but I've been won over by Waymo's careful and thoughtful approach using visual cues, LiDAR, safety drivers, and geo-fencing. That said, I will never trust my life to a Tesla robotaxi that relies on visual cues alone and will drive into a wall painted to look like the road ahead, like Wile E. Coyote. Beep beep.

Man Tests If Tesla Autopilot Will Crash Into Wall Painted to Look Like Road https://futurism.com/tesla-wall-autopilot

bob1029 No.43490938
I started digging into this rabbit hole and found it fairly telling how much energy is being expended on social media over LiDAR vs. no LiDAR. Much of it feels like sock puppetry led by Tesla investors and their counterparties.

I see this whole thing as a business viability narrative wherein Tesla would be even further underwater if they were forced to admit that LiDAR may possess some degree of technical superiority and could provide a reliability and safety uplift. It must have taken millions of dollars in marketing budget to erase the customer experiences around the prior models of their cars that did have this technology and performed accordingly.

x187463 No.43492003
I use FSD every day and it has easily driven 98% of the miles on my Model 3. I would never let it drive unsupervised. I honestly have no idea how they think they're ready for robotaxis. FSD is an incredible driver-assistance system. It's actually a joy to use, but it's simply not capable of reliable unsupervised performance. A big reason: it struggles exactly where you'd expect a vision-only system to. It needs a more robust mechanism for building its world model.

A simple example. I was coming out of a business driveway, turning left onto a two lane road. It was dark out with no nearby street lights. There was a car approaching from the left. FSD could see that a car was coming. However, from the view of a camera, it was just a ball of light. There was no reasonable way the camera could discern the distance given the brightness of the headlights. I suspected this was the case and was prepared to intervene, but left FSD on to see how it would respond. Predictably, it attempted to pull out in front of the car and risked a collision.

That kind of thing simply cannot be allowed to happen with a truly autonomous vehicle and would never happen with LiDAR.
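To make that concrete, here's a rough sketch (hypothetical numbers and thresholds, nothing from any vendor's actual stack) of why a direct range measurement settles a night-time gap-acceptance decision that a saturated camera blob can't:

    def time_to_arrival_s(range_m: float, closing_speed_mps: float) -> float:
        """Seconds until the oncoming car reaches us, given a measured range."""
        if closing_speed_mps <= 0:
            return float("inf")  # car is not approaching
        return range_m / closing_speed_mps

    def safe_to_pull_out(range_m: float, closing_speed_mps: float,
                         required_gap_s: float = 6.0) -> bool:
        """Accept the gap only if the measured time-to-arrival exceeds a margin."""
        return time_to_arrival_s(range_m, closing_speed_mps) > required_gap_s

    # With LiDAR: two range returns 0.1 s apart give both distance and closing speed.
    r1, r2, dt = 62.0, 60.5, 0.1                 # metres, metres, seconds (made-up values)
    closing_speed = (r1 - r2) / dt               # 15 m/s, roughly 54 km/h
    print(safe_to_pull_out(r2, closing_speed))   # False: the car is only ~4 s away

    # Vision-only at night: the headlights saturate to a bright blob, so the range
    # estimate collapses to a wide interval and the same check becomes undecidable.
    range_lo, range_hi = 40.0, 250.0             # plausible bounds from a glare blob
    print(safe_to_pull_out(range_lo, closing_speed),   # False if it's near...
          safe_to_pull_out(range_hi, closing_speed))   # ...True if it's far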

Hell, just this morning on my way to work FSD was going to run a flashing red light. It's probably 95% accurate with flashing reds, but that needs to be 100%. That being said, my understanding is that the current model being trained has better temporal understanding, such that flashing lights will be more comprehensible to the system. We'll see.
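For what it's worth, the temporal point is easy to illustrate. A single frame can't distinguish a flashing red from a solid red that happens to be lit (or momentarily dark); a short history of detections can. A toy sketch, with made-up window sizes and thresholds:

    from collections import deque

    class SignalClassifier:
        """Classify a red signal from a rolling window of per-frame detections."""

        def __init__(self, window_frames: int = 60):   # ~2 s of video at 30 fps
            self.history = deque(maxlen=window_frames)

        def update(self, red_visible: bool) -> str:
            """Feed one frame's detection; return the current best guess."""
            self.history.append(red_visible)
            if len(self.history) < self.history.maxlen:
                return "unknown"                       # not enough history yet
            frames = list(self.history)
            on_fraction = sum(frames) / len(frames)
            blinks = sum(a != b for a, b in zip(frames, frames[1:]))
            if on_fraction > 0.9:
                return "solid red"                     # stop and wait
            if blinks >= 2 and 0.2 < on_fraction < 0.8:
                return "flashing red"                  # treat like a stop sign
            return "no red"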

labrador No.43495616
Your report matches many other real-world reports I've read. I'm pretty good at daydreaming or thinking while driving, so having to keep my hands ready to take over while staying completely alert to the possibility that FSD might err would be a big downgrade in my driving experience. I'd rather drive myself, where my subconscious muscle memory does the driving so my conscious mind can think about other things. Having to pay attention to what FSD was doing would be a drag and prevent me from relaxing.