Waymos crash less than human drivers

(www.understandingai.org)
345 points | rbanffy | 1 comment
labrador ◴[] No.43487628[source]
I was initially skeptical about self-driving cars, but I've been won over by Waymo's careful and thoughtful approach using visual cues, lidar, safety drivers, and geo-fencing. That said, I will never trust my life to a Tesla robotaxi that uses visual cues only and will drive into a wall painted to look like the road ahead, like Wile E. Coyote. Beep beep.

Man Tests If Tesla Autopilot Will Crash Into Wall Painted to Look Like Road https://futurism.com/tesla-wall-autopilot

replies(7): >>43487811 #>>43488043 #>>43490629 #>>43490938 #>>43490978 #>>43491005 #>>43511057 #
bob1029 ◴[] No.43490938[source]
I started digging into this rabbit hole and found it fairly telling how much energy is being expended on social media over LiDAR vs. no LiDAR. Much of it feels like sock puppetry led by Tesla investors and their counterparties.

I see this whole thing as a business-viability narrative wherein Tesla would be even further under water if they were forced to admit that LiDAR may possess some degree of technical superiority and could provide a reliability and safety uplift. It must have taken millions of dollars in marketing budget to erase the customer experiences around the prior models of their cars that did have this technology and performed accordingly.

replies(6): >>43491873 #>>43492003 #>>43492092 #>>43492471 #>>43492878 #>>43500829 #
x187463 ◴[] No.43492003[source]
I use FSD every day and it has driven easily 98% of the miles on my Model 3. I would never let it drive unsupervised. I honestly have no idea how they think they're ready for robotaxis. FSD is an incredible driver assistance system. It's actually a joy to use, but it's simply not capable of reliable unsupervised performance. A big reason: it struggles exactly where you'd expect a vision-only system to struggle. It needs a more robust mechanism for building its world model.

A simple example. I was coming out of a business driveway, turning left onto a two lane road. It was dark out with no nearby street lights. There was a car approaching from the left. FSD could see that a car was coming. However, from the view of a camera, it was just a ball of light. There was no reasonable way the camera could discern the distance given the brightness of the headlights. I suspected this was the case and was prepared to intervene, but left FSD on to see how it would respond. Predictably, it attempted to pull out in front of the car and risked a collision.

That kind of thing simply cannot be allowed to happen with a truly autonomous vehicle, and it would never happen with lidar.
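
As an illustrative aside (not part of the original comment): lidar sidesteps the ball-of-light problem because each return is a direct time-of-flight range measurement, not an inference from image brightness. A minimal sketch, with made-up numbers:

    # Range from a single lidar return: the pulse travels out and back,
    # so distance is half the round trip at the speed of light.
    SPEED_OF_LIGHT_M_S = 299_792_458

    def lidar_range_m(round_trip_time_s: float) -> float:
        return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2

    # A car ~60 m away returns the pulse in ~400 ns, no matter how bright
    # its headlights look to a camera sensor.
    print(f"{lidar_range_m(400e-9):.1f} m")  # -> 60.0 m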

Hell, just this morning on my way to work FSD was going to run a flashing red light. It's probably 95% accurate with flashing reds, but that needs to be 100%. That said, my understanding is that the model currently being trained has better temporal understanding, such that flashing lights will be more comprehensible to the system. We'll see.
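
Again as an illustrative aside (hypothetical, not a description of how FSD actually works): "temporal understanding" here just means reasoning over a window of frames instead of one. A single frame can't tell a flashing red from an unlit signal, but a short history of per-frame detections can:

    # Hypothetical sketch: classify a signal from a short window of
    # per-frame observations (True = red lamp lit in that frame).
    def classify_signal(frames: list[bool], min_transitions: int = 2) -> str:
        transitions = sum(a != b for a, b in zip(frames, frames[1:]))
        if transitions >= min_transitions:
            return "flashing"
        return "steady" if all(frames) else "off"

    # 13 frames at ~10 fps, lamp on-off-on: counts as flashing.
    print(classify_signal([True] * 5 + [False] * 5 + [True] * 3))  # -> flashing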

replies(2): >>43495616 #>>43497280 #
Nemi ◴[] No.43497280[source]
And you trust that you will ALWAYS have the awareness to intervene if and when FSD does something life-threatening? You are braver than I am.

I am willing to experiment in many ways with things in my life, but not WITH my life.

replies(1): >>43498510 #
nilkn ◴[] No.43498510[source]
I've used FSD a lot. Supervising it is a skill that you actively develop and can get very good at. Some argue that if you have to supervise it, there's no point, but I disagree. I still use it for much of my daily commute even though I have to supervise it and occasionally intervene; it's still a significant net positive for my driving experience overall. Using it well is legitimately a skill that improves with practice: above a certain threshold of skill it becomes a huge positive, but below that threshold it can be a negative.