410 points by jjulius | 68 comments
1. rootusrootus ◴[] No.41892630[source]
I'm on my second free FSD trial; it just started for me today. Gave it another shot, and it seems largely similar to the last free trial they gave. Fun party trick, surprisingly good, right up until it's not. A hallmark of AI everywhere is how great it is, and just how abruptly and catastrophically it occasionally fails.

Please, if you're going to try it, keep both hands on the wheel and your foot ready for the brake. When it goes off the rails, it usually does so in surprising ways with little warning and little time to correct. And since it's so good much of the time, you can get lulled into complacence.

I never really understand the comments from people who think it's the greatest thing ever and makes their drive less stressful. Does the opposite for me. Entertaining but exhausting to supervise.

replies(5): >>41894715 #>>41896317 #>>41896773 #>>41898129 #>>41898671 #
2. darknavi ◴[] No.41894715[source]
You slowly build a relationship with it and understand where it will fail.

I drive my 20-30 minute commutes largely with FSD, as well as our 8-10 hour road trips. It works great, but 100% needs to be supervised and is basically just nicer cruise control.

replies(4): >>41895075 #>>41895464 #>>41895891 #>>41895943 #
3. coffeefirst ◴[] No.41895075[source]
This feels like the most dangerous possible combination (not for you, just to have on the road in large numbers).

Good enough that the average user will stop paying attention, but not actually good enough to be left alone.

And when the machine goes to do something lethally dumb, you have 5 seconds to notice and intervene.

replies(2): >>41895427 #>>41895956 #
4. jvolkman ◴[] No.41895427{3}[source]
This is what Waymo realized a decade ago and what helped define their rollout strategy: https://youtu.be/tiwVMrTLUWg?t=247&si=Twi_fQJC7whg3Oey
replies(1): >>41895700 #
5. lolinder ◴[] No.41895464[source]
When an update comes out, does that relationship get reset (does it start failing on things that used to work), or has it been a uniform upward march?

I'm thinking of how every SaaS product I ever have to use regularly breaks my workflow to make 'improvements'.

replies(2): >>41895741 #>>41895742 #
6. nh2 ◴[] No.41895700{4}[source]
This video is great.

It looks like Waymo really understood the problem.

It explains concisely why it's a bad idea to roll out incremental progress, how difficult the problem really is, and why you should really throw all the sensors you can at it.

I also appreciate the "we don't know when it's going to be ready" attitude. It shows they have a better understanding of what their task actually is than anybody who claims "next year" every year.

replies(3): >>41895788 #>>41896208 #>>41904273 #
7. xur17 ◴[] No.41895741{3}[source]
For me it does, but only somewhat. I'm much more cautious / aware for the first few drives while I figure it out again.

I also feel like it takes a bit (5-10 minutes of driving) for it to recalibrate after an update, and it's slightly worse than usual at the very beginning. I know they have to calibrate the cameras to the car, so it might be related to that, or it could just be me getting used to its quirks.

8. bdndndndbve ◴[] No.41895742{3}[source]
I wouldn't take OP's word for it if they really believe they know how it's going to react in every situation. Studies have shown that drivers grossly overestimate their own ability to pay attention.
9. yborg ◴[] No.41895788{5}[source]
You don't get a $700B market cap by telling investors "We don't know."
replies(2): >>41895903 #>>41896777 #
10. eschneider ◴[] No.41895891[source]
"You slowly build a relationship with it and understand where it will fail."

I spent over a decade working on production computer vision products. You think you can do this, and for some percentage of failures you can. The thing is, there will ALWAYS be some percentage of failure cases where you really can't perceive anything different from a success case.

If you want to trust your life to that, fine, but I certainly wouldn't.

replies(2): >>41896009 #>>41898583 #
11. rvnx ◴[] No.41895903{6}[source]
Ironically, robotaxis from Waymo actually work really well. It's a truly unsupervised system, very safe, used in production, where the manufacturer takes full responsibility.

So the gradual rollout strategy is actually great.

Tesla wants to do "all or nothing", and ends up with nothing for now (see Europe, where FSD has been sold since 2016 but is "pending regulatory approval", when actually the problem is that the tech isn't finished yet, sadly).

It's genuinely a difficult problem to solve, so it's better to do it step-by-step than a "big-bang deploy".

replies(2): >>41896634 #>>41897819 #
12. sumodm ◴[] No.41895943[source]
Something along these lines is the real danger. People will understand common failure modes and assume they have understood its behavior for most scenarios. Unlike common deterministic and even some probabilistic systems, where behavior boundaries are well behaved, there could be discontinuities in 'rarer' seen parts of the boundary. And these 'rarer' parts need not be obvious to us humans, since a few pixel changes might cause wrinkles (see the toy sketch below).

*vocabulary use is for a broad stroke explanation.
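
A toy sketch of that discontinuity, assuming a made-up linear model (nothing to do with any real FSD stack): changing just 5 of 4096 "pixels" flips the decision.

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=4096)      # hypothetical learned weights, one per pixel
    img = rng.normal(size=4096)    # hypothetical 64x64 input, flattened

    score = float(w @ img)         # decision score; its sign is the "class"

    # Push the 5 highest-|weight| pixels just far enough against the
    # current decision to cross the boundary.
    idx = np.argsort(np.abs(w))[-5:]
    eps = abs(score) / np.abs(w[idx]).sum() * 1.01
    perturbed = img.copy()
    perturbed[idx] -= np.sign(score) * np.sign(w[idx]) * eps

    print(np.sign(score), np.sign(w @ perturbed))  # opposite signs

The boundary sits wherever the score crosses zero, and nothing about the input needs to look special to a human for that to happen.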

13. ricardobeat ◴[] No.41895956{3}[source]
Five seconds is a long time in driving. Usually you'll need to react in under 2 seconds in situations where it disengages, and those never happen while going straight.
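
For scale (65 mph is my assumption, not a figure from the thread):

    v = 65 * 0.44704   # 65 mph in m/s, about 29
    print(v * 2)       # ~58 m covered during a 2-second reaction window
    print(v * 5)       # ~145 m during 5 seconds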
replies(1): >>41896128 #
14. sandworm101 ◴[] No.41896009{3}[source]
Or until a software update quietly resets the relationship and introduces novel failure modes. There is little more dangerous on the road than false confidence.
replies(1): >>41902513 #
15. theptip ◴[] No.41896128{4}[source]
Not if you are reading your emails…
16. trompetenaccoun ◴[] No.41896208{5}[source]
All their sensors didn't prevent them from crashing into a stationary object. You'd think that would be the absolute easiest thing to avoid, especially with both radar and lidar on board. Accidents like that show that the training data and software will be much more important than the number of sensors.

https://techcrunch.com/2024/06/12/waymo-second-robotaxi-reca...

replies(1): >>41896467 #
17. tverbeure ◴[] No.41896317[source]
I just gave it another try after my last failed attempt. (https://tomverbeure.github.io/2024/05/20/Tesla-FSD-First-and...)

I still find it shockingly bad, especially in the way it reacts, or doesn't, to the way things change around the car (think of a car on the left in front of you that switches on its indicators to merge in front of you), or the way it makes the most random lane-changing decisions and changes its mind in the middle of that maneuver.

Those don’t count as disengagements, but they’re jarring and drivers around you will rightfully question your behavior.

And that's all over just a few miles of driving in an easy environment of interstate or highway.

I totally agree that it’s an impressive party trick, but it has no business being on the road.

My experience with Waymo in SF couldn’t have been more different.

replies(4): >>41896758 #>>41896795 #>>41901241 #>>41902586 #
18. rvnx ◴[] No.41896467{6}[source]
The issue was fixed. They're now handling 100,000 trips per week, and all seems to have gone well over the last 4 months; that's 1.5 million trips.
replies(2): >>41896970 #>>41896990 #
19. mattgreenrocks ◴[] No.41896634{7}[source]
Does Tesla take full responsibility for FSD incidents?

It seemed like most players in tech a few years ago were using legal shenanigans to dodge liability here, which, to me, indicates a lack of seriousness toward the safety implications.

replies(1): >>41900980 #
20. sokoloff ◴[] No.41896758[source]
> (think a car on the left in front of you who switches on indicators to merge in front of you)

That car is signaling an intention to merge into your lane once it is safe for them to do so. What does the Tesla do (or not do) in this case that's bad?

replies(3): >>41896861 #>>41897374 #>>41900474 #
21. 650REDHAIR ◴[] No.41896773[source]
This was my experience as well. It tried to drive us (me, my wife, and my FIL) into a tree on a gentle low speed uphill turn and I’ll never trust it again.
22. zbentley ◴[] No.41896777{6}[source]
Not sure how tongue-in-cheek that was, but I think your statement is the heart of the problem. Investment money chases confidence and moonshots rather than backing organizations that pitch a more pragmatic (read: asterisks and unknowns) approach.
23. y-c-o-m-b ◴[] No.41896795[source]
> it makes the most random lane-changing decisions and changes its mind in the middle of that maneuver.

This happened to me during my first month of trialing FSD last year and was a big contributing factor for me not subscribing. I did NOT appreciate the mess the vehicle made in this type of situation. If I saw another driver doing the same, I'd seriously question if they were intoxicated.

24. cma ◴[] No.41896861{3}[source]
Defensive driving means assuming they might not check their blind spot, etc., and generally easing off in this situation if they'd be merging in tight were they to begin merging now.
replies(1): >>41898363 #
25. trompetenaccoun ◴[] No.41896970{7}[source]
So they had a "better understanding" of the problem, as the other user put it, but their software was still flawed and needed fixing. That's my point. This happened two weeks ago btw: https://www.msn.com/en-in/autos/news/waymo-self-driving-car-...

I don't mean Waymo is bad or unsafe, it's pretty cool. My point is about true automation needing data and intelligence. A lot more data than we currently have, because the problem is in the "edge" cases, the kind of situation the software has never encountered. Waymo is in the lead for now but they have fewer cars on the road, which means less data.

26. jraby3 ◴[] No.41896990{7}[source]
Any idea how many accidents and how many fatalities? And how that compares to human drivers?
27. hotspot_one ◴[] No.41897374{3}[source]
> That car is signaling an intention to merge into your lane once it is safe for them to do so.

Only under the assumption that the driver was trained in the US, to follow US traffic law, and is following that training.

For example, in the EU, you switch on the indicators when you start the merge; the indicator shows that you ARE moving.

replies(4): >>41897546 #>>41898177 #>>41900948 #>>41902241 #
28. sokoloff ◴[] No.41897546{4}[source]
That seems odd to the point of uselessness, and does not match the required training I received in Germany from my work colleagues at Daimler prior to being able to sign out company cars.

https://www.gesetze-im-internet.de/stvo_2013/__9.html seems to be the relevant law in Germany, which Google translates to "(1) Anyone wishing to turn must announce this clearly and in good time; direction indicators must be used."

replies(3): >>41897705 #>>41899319 #>>41902478 #
29. nielsole ◴[] No.41897705{5}[source]
Merging into the lane is probably better addressed by §7, with the same content: https://dejure.org/gesetze/StVO/7.html
30. nh2 ◴[] No.41897819{7}[source]
> So the gradual rollout strategy is actually great.

I think you misunderstood, or it's a terminology problem.

Waymo's point in the video is that, in contrast to Tesla, they are _not_ doing a gradual rollout of seemingly-working-but-still-often-catastrophically-failing tech.

See e.g. minute 5:33 -> 6:06. They are stating that they are targeting directly the shown upper curve of safety, and that they are not aiming for the "good enough that the average user will stop paying attention, but not actually good enough to be left alone".

replies(1): >>41902542 #
31. jerb ◴[] No.41898129[source]
But it's clearly statistically much safer (https://www.tesla.com/VehicleSafetyReport): 7 million miles before an accident with FSD vs. 1 million when disengaged. I agree, I didn't like the feel of FSD either, but the numbers speak for themselves.
replies(1): >>41898183 #
32. Zanfa ◴[] No.41898177{4}[source]
> For example, in the EU, you switch on the indicators when you start the merge; the indicator shows that you ARE moving.

In my EU country it's theoretically at least 3 seconds before initiating the move.

replies(2): >>41902499 #>>41903911 #
33. bpfrh ◴[] No.41898183[source]
Tesla's numbers have biases in them which paint a wrong picture:

https://www.forbes.com/sites/bradtempleton/2023/04/26/tesla-...

They compare incomparable data (city miles vs. highway miles); Autopilot is also mostly used on highways, which is not where most accidents happen.
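
A toy illustration of that mix effect, with entirely made-up rates:

    # Suppose, hypothetically, highway driving sees 1 accident per 5M miles
    # and city driving 1 per 0.5M miles, for every driver alike.
    hw, city = 1 / 5e6, 1 / 0.5e6

    fsd_mix   = 0.9 * hw + 0.1 * city   # FSD engaged: mostly highway miles
    human_mix = 0.5 * hw + 0.5 * city   # disengaged: an even mix

    print(1 / fsd_mix)    # ~2.6M miles per accident
    print(1 / human_mix)  # ~0.9M miles per accident

Identical per-road safety, yet the mileage mix alone makes the first figure look roughly 3x better.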

34. tverbeure ◴[] No.41898363{4}[source]
That's the issue: I would immediately slow down a little bit to let the other one merge. FSD seems to notice something and eventually slows down, but the action is too subtle (if it happens at all) to signal to the other guy that you're letting them merge.
35. peutetre ◴[] No.41898583{3}[source]
Elon Musk is a technologist. He knows a lot about computers. The last thing Musk would do is trust a computer program:

https://www.nbcnews.com/tech/tech-news/musk-pushes-debunked-...

So I guess that's game over for full self-driving.

replies(2): >>41898739 #>>41900961 #
37. llamaimperative ◴[] No.41898739{4}[source]
Oooo maybe he'll get a similar treatment as Fox did versus Dominion.
38. throw4950sh06 ◴[] No.41899319{5}[source]
Maybe the guy was talking about the reality, not the theory. From my autobahn travels it seems like the Germans don't know how to turn on the blinkers.
replies(1): >>41899776 #
39. xattt ◴[] No.41899776{6}[source]
> … the Germans don't know how to turn on the blinkers.

[Insert nationality/regional area here] don’t know how to turn on the blinkers.

replies(1): >>41901455 #
40. tverbeure ◴[] No.41900474{3}[source]
What I expect it to do is to be a courteous driver, and back off a little bit to signal to the car in front that I got the message and that it's safe to merge.

FSD is already defensive to a fault, with frequent stop-and-go indecision about when to merge onto a highway, but that's a whole other story.

A major part of safe driving is about being predictable. You either commit and claim your right of way, or you don't. In this situation, both can be signaled easily to the other party by being a bit of a jerk (e.g. accelerating to close the gap and prevent somebody else from merging) or the opposite. Both are better than not doing anything at all and keeping the other dangling in a state of uncertainty.

FSD is in an almost permanent state of being indecisive and unpredictable. It behaves like a scared teenager with a learner's permit. Again, totally different from my experience with Waymo in the urban jungle of San Francisco, which is a defensive but confident driver.

41. valval ◴[] No.41900948{4}[source]
For anyone confused, this person’s statement about the EU is total bs.
replies(1): >>41902137 #
42. valval ◴[] No.41900961{4}[source]
Yeah, he's just dedicating his life to something that he knows won't even work. What are you on about?
replies(1): >>41901059 #
43. valval ◴[] No.41900980{8}[source]
What does that mean? Tesla’s system isn’t unsupervised, so why would they take responsibility?
replies(1): >>41902194 #
44. voganmother42 ◴[] No.41901059{5}[source]
Everyone else’s life seems to be completely irrelevant
replies(1): >>41907244 #
45. avar ◴[] No.41901241[source]
[flagged]
replies(2): >>41902069 #>>41902250 #
46. throw4950sh06 ◴[] No.41901455{7}[source]
I wouldn't say so. It's a very marked difference with a sharp change the moment I drive through the border.
replies(1): >>41902611 #
48. rcxdude ◴[] No.41902137{5}[source]
It's what I was taught: you switch on your indicators when you have checked that you are clear to merge and have effectively committed. I always assume that someone who has put their indicators on is going to move accordingly, whether it's clear or not.
replies(3): >>41902483 #>>41902639 #>>41903706 #
49. x3ro ◴[] No.41902194{9}[source]
I don't know, maybe because they call it "Full Self-Driving"? :)
replies(2): >>41902539 #>>41907111 #
51. vkou ◴[] No.41902250{3}[source]
There's degrees to being a shitty human being.

Using your platform and millions of followers to publicly shit on some random person who pissed you off is one degree of it.

Being a colossal hypocrite with your 'free speech' platform, or lying to your customers is something else.

Full mask-off throwing millions of dollars towards electing a convicted conman who is unabashedly corrupt, vindictive, nepotistic, already has a failed coup under his belt, and is running on a platform of punishing anyone who isn't a sycophant is... Also something else.

replies(1): >>41902455 #
52. whoitwas ◴[] No.41902455{4}[source]
I'm a bit more cynical and see his turn as a business move. He had the considerate market captured, so he went full wackjob to capture that market too.

Apparently, this doesn't reflect reality and he actually went crazy because one of his kids is trans. I have no idea because I don't know him.

53. johnisgood ◴[] No.41902478{5}[source]
I think the moral of the story is that cars may or may not turn their blinkers on. If they do, the self-driving system should catch that just as easily and expect the car to switch lanes (with extreme caution).
54. johnisgood ◴[] No.41902483{6}[source]
It is what I see in practice in Eastern Europe: they signal as they are shifting lanes. Even if they turn the blinker on and then start moving 1 second later, it could be considered the same thing, as 1 second is negligible.

Thus "the indicator shows that you ARE moving." is correct, at least in practice.

replies(1): >>41903662 #
55. johnisgood ◴[] No.41902499{5}[source]
As I mentioned in my other comment, 1 second is negligible; I would even dare to say that 3 seconds is, too. For a computer it should not be, however.
56. johnisgood ◴[] No.41902513{4}[source]
Exactly. You may learn its patterns, but a software update could fuck it all up in a zillion different ways.
58. espadrine ◴[] No.41902542{8}[source]
Terminology.

Since they targeted very low risk, they did a geographically-segmented rollout, starting with Phoenix, which is one of the easiest places to drive: a lot of photons for visibility, very little rain, wide roads.

59. friendzis ◴[] No.41902586[source]
> I still find it shockingly bad, especially in the way it reacts, or doesn’t, to the way things change around the car (think a car on the left in front of you who switches on indicators to merge in front of you) or the way it makes the most random lane changing decisions and changes it’s mind in the middle of that maneuver.

I have said it before, I will say it again. It seems that this software does not possess permanence, neither object nor decision.

60. xattt ◴[] No.41902611{8}[source]
I’m only saying this from my experience in Canada where every region thinks its drivers are the worst.
61. lbschenkel ◴[] No.41902639{6}[source]
I don't doubt that it's the way you have been taught, but it doesn't make any sense. The whole point of blinkers/indicator lights in cars is to signal your intentions before you act on them: if you're going to signal at the same time that you do the action you're signalling, you might as well not bother.
replies(1): >>41909517 #
62. Lanolderen ◴[] No.41903662{7}[source]
It's the difference between actually purposefully blinking and blinking to avoid a fine. In the latter case you just tap the blinker stalk as you're turning the wheel. If someone's trying to make a dangerous-ish turn (waiting for a line of cars to pass before an illegal U-turn, for example), they'll be blinking to signal intention most of the time.
63. Toorkit ◴[] No.41903706{6}[source]
I got my license in 2014, in Germany, and was taught to turn on the turn signal > check mirrors > turn your head to look over your shoulder and only then, when you're clear, do you merge.
64. tirant ◴[] No.41903911{5}[source]
In general, the requirement is the following:

a) Check for the possibility of the maneuver; b) signal the maneuver; c) perform the maneuver.

However the signaling needs to be done in a way that it helps other road users to read and act according to your maneuver, so 3 seconds seems to be a good amount of time for that.

There are, on the other hand, situations where signaling the maneuver is also desirable even though the maneuver might not be possible yet: merging into a full lane, so vehicles might free up some space to let you merge.

65. friendzis ◴[] No.41904273{5}[source]
> It looks like Waymo really understood the problem.

All they needed was one systems safety engineering student

66. valval ◴[] No.41907111{10}[source]
Doesn't really matter what they call it. The product name being descriptive of the current product or not is a different topic.

For what it's worth, I wouldn't care if they called it "Penis Enlarger 9000" if it drove me around like it now does.

67. valval ◴[] No.41907244{6}[source]
You and I might have a different view of reality. Teslas are among the safest vehicles on the road.
68. rcxdude ◴[] No.41909517{7}[source]
You signal in advance, but you check before you signal. Mirrors, signals, maneuver.