410 points jjulius | 40 comments

bastawhiz ◴[] No.41889192[source]
Lots of people are asking how good the self driving has to be before we tolerate it. I got a one month free trial of FSD and turned it off after two weeks. Quite simply: it's dangerous.

- It failed with a cryptic system error while driving

- It started making a left turn far too early that would have scraped the left side of the car on a sign. I had to manually intervene.

- In my opinion, the default setting accelerates way too aggressively. I'd call myself a fairly aggressive driver and it is too aggressive for my taste.

- It tried to make way too many right turns on red when it wasn't safe to. It would creep into the road, almost into the path of oncoming vehicles.

- It didn't merge left to make room for vehicles merging onto the highway. The vehicles then tried to cut in. The system should have avoided an unsafe situation like this in the first place.

- It would switch lanes to go faster on the highway, but then missed an exit on at least one occasion because it couldn't make it back into the right lane in time. Stupid.

After the system error, I lost all trust in FSD from Tesla. Until I ride in one and feel safe, I can't have any faith that this is a reasonable system. Hell, even autopilot does dumb shit on a regular basis. I'm grateful to be getting a car from another manufacturer this year.

replies(24): >>41889213 #>>41889323 #>>41889348 #>>41889518 #>>41889642 #>>41890213 #>>41890238 #>>41890342 #>>41890380 #>>41890407 #>>41890729 #>>41890785 #>>41890801 #>>41891175 #>>41892569 #>>41894279 #>>41894644 #>>41894722 #>>41894770 #>>41894964 #>>41895150 #>>41895291 #>>41895301 #>>41902130 #
1. frabjoused ◴[] No.41889213[source]
The thing that doesn't make sense is the numbers. If it is dangerous in your anecdotes, why don't the reported numbers show more accidents when FSD is on?

When I did the trial on my Tesla, I also noted these kinds of things and felt like I had to take control.

But at the end of the day, only the numbers matter.

replies(13): >>41889230 #>>41889234 #>>41889239 #>>41889246 #>>41889251 #>>41889279 #>>41889339 #>>41890197 #>>41890367 #>>41890522 #>>41890713 #>>41894050 #>>41895142 #
2. akira2501 ◴[] No.41889230[source]
You can measure risks without having to witness disaster.
3. ForHackernews ◴[] No.41889234[source]
Maybe other human drivers are reacting quickly and avoiding potential accidents from dangerous computer driving? That would be ironic, but I'm sure it's possible in some situations.
4. jsight ◴[] No.41889239[source]
Because it is bad enough that people really do supervise it. I've seen people argue that wouldn't happen because drivers become complacent.

Maybe that could be a problem with future versions, but I don't see it happening with 12.3.x. I've also heard that driver attention monitoring is pretty good in the later versions, but I have no first-hand experience yet.

replies(1): >>41890002 #
5. bastawhiz ◴[] No.41889246[source]
Is Tesla required to report system failures or the vehicle damaging itself? How do we know they're not optimizing for the benchmark (what they're legally required to report)?
replies(2): >>41889358 #>>41890792 #
6. timabdulla ◴[] No.41889251[source]
> If it is dangerous in your anecdotes, why don't the reported numbers show more accidents when FSD is on?

Even if it is true that the data show that with FSD (not Autopilot) enabled, drivers are in fewer crashes, I would be worried about other confounding factors.

For instance, I would assume that drivers are more likely to engage FSD in situations of lower complexity (less traffic, little construction or other impediments, lower overall traffic-control complexity, etc.). I also believe that, at least initially, Tesla only released FSD to drivers with high safety scores relative to their total driver base, another obvious confounding factor.

Happy to be proven wrong though if you have a link to a recent study that goes through all of this.

replies(1): >>41890026 #
7. nkrisc ◴[] No.41889279[source]
What numbers? Who’s measuring? What are they measuring?
8. rvnx ◴[] No.41889339[source]
There is an easy way to know what is really behind the numbers: look at who pays in case of an accident.

You have a Mercedes, Mercedes takes responsibility.

You have a Tesla, you take responsibility.

Says a lot.

replies(3): >>41890160 #>>41890421 #>>41892425 #
9. rvnx ◴[] No.41889358[source]
If the question is “was FSD activated at the time of the accident: yes/no”, they can legally claim no, for example if FSD happens to disconnect half a second before a dangerous situation (e.g. glare obstructing the cameras), which may coincide exactly with the moments of some accidents.
replies(1): >>41892440 #
10. valval ◴[] No.41890002[source]
Very good point. The product that requires supervision and tells the user to keep their hands on the wheel every 10 seconds is not good enough to be used unsupervised.

I wonder how things are inside your head. Are you ignorant or affected by some strong bias?

replies(1): >>41892321 #
11. valval ◴[] No.41890026[source]
Either the system causes less loss of life than a human driver or it doesn’t. The confounding factors don’t matter, as Tesla hasn’t presented a study on the subject. That’s in the future, and all stats that are being gathered right now are just that.
replies(2): >>41890292 #>>41894083 #
12. tensor ◴[] No.41890160[source]
You have a Mercedes, and you have a system that works virtually nowhere.
replies(1): >>41890605 #
13. lawn ◴[] No.41890197[source]
> The thing that doesn't make sense is the numbers.

Oh? Who are presenting the numbers?

Is a crash that doesn't trigger the airbags even counted as a crash?

What about the car turning off FSD right before a crash?

How about adjusting for factors such as age of driver and the type of miles driven?

The numbers don't make sense because they're not good comparisons and are made to make Tesla look good.

14. unbrice ◴[] No.41890292{3}[source]
> Either the system causes less loss of life than a human driver or it doesn’t. The confounding factors don’t matter.

Confounding factors are what allow one to tell apart "the system causes less loss of life" from "the system causes more loss of life yet it is only enabled in situations where fewer lives are lost".
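
A toy illustration of that point, with entirely made-up crash rates (every number below is hypothetical, chosen only to show how the mix of driving conditions can flip a raw comparison):

    # Hypothetical crashes per million miles, split by driving condition.
    # In BOTH conditions the assisted system is worse than the human baseline,
    # yet because it is engaged almost exclusively on easy roads, its raw
    # (unstratified) rate comes out looking better.
    human = {"easy_highway": (2.0, 500_000), "complex_city": (8.0, 500_000)}
    fsd   = {"easy_highway": (2.5, 950_000), "complex_city": (9.0,  50_000)}

    def raw_rate(data):
        # Overall crashes per million miles, ignoring the condition mix.
        crashes = sum(rate * miles / 1e6 for rate, miles in data.values())
        miles = sum(m for _, m in data.values())
        return crashes / (miles / 1e6)

    print("human raw rate:", round(raw_rate(human), 2))  # ~5.0 crashes per million miles
    print("fsd raw rate:  ", round(raw_rate(fsd), 2))    # ~2.8, despite being worse in both strata

The raw comparison favours the assisted system even though it is worse in every stratum; that is exactly the kind of conclusion confounding can hide.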

15. gamblor956 ◴[] No.41890367[source]
The numbers collected by the NHTSA and insurance companies do show that FSD is dangerous... that's why the NHTSA started investigating, and it's why most insurance companies either won't insure Tesla vehicles or charge significantly higher rates.

Also, Tesla is known to disable self-driving features right before collisions to give the appearance of driver fault.

And the coup de grace: if Tesla's own data showed that FSD was actually safer, they'd be shouting it from the moon, using that data to get self-driving permits in CA, and offering to assume liability if FSD actually caused an accident (like Mercedes does with its self driving system).

16. sebzim4500 ◴[] No.41890421[source]
Mercedes had the insight that if no one is able to actually use the system then it can't cause any crashes.

Technically, that is the easiest way to get a perfect safety record and journalists will seemingly just go along with the charade.

17. throwaway562if1 ◴[] No.41890522[source]
AIUI the numbers are for accidents where FSD is in control. Which means if it does a turn into oncoming traffic and the driver yanks the wheel or slams the brakes 500ms before collision, it's not considered a crash during FSD.
replies(2): >>41890770 #>>41890811 #
18. therouwboat ◴[] No.41890605{3}[source]
Better that way than "Oh, it tried to run a red light, but otherwise it's great."
replies(1): >>41891047 #
19. johnneville ◴[] No.41890713[source]
Are there even transparent reported numbers available?

For whatever does exist, it is also easy to imagine how they could be misleading. For instance, I've disengaged FSD when I noticed I was about to be in an accident. If I couldn't recover in time, the accident would not have happened while FSD was on and, depending on the metric, would not be reported as an FSD-induced accident.

20. Uzza ◴[] No.41890770[source]
That is not correct. Tesla counts any accident within 5 seconds of Autopilot/FSD turning off as the system being involved. Regulators extend that period to 30 seconds, and Tesla must comply with that when reporting to them.
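
A sketch of what that attribution window amounts to (the 5-second and 30-second thresholds are the ones stated above; the function name and structure are purely illustrative):

    TESLA_WINDOW_S = 5       # Tesla's safety report: counts crashes with the system
                             # active within 5 seconds before impact
    REGULATOR_WINDOW_S = 30  # regulators' reporting window, per the above

    def counts_as_system_involved(seconds_since_deactivation, window_s):
        # True if the crash is attributed to the driver-assist system;
        # 0 means the system was still active at impact.
        return seconds_since_deactivation <= window_s

    # A disengagement 10 seconds before impact would fall outside Tesla's own
    # statistics but would still be reportable to the regulator.
    print(counts_as_system_involved(10, TESLA_WINDOW_S))      # False
    print(counts_as_system_involved(10, REGULATOR_WINDOW_S))  # True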
replies(1): >>41894128 #
21. Uzza ◴[] No.41890792[source]
All manufacturers have for some time been required by regulators to report any accident where an autonomous or partially autonomous system was active within 30 seconds of an accident.
replies(1): >>41892580 #
22. concordDance ◴[] No.41890811[source]
Several people in this thread have been saying this or similar. It's incorrect, from Tesla:

"To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact"

https://www.tesla.com/en_gb/VehicleSafetyReport

Situations which inevitably cause a crash more than 5 seconds later seem like they would be extremely rare.

replies(1): >>41894673 #
23. tensor ◴[] No.41891047{4}[source]
"Oh we tried to build it but no one bought it! So we gave up." - Mercedes before Tesla.

Perhaps FSD isn't ready for city streets yet, but it's great on the highways, and I'd 1000x prefer we make progress rather than settle for the status quo garbage that the legacy makers put out. Also, human drivers are by far the most dangerous; we need to make progress to eventually phase them out.

replies(1): >>41891619 #
24. meibo ◴[] No.41891619{5}[source]
2-ton blocks of metal going 80mph next to me on the highway are not where I would want people to go "fuck it, let's just do it" with their new tech. Human drivers might be dangerous, but adding more danger and unpredictability on top just because we can skip a few steps in the engineering process is crazy.

Maybe you have a deathwish, but I definitely don't. Your choices affect other humans in traffic.

replies(1): >>41898339 #
25. jsight ◴[] No.41892321{3}[source]
Yeah, it definitely isn't good enough to be used unsupervised. TBH, they've switched to eye and head tracking as the primary mechanism of attention monitoring now. It seems to work pretty well, now that I've had a chance to try it.

I'm not quite sure what you meant by your second paragraph, but I'm sure I have my blind spots and biases. I do have direct experience with various versions of 12.x though (12.3 and now 12.5).

26. diebeforei485 ◴[] No.41892425[source]
While I don't disagree with your point in general, it should be noted that there is more to taking responsibility than just paying. Even if Mercedes Drive Pilot was enabled, anything that involves court appearances and criminal liability is still your problem if you're in the driver's seat.
27. diebeforei485 ◴[] No.41892440{3}[source]
> To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact, and we count all crashes in which the incident alert indicated an airbag or other active restraint deployed.

Scroll down to Methodology at https://www.tesla.com/VehicleSafetyReport

replies(1): >>41894536 #
28. bastawhiz ◴[] No.41892580{3}[source]
My question is better rephrased as "what is legally considered an accident that needs to be reported?" If the car scrapes a barricade or curbs it hard but the airbags don't deploy and the car doesn't sense the damage, clearly that isn't reported. There's a wide spectrum of issues up to the point where someone is injured or another car is damaged.
replies(1): >>41894118 #
29. kelnos ◴[] No.41894050[source]
Agree that only the numbers matter, but only if the numbers are comprehensive and useful.

How often does an autonomous driving system get the driver into a dicey situation, but the driver notices the bad behavior, takes control, and avoids a crash? I don't think we have publicly-available data on that at all.

You admit that you ran into some of these sorts of situations during your trial. Those situations are unacceptable. An autonomous driving system should be safer than a human driver, and should not make mistakes that a human driver would not make.

Despite all the YouTube videos out there of people doing unsafe things with Tesla FSD, I expect that most people that use it are pretty responsible, are paying attention, and are ready to take over if they notice FSD doing something wrong. But if people need to do that, it's not a safe, successful autonomous driving system. Safety means everyone can watch TV, mess around on their phone, or even take a nap, and we still end up with a lower crash rate than with human drivers.

The numbers that are available can't tell us if that would be the case. My belief is that we're absolutely not there.

30. kelnos ◴[] No.41894083{3}[source]
No, that's absolutely not how this works. Confounding factors are things that make your data not tell you what you are actually trying to understand. You can't just hand-wave that away, sorry.

Consider: what I expect is actually true based on the data is that Tesla FSD is as safe or safer than the average human driver, but only if the driver is paying attention and is ready to take over in case FSD does something unsafe, even if FSD doesn't warn the driver it needs to disengage.

That's not an autonomous driving system. Which is potentially fine, but the value prop of that system is low to me: I have to pay just as much attention as if I were driving manually, with the added problem that my attention is going to start to wander because the car is doing most of the work, and the longer the car successfully does most of the work, the more I'm going to unconsciously believe I can allow my attention to slip.

I do like current common ADAS features because they hit a good sweet spot: I still need to actively hold onto the wheel and handle initiating lane changes, turns, stopping and starting at traffic lights and stop signs, etc. I look at the ADAS as a sort of "backup" to my own driving, and not as what's primarily in control of the car. In contrast, Tesla FSD wants to be primarily in control of the car, but it's not trustworthy enough to do that without constant supervision.

replies(1): >>41901681 #
31. kelnos ◴[] No.41894118{4}[source]
And not to move the goalposts, but I think we should also be tracking any time the human driver feels they need to take control because the autonomous system did something they didn't believe was safe.

That's not a crash (fortunately!), but it is a failure of the autonomous system.

This is hard to track, though, of course: people might take over control for reasons unrelated to safety, or people may misinterpret something that's safe as unsafe. So you can't just track this from a simple "human driver took control".

32. kelnos ◴[] No.41894128{3}[source]
How about when it turns into oncoming traffic, the driver yanks the wheel, manages to get back on track, and avoids a crash? Do we know how often things like that happen? Because that's also a failure of the system, and that should affect how reliable and safe we rate these things. I expect we don't have data on that.

Also how about: it turns into oncoming traffic, but there isn't much oncoming traffic, and that traffic swerves to get out of the way, before FSD realizes what it's done and pulls back into the correct lane. We certainly don't have data on that.

33. rvnx ◴[] No.41894536{4}[source]
This is for Autopilot, which is the car-following system for highways. If you are in cruise control and staying in your lane, not much is supposed to happen.

The FSD numbers are much more hidden.

The general accident rate is about 1 per 400,000 miles driven.

FSD has one “critical disengagement” (i.e. a situation that would likely end in an accident if the human or safety braking didn't intervene) every 33 miles driven.

That means that to reach unsupervised driving at human quality, they would need to improve it roughly 10,000 times over in a few months. Not saying it is impossible, just highly optimistic. In 10 years we will be there, but in 2 months it sounds a bit overpromising.
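
A quick back-of-the-envelope check of that gap, using only the two figures quoted above (both rough estimates):

    # Rough figures from the comment above (both approximate).
    human_miles_per_accident = 400_000         # ~1 accident per 400,000 miles
    fsd_miles_per_critical_disengagement = 33  # ~1 critical disengagement per 33 miles

    # Factor by which the critical-disengagement rate would need to improve for
    # FSD to match the human accident rate, treating every critical
    # disengagement as a would-be accident.
    improvement_needed = human_miles_per_accident / fsd_miles_per_critical_disengagement
    print(round(improvement_needed))  # ~12,000x, the same order of magnitude as the "roughly 10,000" above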

34. rvnx ◴[] No.41894673{3}[source]
This is Autopilot, not FSD, which is an entirely different product.
35. kybernetikos ◴[] No.41895142[source]
> But at the end of the day, only the numbers matter.

Are these the numbers reported by tesla, or by some third party?

36. tensor ◴[] No.41898339{6}[source]
It sounds like you are the one with a deathwish, because objectively, by the numbers, Autopilot on the highway has greatly reduced deaths. So you are literally advocating for more death.

You have two imperfect systems for highway driving: Autopilot with human oversight, and humans alone. The first has far, far fewer deaths. Yet you are choosing the second.

37. valval ◴[] No.41901681{4}[source]
Like I said, the time for studies is in the future. FSD is a product in development, and they know which stats they need to collect in order to track progress.

You're arguing for something that: 1. Isn't under contention and 2. Isn't rooted in the real world.

You're right, FSD isn't an autonomous driving system. It's not meant to be, right now.

replies(1): >>41905612 #
38. freejazz ◴[] No.41905612{5}[source]
> You’re right FSD isn’t an autonomous driving system

Oh, weird. Are you not aware it's called Full SELF Driving?

replies(1): >>41907254 #
39. valval ◴[] No.41907254{6}[source]
Does the brand name matter? The description should tell you all you need to know when making a purchase decision.
replies(1): >>41907371 #
40. freejazz ◴[] No.41907371{7}[source]
Yes, a company's marketing is absolutely part of the representations the company makes about a product they sell in the context of a product liability lawsuit.