410 points jjulius | 357 comments
1. bastawhiz ◴[] No.41889192[source]
Lots of people are asking how good the self driving has to be before we tolerate it. I got a one month free trial of FSD and turned it off after two weeks. Quite simply: it's dangerous.

- It failed with a cryptic system error while driving

- It started making a left turn far too early, which would have scraped the left side of the car on a sign. I had to manually intervene.

- In my opinion, the default setting accelerates way too aggressively. I'd call myself a fairly aggressive driver and it is too aggressive for my taste.

- It tried to make way too many right turns on red when it wasn't safe to. It would creep into the road, almost into the path of oncoming vehicles.

- It didn't merge left to make room for vehicles merging onto the highway. The vehicles then tried to cut in. The system should have avoided an unsafe situation like this in the first place.

- It would switch lanes to go faster on the highway, but then missed an exit on at least one occasion because it couldn't make it back into the right lane in time. Stupid.

After the system error, I lost all trust in FSD from Tesla. Until I ride in one and feel safe, I can't have any faith that this is a reasonable system. Hell, even autopilot does dumb shit on a regular basis. I'm grateful to be getting a car from another manufacturer this year.

replies(24): >>41889213 #>>41889323 #>>41889348 #>>41889518 #>>41889642 #>>41890213 #>>41890238 #>>41890342 #>>41890380 #>>41890407 #>>41890729 #>>41890785 #>>41890801 #>>41891175 #>>41892569 #>>41894279 #>>41894644 #>>41894722 #>>41894770 #>>41894964 #>>41895150 #>>41895291 #>>41895301 #>>41902130 #
2. frabjoused ◴[] No.41889213[source]
The thing that doesn't make sense is the numbers. If it is dangerous in your anecdotes, why don't the reported numbers show more accidents when FSD is on?

When I did the trial on my Tesla, I also noted these kinds of things and felt like I had to take control.

But at the end of the day, only the numbers matter.

replies(13): >>41889230 #>>41889234 #>>41889239 #>>41889246 #>>41889251 #>>41889279 #>>41889339 #>>41890197 #>>41890367 #>>41890522 #>>41890713 #>>41894050 #>>41895142 #
3. akira2501 ◴[] No.41889230[source]
You can measure risks without having to witness disaster.
4. ForHackernews ◴[] No.41889234[source]
Maybe other human drivers are reacting quickly and avoiding potential accidents from dangerous computer driving? That would be ironic, but I'm sure it's possible in some situations.
5. jsight ◴[] No.41889239[source]
Because it is bad enough that people really do supervise it. I see people claim that supervision wouldn't happen because drivers become complacent.

Maybe that could be a problem with future versions, but I don't see it happening with 12.3.x. I've also heard that driver attention monitoring is pretty good in the later versions, but I have no first hand experience yet.

replies(1): >>41890002 #
6. bastawhiz ◴[] No.41889246[source]
Is Tesla required to report system failures or the vehicle damaging itself? How do we know they're not optimizing for the benchmark (what they're legally required to report)?
replies(2): >>41889358 #>>41890792 #
7. timabdulla ◴[] No.41889251[source]
> If it is dangerous in your anecdotes, why don't the reported numbers show more accidents when FSD is on?

Even if it is true that the data show that drivers with FSD (not Autopilot) enabled are in fewer crashes, I would be worried about other confounding factors.

For instance, I would assume that drivers are more likely to engage FSD in situations of lower complexity (less traffic, little construction or other impediments, simpler traffic control overall, etc.). I also believe that at least initially, Tesla only released FSD to drivers with high safety scores relative to their total driver base, another obvious confounding factor.

Happy to be proven wrong though if you have a link to a recent study that goes through all of this.
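
To make the confounding concrete, here is a minimal sketch with entirely invented crash rates and mileage splits. It shows how a system that is worse than a human in every matched condition can still post a better aggregate crashes-per-mile number if it is mostly engaged on the easy miles (Simpson's paradox):

    # Hypothetical illustration only: all rates and mileages are invented.
    CRASHES_PER_MILLION_MILES = {
        ("human", "easy"): 2.0,
        ("human", "hard"): 10.0,
        ("fsd", "easy"): 2.5,   # assumed worse than a human on easy roads
        ("fsd", "hard"): 12.0,  # assumed worse than a human on hard roads
    }

    MILES_MILLIONS = {
        # Humans drive everywhere; FSD is assumed to be engaged mostly on easy roads.
        ("human", "easy"): 50.0,
        ("human", "hard"): 50.0,
        ("fsd", "easy"): 45.0,
        ("fsd", "hard"): 5.0,
    }

    def aggregate_rate(driver):
        # Overall crashes per million miles, pooled across road types.
        crashes = sum(CRASHES_PER_MILLION_MILES[(driver, road)] * MILES_MILLIONS[(driver, road)]
                      for road in ("easy", "hard"))
        miles = sum(MILES_MILLIONS[(driver, road)] for road in ("easy", "hard"))
        return crashes / miles

    for driver in ("human", "fsd"):
        print(driver, round(aggregate_rate(driver), 2))
    # human 6.0
    # fsd 3.45  <- looks safer in aggregate despite being worse in every matched condition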

replies(1): >>41890026 #
8. nkrisc ◴[] No.41889279[source]
What numbers? Who’s measuring? What are they measuring?
9. thomastjeffery ◴[] No.41889323[source]
It's not just about relative safety compared to all human driving.

We all know that some humans are sometimes terrible drivers!

We also know what that looks like: Driving too fast or slow relative to surroundings. Quickly turning every once in a while to stay in their lane. Aggressively weaving through traffic. Going through an intersection without spending the time to actually look for pedestrians. The list goes on..

Bad human driving can be seen. Bad automated driving is invisible. Do you think the people who were about to be hit by a Tesla even realized that was the case? I sincerely doubt it.

replies(1): >>41890207 #
10. rvnx ◴[] No.41889339[source]
There is an easy way to know what is really behind the numbers: look who is paying in case of accident.

You have a Mercedes, Mercedes takes responsibility.

You have a Tesla, you take the responsibility.

Says a lot.

replies(3): >>41890160 #>>41890421 #>>41892425 #
11. dekhn ◴[] No.41889348[source]
I don't think you're supposed to merge left when people are merging onto the highway into your lane; you have the right of way. I find that even with the right of way, many people merging aren't paying attention, but I deal with that by slightly speeding up (so they can see me in front of them).
replies(2): >>41889502 #>>41890195 #
12. rvnx ◴[] No.41889358{3}[source]
If the question is "was FSD activated at the time of the accident: yes/no", they can legally claim no if, for example, FSD happens to disconnect half a second before a dangerous situation (e.g. glare obstructing the cameras), which may coincide exactly with the time of some accidents.
replies(1): >>41892440 #
13. sangnoir ◴[] No.41889502[source]
You don't have a right of way over a slow moving vehicle that merged ahead of you. Most ramps are not long enough to allow merging traffic to accelerate to highway speeds before merging, so many drivers free up the right-most lane for this purpose (by merging left)
replies(3): >>41889675 #>>41889826 #>>41890631 #
14. modeless ◴[] No.41889518[source]
Tesla jumped the gun on the FSD free trial earlier this year. It was nowhere near good enough at the time. Most people who tried it for the first time probably share your opinion.

That said, there is a night and day difference between FSD 12.3 that you experienced earlier this year and the latest version 12.6. It will still make mistakes from time to time but the improvement is massive and obvious. More importantly, the rate of improvement in the past two months has been much faster than before.

Yesterday I spent an hour in the car over three drives and did not have to turn the steering wheel at all except for parking. That never happened on 12.3. And I don't even have 12.6 yet, this is still 12.5; others report that 12.6 is a noticeable improvement over 12.5. And version 13 is scheduled for release in the next two weeks, and the FSD team has actually hit their last few release milestones.

People are right that it is still not ready yet, but if they think it will stay that way forever they are about to be very surprised. At the current rate of improvement it will be quite good within a year and in two or three I could see it actually reaching the point where it could operate unsupervised.

replies(11): >>41889570 #>>41889593 #>>41890163 #>>41890174 #>>41890177 #>>41890374 #>>41890395 #>>41890547 #>>41893442 #>>41893970 #>>41894426 #
15. seizethecheese ◴[] No.41889570[source]
If this is the case, the calls for heavy regulation in this thread will lead to many more deaths than otherwise.
16. jvanderbot ◴[] No.41889593[source]
I have yet to see a difference. I let it highway drive for an hour and it cut off a semi, coming within 9 to 12 inches of the bumper for no reason. I heard about that one believe me.

It got stuck in a side street trying to get to a target parking lot, shaking the wheel back and forth.

It's no better so far and this is the first day.

replies(3): >>41889602 #>>41889978 #>>41890441 #
17. modeless ◴[] No.41889602{3}[source]
You have 12.6?

As I said, it still makes mistakes and it is not ready yet. But 12.3 was much worse. It's the rate of improvement I am impressed with.

I will also note that the predicted epidemic of crashes from people abusing FSD never happened. It's been on the road for a long time now. The idea that it is "irresponsible" to deploy it in its current state seems conclusively disproven. You can argue about exactly what the rate of crashes is but it seems clear that it has been at the very least no worse than normal driving.

replies(1): >>41889623 #
18. jvanderbot ◴[] No.41889623{4}[source]
Hm. I thought that was the latest release, but it looks like no. But there seem to be no improvements from the last trial, so maybe 12.6 is magically better.
replies(1): >>41889648 #
19. potato3732842 ◴[] No.41889642[source]
If you were a poorer driver who did these things yourself, you wouldn't find these faults so damning, because it'd only be, say, 10% dumber than you rather than 40% or whatever (just making up those numbers).
replies(1): >>41890182 #
20. modeless ◴[] No.41889648{5}[source]
A lot of people have been getting the free trial with 12.3 still on their cars today. Tesla has really screwed up on the free trial for sure. Nobody should be getting it unless they have 12.6 at least.
replies(1): >>41889731 #
21. potato3732842 ◴[] No.41889675{3}[source]
Most ramps are more than long enough to accelerate close enough to traffic speed if one wants to, especially in most modern vehicles.
replies(1): >>41890237 #
22. jvanderbot ◴[] No.41889731{6}[source]
I have 12.5. Maybe 12.6 is better, but I've heard that before.

Don't get me wrong: without a concerted data team building maps a priori, this is pretty incredible. But from a pure performance standpoint it's a shaky product.

replies(1): >>41889788 #
23. KaoruAoiShiho ◴[] No.41889788{7}[source]
The latest version is 12.5.6; I think he got confused by the .6 at the end. If you think that's bad, then there isn't a better version available. However, it is a dramatic improvement over 12.3; I don't know how much you tested on it.
replies(1): >>41889893 #
24. SoftTalker ◴[] No.41889826{3}[source]
If you can safely move left to make room for merging traffic, you should. It’s considerate and reduces the chances of an accident.
25. modeless ◴[] No.41889893{8}[source]
You're right, thanks. One of the biggest updates in 12.5.6 is transitioning the highway Autopilot to FSD. If he has 12.5.4 then it may still be using the old non-FSD Autopilot on highways which would explain why he hasn't noticed improvement there; there hasn't been any until 12.5.6.
26. hilux ◴[] No.41889978{3}[source]
> ... coming within 9 to 12 inches of the bumper for no reason. I heard about that one believe me.

Oh dear.

Glad you're okay!

27. valval ◴[] No.41890002{3}[source]
Very good point. The product that requires supervision and tells the user to keep their hands on the wheel every 10 seconds is not good enough to be used unsupervised.

I wonder how things are inside your head. Are you ignorant or affected by some strong bias?

replies(1): >>41892321 #
28. valval ◴[] No.41890026{3}[source]
Either the system causes less loss of life than a human driver or it doesn’t. The confounding factors don’t matter, as Tesla hasn’t presented a study on the subject. That’s in the future, and all stats that are being gathered right now are just that.
replies(2): >>41890292 #>>41894083 #
29. tensor ◴[] No.41890160{3}[source]
You have a Mercedes, and you have a system that works virtually nowhere.
replies(1): >>41890605 #
30. snypher ◴[] No.41890163[source]
So just a few more years of death and injury until they reach a finished product?
replies(4): >>41894278 #>>41894434 #>>41895317 #>>41895493 #
31. misiti3780 ◴[] No.41890174[source]
I have the same experience: 12.5 is insanely good. HN is full of people that don't want self-driving to succeed for some reason. Fortunately, it's clear as day to some of us that Tesla's approach will work.
replies(4): >>41890270 #>>41890473 #>>41893961 #>>41897823 #
32. bastawhiz ◴[] No.41890177[source]
> At the current rate of improvement it will be quite good within a year

I'll believe it when I see it. I'm not sure "quite good" is the next step after "feels dangerous".

replies(1): >>41894658 #
33. bastawhiz ◴[] No.41890182[source]
That just implies FSD is as good as a bad driver, which isn't really an endorsement.
replies(1): >>41891572 #
34. bastawhiz ◴[] No.41890195[source]
Just because you have the right of way doesn't mean the correct thing to do is to remain in the lane. If remaining in your lane is likely to make someone else do something reckless, you should have been proactive. Not as a legal obligation, but for the sake of being a good driver.
replies(1): >>41890613 #
35. lawn ◴[] No.41890197[source]
> The thing that doesn't make sense is the numbers.

Oh? Who is presenting the numbers?

Is a crash that fails to trigger the airbags even counted as a crash?

What about the car turning off FSD right before a crash?

How about adjusting for factors such as age of driver and the type of miles driven?

The numbers don't make sense because they're not good comparisons and are made to make Tesla look good.

36. bastawhiz ◴[] No.41890207[source]
> Bad automated driving is invisible.

I'm literally saying that it is visible, to me, the passenger. And for reasons that aren't just bad vibes. If I'm in an Uber and I feel unsafe, I'll report the driver. Why would I pay for my car to do that to me?

replies(2): >>41890261 #>>41892052 #
37. dreamcompiler ◴[] No.41890213[source]
> It didn't merge left to make room for vehicles merging onto the highway. The vehicles then tried to cut in. The system should have avoided an unsafe situation like this in the first place.

This is what bugs me about ordinary autopilot. Autopilot doesn't switch lanes, but I like to slow down or speed up as needed to allow merging cars to enter my lane. Autopilot never does that, and I've had some close calls with irate mergers who expected me to work with them. And I don't think they're wrong.

Just means that when I'm cruising in the right lane with autopilot I have to take over if a car tries to merge.

replies(4): >>41892057 #>>41894008 #>>41894575 #>>41898653 #
38. wizzwizz4 ◴[] No.41890237{4}[source]
Unless the driver in front of you didn't.
39. paulcole ◴[] No.41890238[source]
> Until I ride in one and feel safe, I can't have any faith that this is a reasonable system

This is probably the worst way to evaluate self-driving for society though, right?

replies(1): >>41892113 #
40. wizzwizz4 ◴[] No.41890261{3}[source]
GP means that the signs aren't obvious to other drivers. We generally underestimate how important psychological modelling is for communication, because it's transparent to most of us under most circumstances, but AI systems have very different psychology to humans. It is easier to interpret the body language of a fox than a self-driving car.
41. ethbr1 ◴[] No.41890270{3}[source]
Curiosity about why they're against it, and articulating why you think it will work, would be more helpful.
replies(1): >>41890659 #
42. unbrice ◴[] No.41890292{4}[source]
> Either the system causes less loss of life than a human driver or it doesn’t. The confounding factors don’t matter.

Confounding factors are what allow one to tell apart "the system causes less loss of life" from "the system causes more loss of life yet it is only enabled in situations where fewer lives are lost".

43. TheCleric ◴[] No.41890342[source]
> Lots of people are asking how good the self driving has to be before we tolerate it.

There’s a simple answer to this. As soon as it’s good enough for Tesla to accept liability for accidents. Until then if Tesla doesn’t trust it, why should I?

replies(9): >>41890435 #>>41890716 #>>41890927 #>>41891560 #>>41892829 #>>41894269 #>>41894342 #>>41894760 #>>41896173 #
44. gamblor956 ◴[] No.41890367[source]
The numbers collected by the NHTSA and insurance companies do show that FSD is dangerous... that's why the NHTSA started investigating, and it's why most insurance companies either won't insure Tesla vehicles or charge significantly higher rates.

Also, Tesla is known to disable self-driving features right before collisions to give the appearance of driver fault.

And the coup de grace: if Tesla's own data showed that FSD was actually safer, they'd be shouting it from the moon, using that data to get self-driving permits in CA, and offering to assume liability if FSD actually caused an accident (like Mercedes does with its self driving system).

45. delusional ◴[] No.41890374[source]
> That said, there is a night and day difference between FSD 12.3 that you experienced earlier this year and the latest version 12.6

>And I don't even have 12.6 yet, this is still 12.5;

How am I supposed to take anything you say seriously when your only claim is a personal anecdote that doesn't even apply to your own argument? Please, think about what you're writing, and please stop repeating information you heard on YouTube as if it's fact.

This is one of the reasons (among many) that I can't take Tesla boosters seriously. I have absolutely zero faith in your anecdote that you didn't touch the steering wheel. I bet it's a lie.

replies(3): >>41890454 #>>41890462 #>>41891686 #
46. mike_d ◴[] No.41890380[source]
> Lots of people are asking how good the self driving has to be before we tolerate it.

When I feel as safe as I do sitting in the back of a Waymo.

47. wstrange ◴[] No.41890395[source]
I have a 2024 Model 3, and it's a great car. That being said, I'm under no illusion that the car will ever be self-driving (unsupervised).

12.5.6 still fails to read very obvious signs for 30 km/h playground zones.

The current vehicles lack sufficient sensors, and likely do not have enough compute power and memory to cover all edge cases.

I think it's a matter of time before Tesla faces a lawsuit over continual FSD claims.

My hope is that the board will grow a spine and bring in a more focused CEO.

Hats off to Elon for getting Tesla to this point, but right now they need a mature (and boring) CEO.

replies(1): >>41891728 #
48. eric_cc ◴[] No.41890407[source]
That sucks that you had that negative experience. I’ve driven thousands of miles in FSD and love it. Could not imagine going back. I rarely need to intervene and when I do it’s not because the car did something dangerous. There are just times I’d rather take over due to cyclists, road construction, etc.
replies(4): >>41890549 #>>41890902 #>>41892106 #>>41897127 #
49. sebzim4500 ◴[] No.41890421{3}[source]
Mercedes had the insight that if no one is able to actually use the system then it can't cause any crashes.

Technically, that is the easiest way to get a perfect safety record and journalists will seemingly just go along with the charade.

50. genocidicbunny ◴[] No.41890435[source]
I think this is probably both the most concise and most reasonable take. It doesn't require anyone to define some level of autonomy or argue about specific edge cases of how the self driving system behaves. And it's easy to apply this principle to not only Tesla, but to all companies making self driving cars and similar features.
51. eric_cc ◴[] No.41890441{3}[source]
Is it possible you have a lemon? Genuine question. I’ve had nothing but positive experiences with FSD for the last several months and many thousands of miles.
replies(4): >>41890737 #>>41893857 #>>41894414 #>>41898678 #
52. modeless ◴[] No.41890454{3}[source]
The version I have is already a night and day difference from 12.3 and the current version is better still. Nothing I said is contradictory in the slightest. Apply some basic reasoning, please.

I didn't say I didn't touch the steering wheel. I had my hands lightly touching it most of the time, as one should for safety. I occasionally used the controls on the wheel as well as the accelerator pedal to adjust the set speed, and I used the turn signal to suggest lane changes from time to time, though most lane choices were made automatically. But I did not turn the wheel. All turning was performed by the system. (If you turn the wheel manually the system disengages). Other than parking, as I mentioned, though FSD did handle some navigation into and inside parking lots.

53. eric_cc ◴[] No.41890462{3}[source]
I can second this experience. I rarely touch the wheel anymore. I’d say I’m 98% FSD. I take over in school zones, parking lots, and complex construction.
54. eric_cc ◴[] No.41890473{3}[source]
Completely agree. It’s very strange. But honestly it’s their loss. FSD is fantastic.
replies(1): >>41895165 #
55. throwaway562if1 ◴[] No.41890522[source]
AIUI the numbers are for accidents where FSD is in control. Which means if it does a turn into oncoming traffic and the driver yanks the wheel or slams the brakes 500ms before collision, it's not considered a crash during FSD.
replies(2): >>41890770 #>>41890811 #
56. jeffbee ◴[] No.41890547[source]
If I had a dime for every hackernews who commented that FSD version X was like a revelation compared to FSD version X-ε I'd have like thirty bucks. I will grant you that every release has surprisingly different behaviors.

Here's an unintentionally hilarious meta-post on the subject https://news.ycombinator.com/item?id=29531915

replies(3): >>41890595 #>>41890815 #>>41896509 #
57. windexh8er ◴[] No.41890549[source]
I don't believe this at all. I don't own one, but I know about a half dozen people who got suckered into paying for FSD. None of them use it, and 3 of them have stated it's put them in dangerous situations.

I've ridden in an X, S and Y with it on. Talk about vomit-inducing when letting it drive during "city" driving. I don't doubt it's OK on highway driving, but Ford Blue Cruise and GM's Super Cruise are better there.

replies(1): >>41891787 #
58. modeless ◴[] No.41890595{3}[source]
Sure, plenty of people have been saying it's great for a long time, when it clearly was not (looking at you, Whole Mars Catalog). I was not saying it was super great back then. I have consistently been critical of Elon for promising human level self driving "next year" for like 10 years in a row and being wrong every time. He said it this year again and I still think he's wrong.

But the rate of progress I see right now has me thinking that it may not be more than two or three years before that threshold is finally reached.

replies(1): >>41890818 #
59. therouwboat ◴[] No.41890605{4}[source]
Better that way than "Oh, it tried to run a red light, but otherwise it's great."
replies(1): >>41891047 #
60. dekhn ◴[] No.41890613{3}[source]
Can you point to some online documentation that recommends changing lanes in preference to speeding up when a person is merging at too slow a speed? What I'm doing is following CHP guidance in this post: https://www.facebook.com/chpmarin/posts/lets-talk-about-merg... """Finally, if you are the vehicle already traveling in the slow lane, show some common courtesy and do what you can to create a space for the person by slowing down a bit or speeding up if it is safer. """

(you probably misinterpreted what I said. I do sometimes change lanes, even well in advance of a merge I know is prone to problems, if that's the safest and most convenient. What I am saying is the guidance I have read indicates that staying in the same lane is generally safer than changing lanes, and speeding up into an empty space is better for everybody than slowing down, especially because many people who are merging will keep slowing down more and more when the highway driver slows for them)

replies(2): >>41892094 #>>41892411 #
61. dekhn ◴[] No.41890631{3}[source]
Since a number of people are giving pushback, can you point to any (California-oriented) driving instructions consistent with this? I'm not seeing any. I see people saying "it's courteous", but when I'm driving I'm managing hundreds of variables, and changing lanes is often risky, given motorcycles lane-splitting at high speed (quite common).
replies(2): >>41891334 #>>41891713 #
62. misiti3780 ◴[] No.41890659{4}[source]
It's evident to Tesla drivers using Full Self-Driving (FSD) that the technology is rapidly improving and will likely succeed. The key reason for this anticipated success is data: any reasonably intelligent observer recognizes that training exceptional deep neural networks requires vast amounts of data, and Tesla has accumulated more relevant data than any of its competitors. Tesla recently held a robotaxi event, explicitly informing investors of their plans to launch an autonomous competitor to Uber. While Elon Musk's timeline predictions and politics may be controversial, his ability to achieve results and attract top engineering and management talent is undeniable.
replies(5): >>41892088 #>>41893945 #>>41894709 #>>41895153 #>>41895855 #
63. johnneville ◴[] No.41890713[source]
Are there even transparent reported numbers available?

For whatever does exist, it is also easy to imagine how the numbers could be misleading. For instance, I've disengaged FSD when I noticed I was about to be in an accident. If I couldn't recover in time, the accident would not have happened while FSD was on and, depending on the metric, would not be reported as an FSD-induced accident.

64. concordDance ◴[] No.41890716[source]
What's the current total liability cost for all Tesla drivers?

The average for all USA cars seems to be around $2000/year, so even if FSD were half as dangerous, Tesla would still be paying the equivalent of $1000/year per car (not sure how big insurance margins are; assuming they're nominal).

Now, if legally the driver could avoid paying insurance for the few times they want/need to drive themselves (e.g. snow? Dunno what FSD supports atm) then it might make sense economically, but otherwise I don't think it would work out.
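
As a back-of-envelope sketch of that arithmetic (all numbers are rough or invented, including the fleet size):

    # Rough, illustrative numbers only; none of these come from Tesla.
    avg_liability_premium_per_year = 2000.0   # approximate US average cited above, USD
    relative_risk_with_fsd = 0.5              # the "half as dangerous" assumption
    fsd_fleet_size = 2_000_000                # hypothetical number of cars covered

    cost_per_car = avg_liability_premium_per_year * relative_risk_with_fsd
    total_annual_exposure = cost_per_car * fsd_fleet_size

    print(cost_per_car)                 # 1000.0 USD per car per year
    print(total_annual_exposure / 1e9)  # 2.0, i.e. ~$2B per year across the whole fleet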

replies(2): >>41890796 #>>41893401 #
65. concordDance ◴[] No.41890729[source]
This would be more helpful with a date. Was this in 2020 or 2024? I've been told FSD had a complete rearchitecting.
replies(1): >>41892091 #
66. ben_w ◴[] No.41890737{4}[source]
I've had nothing but positive experiences with ChatGPT-4o; that doesn't make people wrong to criticise either system for modelling its training data too closely and generalising too little when it's used on something where the inference domain is too far outside the training domain.
67. Uzza ◴[] No.41890770{3}[source]
That is not correct. Tesla counts any accident within 5 seconds of Autopilot/FSD turning off as the system being involved. Regulators extend that period to 30 seconds, and Tesla must comply with that when reporting to them.
replies(1): >>41894128 #
68. dchichkov ◴[] No.41890785[source]
> I'm grateful to be getting a car from another manufacturer this year.

I'm curious, what is the alternative that you are considering? I've been delaying an upgrade to electric for some time. And now, a car manufacturer that is contributing to the making of another Jan 6th, 2021 is not an option, in my opinion.

replies(2): >>41892061 #>>41894929 #
69. Uzza ◴[] No.41890792{3}[source]
All manufacturers have for some time been required by regulators to report any accident where an autonomous or partially autonomous system was active within 30 seconds of an accident.
replies(1): >>41892580 #
70. Retric ◴[] No.41890796{3}[source]
Liability alone isn’t nearly that high.

Car insurance payments include people stealing your car, uninsured motorists, rental cars, and other issues that aren't the driver's fault. Further, insurance payments also include profits for the insurance company, advertising, billing, and other overhead from running a business.

Also, if Tesla was taking on these risks you’d expect your insurance costs to drop.

replies(3): >>41890817 #>>41890872 #>>41893427 #
71. pbasista ◴[] No.41890801[source]
> I'm grateful to be getting a car from another manufacturer this year.

I have no illusions about Tesla's ability to deliver an unsupervised self-driving car any time soon. However, as far as I understand, their autosteer system, in spite of all its flaws, is still the best out there.

Do you have any reason to believe that there actually is something better?

replies(2): >>41890881 #>>41892086 #
72. concordDance ◴[] No.41890811{3}[source]
Several people in this thread have been saying this or similar. It's incorrect, from Tesla:

"To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact"

https://www.tesla.com/en_gb/VehicleSafetyReport

Situations which inevitably cause a crash more than 5 seconds later seem like they would be extremely rare.
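
To make the methodology concrete, here's a minimal sketch with invented incident timings; the 5-second window is from the quote above, and the 30-second window is the regulator reporting period mentioned elsewhere in the thread:

    # Illustrative only: timings are made up, not real crash data.
    def attributed_to_system(seconds_off_before_impact, window_s):
        # Count the crash against the system if it was active at impact or
        # switched off within window_s seconds before impact. None means the
        # system was never engaged.
        return seconds_off_before_impact is not None and seconds_off_before_impact <= window_s

    # Hypothetical incidents: system switched off 0.5s, 8s, and 25s before impact,
    # plus one crash where the system was never engaged.
    incidents = [0.5, 8.0, 25.0, None]

    print(sum(attributed_to_system(t, 5.0) for t in incidents))   # 1 (5-second window)
    print(sum(attributed_to_system(t, 30.0) for t in incidents))  # 3 (30-second window)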

replies(1): >>41894673 #
73. Laaas ◴[] No.41890815{3}[source]
Doesn’t this just mean it’s improving rapidly which is a good thing?
replies(1): >>41891562 #
74. ben_w ◴[] No.41890818{4}[source]
The most important lesson from my incorrectly predicting in 2009 that we'd have cars without steering wheels by 2018, and from thinking that the progress I saw each year up to then was consistent with that prediction, is that it's really hard to guess how long it takes to walk the fractal path that is software R&D.

How far are we now, 6 years later than I expected?

Dunno.

I suspect it's going to need an invention on the same level as Diffusion or Transformer models to handle all the edge cases we can, and that might mean we only get it with human-level AGI.

But I don't know that, it might be we've already got all we need architecture-wise and it's just a matter of scale.

The only thing I can be really sure of is that we're making progress "quite fast" in a non-objective sense of the words: it's not going to need a re-run of 6 million years of mammalian evolution or anything like that, but even 20 years of wall-clock time would be a disappointment.

replies(1): >>41890938 #
75. TheCleric ◴[] No.41890817{4}[source]
Yeah any automaker doing this would just negotiate a flat rate per car in the US and the insurer would average the danger to make a rate. This would be much cheaper than the average individual’s cost for liability on their insurance.
replies(3): >>41892045 #>>41892322 #>>41893444 #
76. concordDance ◴[] No.41890872{4}[source]
Good points, thanks.
77. throwaway314155 ◴[] No.41890881[source]
I believe they're fine with losing auto steering capabilities, based on the tone of their comment.
78. itsoktocry ◴[] No.41890902[source]
These "works for me!" comments are exhausting. Nobody believes you "rarely intervene", otherwise Tesla themselves would be promoting the heck out of the technology.

Bring on the videos of you in the passenger seat on FSD for any amount of time.

replies(3): >>41891792 #>>41893276 #>>41900222 #
79. bdcravens ◴[] No.41890927[source]
The liability for killing someone can include prison time.
replies(3): >>41891164 #>>41894710 #>>41896926 #
80. modeless ◴[] No.41890938{5}[source]
Waymo went driverless in 2020, maybe you weren't that far off. Predicting that in 2009 would have been pretty good. They could and should have had vehicles without steering wheels anytime since then, it's just a matter of hardware development. Their steering wheel free car program was derailed when they hired traditional car company executives.
replies(1): >>41891427 #
81. tensor ◴[] No.41891047{5}[source]
"Oh we tried to build it but no one bought it! So we gave up." - Mercedes before Tesla.

Perhaps FSD isn't ready for city streets yet, but it's great on the highways, and I'd 1000x prefer we make progress rather than settle for the status quo garbage that the legacy makers put out. Also, human drivers are by far the most dangerous; we need to make progress to eventually phase them out.

replies(1): >>41891619 #
82. TheCleric ◴[] No.41891164{3}[source]
Good. If you write software that people rely on with their lives, and it fails, you should be held liable for that criminally.
replies(11): >>41891445 #>>41891631 #>>41891844 #>>41891890 #>>41892022 #>>41892572 #>>41894610 #>>41894812 #>>41895100 #>>41895710 #>>41896899 #
83. geoka9 ◴[] No.41891175[source]
> It didn't merge left to make room for vehicles merging onto the highway. The vehicles then tried to cut in. The system should have avoided an unsafe situation like this in the first place.

I've been on the receiving end of this with the offender being a Tesla so many times that I figured it must be FSD.

replies(1): >>41892065 #
84. davidcalloway ◴[] No.41891334{4}[source]
Definitely not California but literally the first part of traffic law in Germany says that caution and consideration are required from all partaking in traffic.

Germans are not known for poor driving.

replies(2): >>41891365 #>>41894173 #
85. dekhn ◴[] No.41891365{5}[source]
Right, but the "consideration" here is the person merging onto the highway actually paying attention and adjusting, rather than pointedly not even looking (this is a very common merging behavior where I live). Changing lanes isn't without risk even on a clear day with good visibility. Seems like my suggestion of slowing down or speeding up makes perfect sense because it's less risky overall, and is still being considerate.

Note that I personally do change lanes at times when it's safe, convenient, I am experienced with the intersection, and the merging driver is being especially unaware.

replies(1): >>41894061 #
86. ben_w ◴[] No.41891427{6}[source]
Waymo for sure, but I meant also without any geolock etc., so I can't claim credit for my prediction.

They may well best Tesla to this, though.

replies(1): >>41896217 #
87. beej71 ◴[] No.41891445{4}[source]
And such coders should carry malpractice insurance.
88. renewiltord ◴[] No.41891560[source]
This is how I feel about nuclear energy. Every single plant should need to form a full insurance fund dedicated to paying out if there’s trouble. And the plant should have strict liability: anything that happens from materials it releases are its responsibility.

But people get upset about this. We need corporations to take responsibility.

replies(2): >>41891771 #>>41894412 #
89. jeffbee ◴[] No.41891562{4}[source]
No, the fact that people say FSD is on the verge of readiness constantly for a decade means there is no widely shared benchmark.
90. potato3732842 ◴[] No.41891572{3}[source]
I agree it's not an endorsement but we allow chronically bad drivers on the road as long as they're legally bad and not illegally bad.
replies(1): >>41894161 #
91. meibo ◴[] No.41891619{6}[source]
2-ton blocks of metal going 80 mph next to me on the highway are not where I would want people to go "fuck it, let's just do it" with their new tech. Human drivers might be dangerous, but adding more danger and unpredictability on top just because we can skip a few steps in the engineering process is crazy.

Maybe you have a deathwish, but I definitely don't. Your choices affect other humans in traffic.

replies(1): >>41898339 #
92. dmix ◴[] No.41891631{4}[source]
Drug companies and the FDA (circa 1906) play a very dangerous and delicate dance all the time releasing new drugs to the public. But for over a century now we've managed to figure it out without holding pharma companies criminally liable for every death.

> If you write software that people rely on with their lives, and it fails, you should be held liable for that criminally.

It's easier to type those words on the internet than to make them policy IRL. That sort of policy would likely result in a) killing off all commercial efforts to solve traffic deaths via technology, along with vast amounts of other semi-autonomous technology like farm equipment, or b) governments/car companies mandating filming the driver every time they turn it on, because it's technically supposed to be human-assisted autopilot in these testing stages (outside restricted pilot programs like Waymo taxis). Those distinctions would matter in a criminal courtroom, even if humans can't always be relied upon to follow the instructions on the bottle's label.

replies(3): >>41892028 #>>41892069 #>>41893456 #
93. jsjohnst ◴[] No.41891686{3}[source]
> I have absolutely zero faith in your anecdote that you didn't touch the steering wheel. I bet it's a lie.

I’m not GP, but I can share video showing it driving across residential, city, highway, and even gravel roads all in a single trip without touching the steering wheel a single time over a 90min trip (using 12.5.4.1).

replies(1): >>41892020 #
94. sangnoir ◴[] No.41891713{4}[source]
It's not just courteous, it's self-serving; AFAIK it's a self-emergent phenomenon. If you're driving at 65 mph and anticipate a slowdown in your lane due to merging traffic, do you stay in your lane and slow down to 40 mph, or do you change lanes (if it's safe to do so) and maintain your speed?

Texas highways allow for much higher merging speeds at the cost of far larger (in land area) 5-level interchanges, rather than the 35 mph offramps and onramps common in California.

Any defensive driving course (which falls under instruction, IMO) states that you don't always have to exercise your right of way, and indeed it may be unsafe to do so in some circumstances. Anticipating the actions of other drivers around you and avoiding potentially dangerous situations are the other aspects of being a defensive driver, and those concepts are consistent with freeing up the lane slower-moving vehicles are merging onto when it's safe to do so.

95. pelorat ◴[] No.41891728{3}[source]
The board is family and friends, so them ousting him will never happen.
replies(1): >>41893514 #
96. idiotsecant ◴[] No.41891771{3}[source]
While we're at it, how about we apply the same standard to coal and natural gas plants? For some reason, when we start talking about nuclear plants we all of a sudden become averse to the idea of unfunded externalities, but when we're talking about 'old' tech that has been steadily irradiating your community and changing the gas composition of the entire planet it becomes less concerning.
replies(2): >>41894020 #>>41895652 #
97. eric_cc ◴[] No.41891787{3}[source]
You can believe what you want to believe. It works fantastic for me whether you believe it or not.

I do wonder if people who have wildly different experiences than I have are living in a part of the country that, for one reason or another, Tesla FSD does not yet do as well in.

replies(1): >>41894239 #
98. eric_cc ◴[] No.41891792{3}[source]
It’s the counter-point to the “it doesn’t work for me” posts. Are you okay with those ones?
replies(1): >>41894208 #
99. dansiemens ◴[] No.41891844{4}[source]
Are you suggesting that individuals should carry that liability?
replies(1): >>41893851 #
100. _rm ◴[] No.41891890{4}[source]
What a laugh, would you take that deal?

Upside: you get paid a 200k salary, if all your code works perfectly. Downside: if it doesn't, you go to prison.

The users aren't compelled to use it. They can choose not to. They get to choose their own risks.

The internet is a gold mine of creatively moronic opinions.

replies(3): >>41892070 #>>41892279 #>>41894907 #
101. jsjohnst ◴[] No.41892020{4}[source]
And if someone wants to claim I’m cherry picking the video, happy to shoot a new video with this post visible on an iPad in the seat next to me. Is it autonomous? Hell no. Can it drive in Manhattan? Nope. But can it do >80% of my regular city (suburb outside nyc) and highway driving, yep.
replies(1): >>41903350 #
102. bdcravens ◴[] No.41892022{4}[source]
Assuming there's the kind of guard rails as in other industries where this is true, absolutely. (In other words, proper licensing and credentialing, and the ability to prevent a deployment legally)

I would also say that if something gets signed off on by management, that carries an implicit transfer of accountability up the chain from the individual contributor to whoever signed off.

103. ryandrake ◴[] No.41892028{5}[source]
Your take is understandable and not surprising on a site full of software developers. Somehow, the general software industry has ingrained this pessimistic and fatalistic dogma that says bugs are inevitable and there’s nothing you can do to prevent them. Since everyone believes it, it is a self-fulfilling prophecy and we just accept it as some kind of law of nature.

Holding software developers (or their companies) liable for defects would definitely kill off a part of the industry: the very large part that YOLOs code into production and races to get features released without rigorous and exhaustive testing. And why don’t they spend 90% of their time testing and verifying and proving their software has no defects? Because defects are inevitable and they’re not held accountable for them!

replies(4): >>41892592 #>>41892653 #>>41893464 #>>41893804 #
104. ryandrake ◴[] No.41892045{5}[source]
Somehow I doubt those savings would be passed along to the individual car buyer. Surely buying a car insured by the manufacturer would be much more expensive than buying the car plus your own individual insurance, because the car company would want to profit from both.
105. thomastjeffery ◴[] No.41892052{3}[source]
We are talking about the same thing: unpredictability. If you and everyone else can't predict what your car will do, then that seems objectively unsafe to me. It also sounds like we agree with each other.
106. bastawhiz ◴[] No.41892057[source]
Agreed. Automatic lane changes are the only feature of enhanced autopilot that I think I'd be interested in, solely for this reason.
107. bastawhiz ◴[] No.41892061[source]
I've got a deposit on the Dodge Charger Daytona EV
108. bastawhiz ◴[] No.41892065[source]
Probably autopilot, honestly.
109. hilsdev ◴[] No.41892069{5}[source]
We should hold pharma companies liable for every death. They make money off the success cases. Not doing so is another example of privatized profits and socialized risks/costs. Something like a program with reduced costs for those willing to sign away liability would help balance the social good against the risk.
110. moralestapia ◴[] No.41892070{5}[source]
Read the site rules.

And also, of course some people would take that deal, and of course some others wouldn't. Your argument is moot.

111. bastawhiz ◴[] No.41892086[source]
Autopilot has not been good. I have a cabin four hours from my home and I've used autopilot for long stretches on the highway. Some of the problems:

- Certain exits are not detected as such and the car violently veers right before returning to the lane. I simply can't believe they don't have telemetry to remedy this.

- Sometimes the GPS becomes miscalibrated. This makes the car think I'm taking an exit when I'm not, causing the car to abruptly reduce its speed to the speed of the ramp. It does not readjust.

- It frequently slows for "emergency lights" that don't exist.

- If traffic comes to a complete stop, the car accelerates way too hard and brakes hard when the car in front moves any substantial amount.

At this point, I'd rather have something less good than something which is an active danger. For all intents and purposes, my Tesla doesn't have reliable cruise control, period.

Beyond that, though, I simply don't have trust in Tesla software. I've encountered so many problems at this point that I can't possibly expect them to deliver a product that works reliably at any point in the future. What reason do I have to believe things will magically improve?

replies(1): >>41892206 #
112. ryandrake ◴[] No.41892088{5}[source]
Then why have we been just a year or two away from actual working self-driving, for the last 10 years? If I told my boss that my project would be done in a year, and then the following year said the same thing, and continued that for years, that’s not what “achieving results” means.
113. bastawhiz ◴[] No.41892091[source]
It was a few months ago
114. bastawhiz ◴[] No.41892094{4}[source]
> recommends changing lanes in preference to speeding up when a person is merging at too slow a speed

It doesn't matter, Tesla does neither. It always does the worst possible non-malicious behavior.

115. bastawhiz ◴[] No.41892106[source]
I'm glad for you, I guess.

I'll say the autopark was kind of neat, but parking has never been something I have struggled with.

116. bastawhiz ◴[] No.41892113[source]
Why would I be supportive of a system that has actively scared me for objectively scary reasons? Even if it's the worst reason, it's not a bad reason.
replies(1): >>41894891 #
117. absoflutely ◴[] No.41892206{3}[source]
I'll add that it randomly brakes hard on the interstate because it thinks the speed limit drops to 45. There aren't speed limit signs anywhere nearby on different roads that it could be mistakenly reading either.
replies(1): >>41892575 #
118. thunky ◴[] No.41892279{5}[source]
You can go to prison or die for being a bad driver, yet people choose to drive.
replies(2): >>41892668 #>>41893006 #
119. jsight ◴[] No.41892321{4}[source]
Yeah, it definitely isn't good enough to be used unsupervised. TBH, they've switched to eye and head tracking as the primary mechanism of attention monitoring now. It seems to work pretty well, now that I've had a chance to try it.

I'm not quite sure what you meant by your second paragraph, but I'm sure I have my blind spots and biases. I do have direct experience with various versions of 12.x though (12.3 and now 12.5).

120. thedougd ◴[] No.41892322{5}[source]
And it would be supplementary to the driver’s insurance, only covering incidents that happen while FSD is engaged. Arguably they would self insure and only purchase insurance for Tesla as a back stop to their liability, maybe through a reinsurance market.
121. jazzyjackson ◴[] No.41892411{4}[source]
I read all this thread and all I can say is not everything in the world is written down somewhere
122. diebeforei485 ◴[] No.41892425{3}[source]
While I don't disagree with your point in general, it should be noted that there is more to taking responsibility than just paying. Even if Mercedes Drive Pilot was enabled, anything that involves court appearances and criminal liability is still your problem if you're in the driver's seat.
123. diebeforei485 ◴[] No.41892440{4}[source]
> To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact, and we count all crashes in which the incident alert indicated an airbag or other active restraint deployed.

Scroll down to Methodology at https://www.tesla.com/VehicleSafetyReport

replies(1): >>41894536 #
124. browningstreet ◴[] No.41892569[source]
Was this the last version, or the version released today?

I’ve been pretty skeptical of FSD and didn’t use the last version much. Today I used the latest test version, enabled yesterday, and rode around SF, to and from GGP, and it did really well.

Waymo well? Almost. But whereas I haven’t ridden Waymo on the highway yet, FSD got me from Hunters Point to the east bay with no disruptions.

The biggest improvement I noticed was its optimization of highway progress... it'll change lanes, nicely, when the lane you're in is slower than the surrounding lanes. And when you're in the fast/passing lane it'll return to the next closest lane.

Definitely better than the last release.

replies(1): >>41892600 #
125. viraptor ◴[] No.41892572{4}[source]
That's a dangerous line and I don't think it's correct. Software I write shouldn't be relied on in critical situations. If someone makes that decision then it's on them not on me.

The line should be where a person tells others that they can rely on the software with their lives - as in the integrator for the end product. Even if I was working on the software for self driving, the same thing would apply - if I wrote some alpha level stuff for the internal demonstration and some manager decided "good enough, ship it", they should be liable for that decision. (Because I wouldn't be able to stop them / may have already left by then)

replies(3): >>41892970 #>>41893594 #>>41895839 #
126. bastawhiz ◴[] No.41892575{4}[source]
I noticed that this happens when the triangle on the map is slightly offset from the road, which I've attributed to miscalibrated GPS. It happens consistently when I'm in the right lane and pass an exit when the triangle is ever so slightly misaligned.
127. bastawhiz ◴[] No.41892580{4}[source]
My question is better rephrased as "what is legally considered an accident that needs to be reported?" If the car scrapes a barricade or curbs it hard but the airbags don't deploy and the car doesn't sense the damage, clearly they don't. There's a wide spectrum of issues up to the point where someone is injured or another car is damaged.
replies(1): >>41894118 #
128. viraptor ◴[] No.41892592{6}[source]
> that says bugs are inevitable and there’s nothing you can do to prevent them

I don't think people believe this as such. It may be the short way to write it, but what devs actually mean is "bugs are inevitable at the funding/time available". I often say "bugs are inevitable" when in practice it means "you're not going to pay a team for formal specification, validated implementation and enough reliable hardware".

Which business will agree to making the process 5x longer and requiring extra people? Especially if they're not forced there by regulation or potential liability?

129. bastawhiz ◴[] No.41892600[source]
I'm clearly not using the FSD today because I refused to complete my free trial of it a few months ago. The post of mine that you're responding to doesn't mention my troubles with Autopilot, which I highly doubt are addressed by today's update (see my other comment for a list of problems). They need to really, really prove to me that Autopilot is working reliably before I'd even consider accepting another free trial of FSD, which I doubt they'd do anyway.
130. everforward ◴[] No.41892653{6}[source]
It is true of every field I can think of. Food gets salmonella and what not frequently. Surgeons forget sponges inside of people (and worse). Truckers run over cars. Manufacturers miss some failures in QA.

Literally everywhere else, we accept that the costs of 100% safety are just unreasonably high. People would rather have a mostly safe device for $1 than a definitely safe one for $5. No one wants to pay to have every head of lettuce tested for E Coli, or truckers to drive at 10mph so they can’t kill anyone.

Software isn’t different. For the vast majority of applications where the costs of failure are low to none, people want it to be free and rapidly iterated on even if it fails. No one wants to pay for a formally verified Facebook or DoorDash.

replies(1): >>41893620 #
131. ukuina ◴[] No.41892668{6}[source]
Systems evolve to handle such liability: Drivers pass theory and practical tests to get licensed to drive (and periodically thereafter), and an insurance framework that gauges your risk-level and charges you accordingly.
replies(2): >>41893635 #>>41894827 #
132. tiahura ◴[] No.41892829[source]
I think that’s implicit in the promise of the upcoming-any-year-now unattended full self driving.
133. presentation ◴[] No.41892970{5}[source]
To be fair, maybe the software you write shouldn't be relied on in critical situations, but in this case the only places this software could be used are critical situations.
replies(1): >>41893226 #
134. _rm ◴[] No.41893006{6}[source]
Arguing for the sake of it; you wouldn't take that risk reward.

Most code has bugs from time to time even when highly skilled developers are being careful. None of them would drive if the fault rate was similar and the outcome was death.

replies(2): >>41894194 #>>41897174 #
135. viraptor ◴[] No.41893226{6}[source]
Ultimately - yes. But as I mentioned, the fact it's sold as ready for critical situations doesn't mean the developers thought/said it's ready.
replies(2): >>41893722 #>>41893726 #
136. omgwtfbyobbq ◴[] No.41893276{3}[source]
I can see it. How FSD performs depends on the environment. In some places it's great, in others I take over relatively frequently, although it's usually because it's being annoying, not because it poses any risk.

Being in the passenger seat is still off limits for obvious reasons.

137. ywvcbk ◴[] No.41893401{3}[source]
Also I wouldn’t be surprised if any potential wrongful death lawsuits could cost Tesla several magnitudes more than the current average.
138. ywvcbk ◴[] No.41893427{4}[source]
How much would every death or severe injury caused by FSD cost Tesla? We probably won't know anytime soon, but unlike almost anyone else they can afford to pay out virtually unlimited amounts, and courts will presumably take that into account.
139. m463 ◴[] No.41893442[source]
> the rate of improvement in the past two months has been much faster than before.

I suspect the free trials let Tesla collect orders of magnitude more data on events requiring human intervention. If each one is a learning event, it could exponentially improve things.

I tried it on a loaner car and thought it was pretty good.

One bit of feedback I would give Tesla: when you get some sort of FSD message on the center screen, make the text BIG and either make it linger longer, or let you recall it.

For example, it took me a couple of tries to read the message that gave instructions on how to give Tesla feedback on why you intervened.

EDIT: look at this graph

https://electrek.co/wp-content/uploads/sites/3/2024/10/Scree...

140. ywvcbk ◴[] No.41893444{5}[source]
What if someone gets killed because of some clear bug/error and the jury decides to award hundreds of millions just for that single case? I'm not sure it's trivial for insurance companies to account for that sort of risk.
replies(2): >>41894378 #>>41894784 #
141. ywvcbk ◴[] No.41893456{5}[source]
> criminally liable for every death.

The fact that people generally consume drugs voluntarily and make that decision after being informed about most of the known risks probably mitigates that to some extent. Being killed by someone else’s FSD car seems to be very different

replies(2): >>41893905 #>>41894860 #
142. ywvcbk ◴[] No.41893464{6}[source]
Punishing individual developers is of course absurd (unless intent can be proven). The company itself and the upper management, on the other hand? That would make perfect sense.
replies(1): >>41894921 #
143. dboreham ◴[] No.41893514{4}[source]
At some point the risk of going to prison overtakes family loyalty.
replies(1): >>41894646 #
144. kergonath ◴[] No.41893594{5}[source]
It’s not that complicated or outlandish. That’s how most engineering fields work. If a building collapses because of design flaws, then the builders and architects can be held responsible. Hell, if a car crashes because of a design or assembly flaw, the manufacturer is held responsible. Why should self-driving software be any different?

If the software is not reliable enough, then don’t use it in a context where it could kill people.

replies(1): >>41894185 #
145. kergonath ◴[] No.41893620{7}[source]
> Literally everywhere else, we accept that the costs of 100% safety are just unreasonably high.

Yes, but also in none of these situations would the consumer/customer/patient be held responsible. I don’t expect a system to be perfect, but I won’t accept any liability if it malfunctions as I use it the way it is intended. And even worse, I would not accept that the designers evade their responsibilities if it kills someone I know.

As the other poster said, I am happy to consider it safe enough the day the company accepts to own its issues and the associated responsibility.

> No one wants to pay for a formally verified Facebook or DoorDash.

This is untenable. Does nobody want a formally verified avionics system in their airliner, either?

replies(1): >>41895466 #
146. kergonath ◴[] No.41893635{7}[source]
Requiring formal licensing and possibly insurance for developers working on life-critical systems is not that outlandish. On the contrary, that is already the case in serious engineering fields.
147. elric ◴[] No.41893722{7}[source]
I think it should be fairly obvious that it's not the individual developers who are responsible/liable. In critical systems there is a whole chain of liability. That one guy in Nebraska who thanklessly maintains some open source lib that BigCorp is using in their car should obviously not be liable.
replies(1): >>41894847 #
148. gmueckl ◴[] No.41893726{7}[source]
But someone slapped that label on it and made a pinky promise that it's true. That person needs to accept liability if things go wrong. If person A is loud and clear that something isn't ready, but person B tells the customer otherwise, B is at fault.

Look, there are well established procedures in a lot of industries where products are relied on to keep people safe. They all require quite rigorous development and certification processes and sneaking untested alpha quality software through such a process would be actively malicious and quite possibly criminal in and of itself, at least in some industries.

replies(1): >>41893832 #
149. tsimionescu ◴[] No.41893804{6}[source]
> And why don’t they spend 90% of their time testing and verifying and proving their software has no defects? Because defects are inevitable and they’re not held accountable for them!

For a huge part of the industry, the reason is entirely different. It is because software that mostly works today but has defects is much more valuable than software that always works and has no defects 10 years from now. Extremely well informed business customers will pay for delivering a buggy feature today rather than wait two more months for a comprehensively tested feature. This is the reality of the majority of the industry: consumers care little about bugs (below some defect rate) and care far more about timeliness.

This of course doesn't apply to critical systems like automatic drivers or medical devices. But the vast majority of the industry is not building these types of systems.

150. viraptor ◴[] No.41893832{8}[source]
This is the beginning of the thread https://news.ycombinator.com/item?id=41891164

You're in violent agreement with me ;)

replies(1): >>41893935 #
151. izacus ◴[] No.41893851{5}[source]
The ones that are identified as making decisions leading to death, yes.

It's completely normal in other fields where engineers build systems that can kill.

replies(2): >>41894849 #>>41901038 #
152. kelnos ◴[] No.41893857{4}[source]
If the incidence of problems is some relatively small number, like 5% or 10%, it's very easily possible that you've never personally seen a problem, but overall we'd still consider that the total incidence of problems is unacceptable.

Please stop presenting arguments of the form "I haven't seen problems so people who have problems must be extreme outliers". At best it's ignorant, at worst it's actively in bad faith.

153. sokoloff ◴[] No.41893905{6}[source]
Imagine that in 2031, FSD cars could exactly halve all aspects of auto crashes (minor, major, single car, multi car, vs pedestrian, fatal/non, etc.)

Would you want FSD software to be developed or not? If you do, do you think holding devs or companies criminally liable for half of all crashes is the best way to ensure that progress happens?

replies(2): >>41894272 #>>41895047 #
154. latexr ◴[] No.41893935{9}[source]
No, the beginning of the thread is earlier. And with that context it seems clear to me that the “you” in the post you linked means “the company”, not “the individual software developer”. No one else in your replies seems confused by that, we all understand self-driving software wasn’t written by a single person that has ultimate decision power within a company.
replies(1): >>41894186 #
155. kelnos ◴[] No.41893945{5}[source]
> It's evident to Tesla drivers using Full Self-Driving (FSD) that the technology is rapidly improving and will likely succeed

Sounds like Tesla drivers have been at the Kool-Aid then.

But to be a bit more serious, the problem isn't necessarily that people don't think it's improving (I do believe it is) or that they will likely succeed (I'm not sure where I stand on this). The problem is that every year Musk says the next year will be the Year of FSD. And every next year, it doesn't materialize. This is like the Boy Who Cried Wolf; Musk has zero credibility with me when it comes to predictions. And that loss of credibility affects my feeling as to whether he'll be successful at all.

On top of that, I'm not convinced that autonomous driving that only makes use of cameras will ever be reliably safer than human drivers.

replies(1): >>41896091 #
156. kelnos ◴[] No.41893961{3}[source]
> HN is full of people that dont want self driving to succeed for some reason.

I would love for self-driving to succeed. I do long-ish car trips several times a year, and it would be wonderful if instead of driving, I could be watching a movie or working on something on my laptop.

I've tried Waymo a few times, and it feels like magic, and feels safe. Their record backs up that feeling. After everything I've seen and read and heard about Tesla, if I got into a Tesla with someone who uses FSD, I'd ask them to drive manually, and probably decline the ride entirely if they wouldn't honor my request.

> fortunately, it's clear as day to some of us that tesla approach will work

And based on my experience with Tesla FSD boosters, I expect you're basing that on feelings, not on any empirical evidence or actual understanding of the hardware or software.

replies(1): >>41903313 #
157. latexr ◴[] No.41893970[source]
> At the current rate of improvement it will be quite good within a year and in two or three I could see it actually reaching the point where it could operate unsupervised.

That’s not a reasonable assumption. You can’t just extrapolate “software rate of improvement”, that’s not how it works.

replies(1): >>41895883 #
158. kelnos ◴[] No.41894008[source]
While I certainly wouldn't object to how you handle merging cars (it's a nice, helpful thing to do!), I was always taught that if you want to merge into a lane, you are the sole person responsible for making that possible and making that safe. You need to get your speed and position right, and if you can't do that, you don't merge.

(That's for merging onto a highway from an entrance ramp, at least. If you're talking about a zipper merge due to a lane ending or a lane closure, sure, cooperation with other drivers is always the right thing to do.)

replies(5): >>41894884 #>>41895052 #>>41895109 #>>41895127 #>>41895325 #
159. moooo99 ◴[] No.41894020{4}[source]
I think it is a matter of perceived risk.

Realistically speaking, nuclear power is pretty safe. In the history of nuclear power, there were two major incidents. Considering the number of nuclear power plants around the planet, that is pretty good. However, as those two accidents demonstrated, the potential fallout of those incidents is pretty severe and widespread. I think this massively contributes to the perceived risks. The warnings towards the public were pretty clear. I remember my mom telling stories from the time the Chernobyl incident became known to the public and people became worried about the produce they usually had from their gardens. Meanwhile, everything that has been done to address the hazards of fossil based power generation is pretty much happening behind the scenes.

With coal and natural gas, it seems like people perceive the risks as more abstract. The radioactive emissions of coal power plants have been known for a while, and the (potential) dangers of fine particulate matter resulting from combustion are somewhat well known nowadays as well. However, the effects of those dangers seem much more abstract and delayed, leading people to not be as worried about them. It also shows on a smaller, more individual scale: people still buy ICE cars at large and install gas stoves into their houses despite induction being readily available and at times even cheaper.

replies(2): >>41894445 #>>41894935 #
160. kelnos ◴[] No.41894050[source]
Agree that only the numbers matter, but only if the numbers are comprehensive and useful.

How often does an autonomous driving system get the driver into a dicey situation, but the driver notices the bad behavior, takes control, and avoids a crash? I don't think we have publicly-available data on that at all.

You admit that you ran into some of these sorts of situations during your trial. Those situations are unacceptable. An autonomous driving system should be safer than a human driver, and should not make mistakes that a human driver would not make.

Despite all the YouTube videos out there of people doing unsafe things with Tesla FSD, I expect that most people that use it are pretty responsible, are paying attention, and are ready to take over if they notice FSD doing something wrong. But if people need to do that, it's not a safe, successful autonomous driving system. Safety means everyone can watch TV, mess around on their phone, or even take a nap, and we still end up with a lower crash rate than with human drivers.

The numbers that are available can't tell us if that would be the case. My belief is that we're absolutely not there.

161. watwut ◴[] No.41894061{6}[source]
Consideration also means making space for a slower car that wants to merge, and Germans do that.
162. kelnos ◴[] No.41894083{4}[source]
No, that's absolutely not how this works. Confounding factors are things that make your data not tell you what you are actually trying to understand. You can't just hand-wave that away, sorry.

Consider: what I expect is actually true based on the data is that Tesla FSD is as safe or safer than the average human driver, but only if the driver is paying attention and is ready to take over in case FSD does something unsafe, even if FSD doesn't warn the driver it needs to disengage.

That's not an autonomous driving system. Which is potentially fine, but the value prop of that system is low to me: I have to pay just as much attention as if I were driving manually, with the added problem that my attention is going to start to wander because the car is doing most of the work, and the longer the car successfully does most of the work, the more I'm going to unconsciously believe I can allow my attention to slip.

I do like current common ADAS features because they hit a good sweet spot: I still need to actively hold onto the wheel and handle initiating lane changes, turns, stopping and starting at traffic lights and stop signs, etc. I look at the ADAS as a sort of "backup" to my own driving, and not as what's primarily in control of the car. In contrast, Tesla FSD wants to be primarily in control of the car, but it's not trustworthy enough to do that without constant supervision.

replies(1): >>41901681 #
163. kelnos ◴[] No.41894118{5}[source]
And not to move the goalposts, but I think we should also be tracking any time the human driver feels they need to take control because the autonomous system did something they didn't believe was safe.

That's not a crash (fortunately!), but it is a failure of the autonomous system.

This is hard to track, though, of course: people might take over control for reasons unrelated to safety, or people may misinterpret something that's safe as unsafe. So you can't just track this from a simple "human driver took control".

164. kelnos ◴[] No.41894128{4}[source]
How about when it turns into oncoming traffic, the driver yanks the wheel, manages to get back on track, and avoids a crash? Do we know how often things like that happen? Because that's also a failure of the system, and that should affect how reliable and safe we rate these things. I expect we don't have data on that.

Also how about: it turns into oncoming traffic, but there isn't much oncoming traffic, and that traffic swerves to get out of the way, before FSD realizes what it's done and pulls back into the correct lane. We certainly don't have data on that.

165. kelnos ◴[] No.41894161{4}[source]
We do that for reasons of practicality: the US is built around cars. If we were to revoke the licenses of the 20% worst drivers, most of those people would be unable to get to work and end up homeless.

So we accept that there are some bad drivers on the road because the alternative would be cruel.

But we don't have to accept bad software drivers.

replies(1): >>41898481 #
166. ◴[] No.41894173{5}[source]
167. krisoft ◴[] No.41894185{6}[source]
I think the example here is that the designer draws a bridge for a railway model, and someone decides to use the same design and sends real locomotives across it. Is the original designer (who neither intended nor could have foreseen this) liable in your understanding?
replies(3): >>41894354 #>>41894366 #>>41894816 #
168. viraptor ◴[] No.41894186{10}[source]
If the message said "you release software", or "approve" or "produce", or something like that, sure. But it said "you write software" - and I don't think that can apply to a company, because writing is what individuals do. But yeah, maybe that's not what the author meant.
replies(1): >>41894422 #
169. notahacker ◴[] No.41894194{7}[source]
Or to put it even more straightforwardly: people who choose to drive rarely expect to cover more than a few tens of thousands of miles per year. People who write autonomous-driving code effectively "drive" a billion miles per year, encounter far more edge cases they are expected to handle in a non-dangerous manner, and have to handle them via advance planning and interactions with a lot of other people's code.

The only practical way around this which permits autonomous vehicles (which are apparently dependent on much more complex and intractable codebases than, say, avionics) is a much higher threshold of criminal responsibility than the "serious consequences resulted from the one-off execution of a dangerous manoeuvre which couldn't be justified in context" standard that sends human drivers to jail. And of course that double standard will be problematic if "willingness to accept liability" is the only safety threshold.

170. kelnos ◴[] No.41894208{4}[source]
I think the problem with the "it works for me" type posts is that most people reading them think the person writing it is trying to refute what the person with the problem is saying. As in, "it works for me, so the problem must be with you, not the car".

I will refrain from commenting on whether or not that's a fair assumption to make, but I think that's where the frustration comes from.

I think when people make "WFM" posts, it would go a long way to acknowledge that the person who had a problem really did have a problem, even if implicitly.

"That's a bummer; I've driven thousands of miles using FSD, and I've felt safe and have never had to intervene. I wonder what's different about our travel that's given us such different experiences."

That kind of thing would be a lot more palatable, I think, even if you might think it's silly/tiring/whatever to have to do that every time.

171. kelnos ◴[] No.41894239{4}[source]
I think GP is going too far in calling you a liar, but I think for the most part your FSD praise is just kinda... unimportant and irrelevant. GP's aggressive attitude notwithstanding, I think most reasonable people will agree that FSD handles a lot of situations really well, and believe that some people have travel routes where FSD always handles things well.

But ok, great, so what? If that wasn't the case, FSD would be an unmitigated disaster with a body count in the tens of thousands. So in a comment thread about someone talking about the problems and unsafe behavior they've seen, a "well it works for me" reply is just annoying noise, and doesn't really add anything to the discussion.

replies(1): >>41898215 #
172. mrjin ◴[] No.41894269[source]
Even if it does, can it resurrect the deceased?
replies(1): >>41894616 #
173. blackoil ◴[] No.41894272{7}[source]
Say the cars have near-zero casualties in the northern hemisphere but occasionally fail for cars driving topsy-turvy in the south. If the company knew about it and chose to ignore it because of profits, then yes, they should be charged criminally.
174. quailfarmer ◴[] No.41894278{3}[source]
If the answer was yes, presumably there’s a tradeoff where that deal would be reasonable.
175. mrjin ◴[] No.41894279[source]
I would not even try. The reason is simple: there is absolutely no capacity for understanding in any of the current self-proclaimed autonomous driving approaches, no matter how well they are marketed.
176. ndsipa_pomu ◴[] No.41894342[source]
> As soon as it’s good enough for Tesla to accept liability for accidents.

That makes a lot of sense and not just from a selfish point of view. When a person drives a vehicle, then the person is held responsible for how the vehicle behaves on the roads, so it's logical that when a machine drives a vehicle that the machine's manufacturer/designer is held responsible.

It's a complete con that Tesla is promoting their autonomous driving, but also having their vehicles suddenly switch to non-autonomous driving which they claim moves the responsibility to the human in the driver seat. Presumably, the idea is that the human should have been watching and approving everything that the vehicle has done up to that point.

replies(2): >>41894666 #>>41894794 #
177. ndsipa_pomu ◴[] No.41894354{7}[source]
That's a ridiculous argument.

If a construction firm takes an arbitrary design and then tries to build it in a totally different environment and for a different purpose, then the construction firm is liable, not the original designer. It'd be like Boeing taking a child's paper aeroplane design and making a passenger jet out of it and then blaming the child when it inevitably fails.

replies(3): >>41894574 #>>41894653 #>>41895101 #
178. kergonath ◴[] No.41894366{7}[source]
Someone, at some point signed off on this being released. Not thinking things through seriously is not an excuse to sell defective cars.
179. ndsipa_pomu ◴[] No.41894378{6}[source]
Not trivial, but that is exactly the kind of thing that successful insurance companies factor into their premiums, or specifically exclude those scenarios (e.g. not covering war zones for house insurance).
180. ndsipa_pomu ◴[] No.41894412{3}[source]
That's not a workable idea as it'd just encourage corporations to obfuscate the ownership of the plant (e.g. shell companies) and drastically underestimate the actual risks of catastrophes. Ultimately, the government will be left holding the bill for nuclear catastrophes, so it's better to just recognise that and get the government to regulate the energy companies.
replies(1): >>41894859 #
181. londons_explore ◴[] No.41894414{4}[source]
I suspect the performance might vary widely depending on whether you're on a road in California they have a lot of data on, or a road FSD has rarely seen before.
182. latexr ◴[] No.41894422{11}[source]
> and I don't think that can apply to a company, because writing is what individuals do.

By that token, no action could ever apply to a company—including approving, producing, or releasing—since it is a legal entity, a concept, not a physical thing. For all those actions there was a person actually doing it in the name of the company.

It’s perfectly normal to say, for example, “GenericCorp wrote a press-release about their new product”.

183. josefx ◴[] No.41894426[source]
> it will be quite good within a year

The regressions are getting worse. For the first release announcement it was only hitting regulatory hurdles, and now the entire software stack is broken? They should fire whoever is in charge and restore the state Elon tried to release a decade ago.

184. londons_explore ◴[] No.41894434{3}[source]
So far, data points to it having far fewer crashes than a human alone. Tesla's data shows that, but 3rd party data seems to imply the same.
replies(2): >>41894584 #>>41895126 #
185. pyrale ◴[] No.41894445{5}[source]
> However, the effects of those danger seem much more abstract and delayed, leading people to not be as worried about it.

Climate change is very visible in the present day to me. People are protesting about it frequently enough that it's hard to claim they are not worried.

replies(1): >>41895351 #
186. rvnx ◴[] No.41894536{5}[source]
This is for Autopilot, which is the car-following system for highways. If you are in cruise control and staying in your lane, not much is supposed to happen.

The FSD numbers are much more hidden.

The general accident rate is 1 per 400’000 miles driven.

FSD has one “critical disengagement” (i.e., a disengagement that would likely have ended in an accident had the human or the safety braking not intervened) every 33 miles driven.

It means that to reach unsupervised operation at human quality they would need to improve it roughly 10,000-fold in a few months. Not saying it is impossible, just highly optimistic. In 10 years we will be there, but in 2 months it sounds a bit overpromising.
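
To make the arithmetic concrete, here is a rough back-of-the-envelope sketch (a hypothetical Python calculation using the figures above, which are themselves estimates):

    # Back-of-the-envelope: how much does FSD need to improve to match
    # an average human driver? (Both figures are rough estimates.)
    human_miles_per_accident = 400_000    # ~1 accident per 400,000 miles
    fsd_miles_per_disengagement = 33      # ~1 critical disengagement per 33 miles

    improvement_needed = human_miles_per_accident / fsd_miles_per_disengagement
    print(f"Required improvement: ~{improvement_needed:,.0f}x")   # ~12,121x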

187. ◴[] No.41894574{8}[source]
188. japhyr ◴[] No.41894575[source]
> Just means that when I'm cruising in the right lane with autopilot I have to take over if a car tries to merge.

Which brings it right back to the original criticism of Tesla's "self driving" program. What you're describing is assisted driving, not anything close to "full self driving".

189. rvnx ◴[] No.41894584{4}[source]
It disconnects in dangerous situations, so once every 33 to 77 miles driven (depending on the version), versus roughly one accident per 400,000 miles for a human.
190. mensetmanusman ◴[] No.41894610{4}[source]
Software runs on hardware whose bits can be flipped by gamma rays.
replies(3): >>41894643 #>>41894885 #>>41894887 #
191. LadyCailin ◴[] No.41894616{3}[source]
But people driving manually kill people all the time too. The bar for self driving isn’t «does it never kill anyone», it’s «does it kill fewer people than manual driving». We’re not there yet, and Tesla’s «FSD» is marketing bullshit, but we certainly will be there one day, and at that point we need to understand what we as a society will do when a self-driving car kills someone. It’s not obvious what the best solution is there, and we need to continue to have societal discussions to hash that out, but the correct solution definitely isn’t «don’t use self driving».
replies(2): >>41894637 #>>41895419 #
192. amelius ◴[] No.41894637{4}[source]
No, because every driver thinks they are better than average.

So nobody will accept it.

replies(3): >>41894755 #>>41894992 #>>41901080 #
193. averageRoyalty ◴[] No.41894644[source]
I'm not disagreeing with your experience. But if it's as bad as you say, why aren't we seeing tens or hundreds of FSD fatalities per day or at least per week? Even if only 1000 people globally have it on, these issues sound like we should be seeing tens per week.
replies(1): >>41896004 #
194. dlisboa ◴[] No.41894646{5}[source]
There is no risk of going to prison. It just doesn’t happen, never has and never will, no matter how unfair that is. Board members and CEOs are not held accountable, ever.
replies(2): >>41894892 #>>41895119 #
195. wongarsu ◴[] No.41894653{8}[source]
Or alternatively, if Boeing uses wood screws to attach an airplane door and the screws fail, that's on Boeing, not the airline, pilot or screw manufacturer. But if it's sold as an aerospace-grade attachment bolt with attachments for safety wire and a spec sheet that suggests the required loads are within design parameters, then it's the bolt manufacturer's fault when it fails, and they might have to answer for any deaths resulting from that. Unless Boeing knew or should have known that the bolts weren't actually as good as claimed, in which case the buck passes back to them.

Of course that's wildly oversimplifying and multiple entities can be at fault at once. My point is that these are normal things considered in regular engineering and manufacturing

196. rvnx ◴[] No.41894658{3}[source]
"Just round the corner" (2016)
replies(1): >>41897808 #
197. andrewaylett ◴[] No.41894666{3}[source]
The responsibility doesn't shift; it always lies with the human. One problem is that humans are notoriously poor at maintaining attention when supervising automation.

Until the car is ready to take over as legal driver, it's foolish to set the human driver up for failure in the way that Tesla (and the humans driving Tesla cars) do.

replies(2): >>41894801 #>>41896371 #
198. rvnx ◴[] No.41894673{4}[source]
This is Autopilot, not FSD which is an entirely different product
199. Animats ◴[] No.41894709{5}[source]
> and Tesla has accumulated more relevant data than any of its competitors.

Has it really? How much data is each car sending to Tesla HQ? Anybody actually know? That's a lot of cell phone bandwidth to pay for, and a lot of data to digest.

Vast amounts of data about routine driving is not all that useful, anyway. A "highlights reel" of interesting situations is probably more valuable for training. Waymo has shown some highlights reels like that, such as the one were someone in a powered wheelchair is chasing a duck in the middle of a residential street.

replies(1): >>41896324 #
200. renegade-otter ◴[] No.41894710{3}[source]
In the United States? Come on. Boeing executives are not in jail - they are getting bonuses.
replies(1): >>41894852 #
201. heresie-dabord ◴[] No.41894722[source]
> After the system error, I lost all trust in FSD from Tesla.

May I ask how this initial trust was established?

replies(1): >>41896029 #
202. the8472 ◴[] No.41894755{5}[source]
I expect insurance to figure out the relative risks and put a price sticker on that decision.
203. jefftk ◴[] No.41894760[source]
Note that Mercedes does take liability for accidents with their (very limited level) level 3 system: https://www.theverge.com/2023/9/27/23892154/mercedes-benz-dr...
replies(2): >>41894805 #>>41899207 #
204. kingkongjaffa ◴[] No.41894770[source]
> right turns on red

This is an idiosyncrasy of the US (maybe other places too?), and I wonder if it's easier to do self-driving at junctions in countries without this rule.

replies(1): >>41895828 #
205. kalenx ◴[] No.41894784{6}[source]
It is trivial and they've done it for ages. It's called reinsurance.

Basically (_very_ basically, there's more to it) the insurance company insures itself against large claims.
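
A toy sketch of the excess-of-loss flavour of the idea (all numbers invented; real treaties are far more involved):

    # Toy excess-of-loss reinsurance: the primary insurer keeps losses up to a
    # retention; the reinsurer covers the layer above it, up to a cap.
    def split_claim(claim, retention=10_000_000, reinsured_layer=200_000_000):
        primary = min(claim, retention)
        reinsured = min(max(claim - retention, 0), reinsured_layer)
        uncovered = max(claim - retention - reinsured_layer, 0)
        return primary, reinsured, uncovered

    # A rare, jury-awarded mega-claim of $150M
    print(split_claim(150_000_000))   # -> (10000000, 140000000, 0)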

replies(1): >>41895090 #
206. f1shy ◴[] No.41894794{3}[source]
>> When a person drives a vehicle, then the person is held responsible for how the vehicle behaves on the roads, so it's logical that when a machine drives a vehicle that the machine's manufacturer/designer is held responsible.

Never really understood the supposed dilemma. What happens when the brakes fail because of bad quality?

replies(2): >>41894839 #>>41895747 #
207. f1shy ◴[] No.41894801{4}[source]
What?! So if there is a failure and the car goes full throttle (no autonomous car) it is my responsibility?! You are pretty wrong!!!
replies(3): >>41895481 #>>41895527 #>>41906386 #
208. f1shy ◴[] No.41894805{3}[source]
Yes. That is the only way. That being said, I want to see the first incidents and how they are resolved.
209. ekianjo ◴[] No.41894812{4}[source]
How is that working with Boeing?
replies(1): >>41895001 #
210. f1shy ◴[] No.41894816{7}[source]
Are you serious?! You must be trolling!
replies(1): >>41895151 #
211. ekianjo ◴[] No.41894827{7}[source]
And yet tens of thousands of people die on the roads right now every year. Working well?
212. arzig ◴[] No.41894839{4}[source]
Then this would be manufacturing liability because they are not fit for purpose.
213. f1shy ◴[] No.41894847{8}[source]
It depends. If you write bad software and skip reviews and processes, you may be liable. Even if you are told to do something, if you know it is wrong, you should say so. Right now I’m in the middle of s*t because I spoke up.
replies(1): >>41896160 #
214. A4ET8a8uTh0 ◴[] No.41894849{6}[source]
Pretty much. Fuck. I just watched higher-ups sign off on a project that I know for a fact has defects all over the place going into production, despite our very explicit "don't do it" (not quite Tesla-level consequences, but still resulting in real issues for real people). The sooner we can start having people in jail for knowingly approving half-baked software, the sooner it will improve.
replies(1): >>41895257 #
215. f1shy ◴[] No.41894852{4}[source]
But some little guy down the line will pay for it. Look up the Eschede ICE accident.
replies(1): >>41894991 #
216. f1shy ◴[] No.41894859{4}[source]
The problem I see there is that if “corporations are responsible” then no one is. That is, no real person has the responsibility, and acts accordingly.
217. ekianjo ◴[] No.41894860{6}[source]
> make that decision after being informed about most of the known risks

Like for the COVID-19 vaccines? Experimental yet given to billions without ever showing them a consent form.

replies(1): >>41895076 #
218. lotsofpulp ◴[] No.41894884{3}[source]
>cooperation with other drivers is always the right thing to do

Correct, including when the other driver may not have the strictly interpreted legal right of way. You don't know if their vehicle is malfunctioning, or if the driver is malfunctioning, or if they are being overly aggressive or distracted on their phone.

But most of the time, on an onramp to a highway, people on the highway in the lane that is being merged into need to be taking into account the potential conflicts due to people merging in from the acceleration lane. Acceleration lanes can be too short, other cars may not have the capability to accelerate quickly, other drivers may not be as confident, etc.

So while technically, the onus is on people merging in, a more realistic rule is to take turns whenever congestion appears, even if you have right of way.

219. aaronmdjones ◴[] No.41894885{5}[source]
Which is why hardware used to run safety-critical software is made redundant.

Take the Boeing 777 Primary Flight Computer for example. This is a fully digital fly-by-wire aircraft. There are 3 separate racks of equipment housing identical flight computers; 2 in the avionics bay underneath the flight deck, 1 in the aft cargo section. Each flight computer has 3 separate processors, supporting 2 dissimilar instruction set architectures, running the same software built by 3 separate compilers. Each flight computer captures instances of the software not agreeing about an action to be undertaken and wins by majority vote. The processor that makes these decisions is different in each flight computer.

The power systems that provide each flight computer are also fully redundant; each computer gets power from a power supply assembly, which receives 2 power feeds from 3 separate power supplies; no 2 power supply assemblies share the same 2 sources of power. 2 of the 3 power systems (L engine generator, R engine generator, and the hot battery bus) would have to fail and the APU would have to be unavailable in order to knock out 1 of the 3 computers.

This system has never failed in 30 years of service. There's still a primary flight computer disconnect switch on the overhead panel in the cockpit, taking the software out of the loop, to logically connect all of your control inputs to the flight surface actuators. I'm not aware of it ever being used (edit: in a commercial flight).
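
A minimal sketch of the majority-vote idea (hypothetical Python for illustration only; it bears no resemblance to the actual 777 implementation):

    from collections import Counter

    def vote(channel_outputs):
        """Return the command agreed on by a majority of redundant channels.

        Real voters compare values within a tolerance rather than for exact
        equality, and flag the outvoted channel for maintenance.
        """
        winner, count = Counter(channel_outputs).most_common(1)[0]
        if count <= len(channel_outputs) // 2:
            raise RuntimeError("no majority; fall back to direct/manual mode")
        return winner

    # Three channels compute an elevator command; one has a fault and is outvoted.
    print(vote([2.5, 2.5, 9.9]))   # -> 2.5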

replies(1): >>41895814 #
220. chgs ◴[] No.41894887{5}[source]
You can control for that. Multiple machines doing rival calculations, for example.
221. paulcole ◴[] No.41894891{3}[source]
How you feel while riding isn’t an objective thing. It’s entirely subjective. You and I can sit side by side and feel differently about the same experience.

I don’t see how this is in any way objective besides the fact that you want it to be objective.

You can support things for society that scare you and feel unsafe because you can admit your feelings are subjective and the thing is actually safer than it feels to you personally.

replies(1): >>41896060 #
222. rvnx ◴[] No.41894892{6}[source]
https://fortune.com/2023/01/24/google-meta-spotify-layoffs-c...

As they say, they take "full responsibility"

223. chgs ◴[] No.41894907{5}[source]
Need far more regulation of the software industry, far too many people working in it fail to understand the scope of what they do.

Civil engineer kills someone with a bad building, jail. Surgeon removes the wrong lung, jail. Computer programmer kills someone, “oh well it’s your own fault”.

replies(2): >>41895200 #>>41903272 #
224. chgs ◴[] No.41894921{7}[source]
You have one person in that RACI accountable box. That’s the engineer signing it off as fit. They are held accountable, including with jail if required.
225. lotsofpulp ◴[] No.41894929[source]
I also went into car shopping with that opinion, but the options are bleak in terms of other carmakers' software. For some reason, if you want basic software features of a Tesla, the other carmakers want an extra $20k+ (and still don't have some).

A big example is why do the other carmakers not yet offer camera recording on their cars? They are all using cameras all around, but only Tesla makes it available to you in case you want the footage? Bizarre. And then they want to charge you an extra $500+ for one dash cam on the windshield.

I even had Carplay/Android Auto as a basic requirement, but I was willing to forgo that after trying out the other brands. And not having to spend hours at a dealership doing paperwork was amazing. Literally bought the car on my phone and was out the door within 15 minutes on the day of my appointment.

replies(1): >>41896711 #
226. brightball ◴[] No.41894935{5}[source]
During power outages, having natural gas in your home is a huge benefit. Many in my area just experienced it with Helene.

You can still cook. You can still get hot water. If you have gas logs you still have a heat source in the winter too.

These trade offs are far more important to a lot of people.

replies(1): >>41895342 #
227. herdcall ◴[] No.41894964[source]
Same here, but I tried the new 12.5.4.1 yesterday and the difference is night and day. It was near flawless except for some unexplained slowdowns, and you don't even need to hold the steering wheel anymore (it detects attention by looking at your face). They are clearly improving rapidly.
replies(1): >>41895025 #
228. renegade-otter ◴[] No.41894991{5}[source]
There are many examples.

The Koch brothers, famous "anti-regulatory state" warriors, have fought oversight so hard that their gas pipelines were allowed to be barely intact.

Two teens get into a truck, turn the ignition key - and the air explodes:

https://www.southcoasttoday.com/story/news/nation-world/1996...

Does anyone go to jail? F*K NO.

replies(1): >>41895304 #
229. A4ET8a8uTh0 ◴[] No.41894992{5}[source]
Assuming I understand the argument flow correctly, I think I disagree. If there is one thing that the past few decades have confirmed quite conclusively, it is that people will trade a lot of control and sense away in the name of convenience. The moment FSD reaches that "take me home, I am too drunk to drive" sweet spot of reliability, I think it would be accepted; maybe even required by law. It does not seem to be there yet.
230. mlinhares ◴[] No.41895001{5}[source]
People often forget corporations don’t go to jail. Murder when you’re not a person ends up with a slap on the wrist.
231. lolinder ◴[] No.41895025[source]
How many miles have you driven since the update yesterday? OP described a half dozen different failure modes in a variety of situations that seem to indicate quite extensive testing before they turned it off. How far did you drive the new version and in what circumstances?
replies(1): >>41895241 #
232. ywvcbk ◴[] No.41895047{7}[source]
From a utilitarian perspective sure, you might be right but how do you exempt those companies from civil liability and make it impossible for victims/their families to sue the manufacturer? Might be legally tricky (driver/owner can explicitly/implicitly agree with the EULA or other agreements, imposing that on third parties wouldn’t be right).
replies(1): >>41895366 #
233. lolinder ◴[] No.41895052{3}[source]
I was taught that in every situation you should act as though you are the sole person responsible for making the interaction safe.

If you're the one merging? It's on you. If you're the one being merged into? Also you.

If you assume that every other driver has a malfunctioning vehicle or is driving irresponsibly then your odds of a crash go way down because you assume that they're going to try to merge incorrectly.

234. ywvcbk ◴[] No.41895076{7}[source]
Yes, but worse. Nobody physically forced anyone to get vaccinated so you still had some choice. Of course legally banning individuals from using public roads or sidewalks unless they give up their right to sue Tesla/etc. might be an option.
235. ywvcbk ◴[] No.41895090{7}[source]
I’m not sure Boeing etc. could have insured any liability risk resulting from engineering/design flaws in their vehicles?
236. bossyTeacher ◴[] No.41895100{4}[source]
Doesn't seem to happen in the medical and airplane industries; otherwise, Boeing would most likely not exist as a company anymore.
replies(1): >>41895177 #
237. krisoft ◴[] No.41895101{8}[source]
> That's a ridiculous argument.

Not making an argument. Asking a clarifying question about someone else’s.

> It'd be like Boeing taking a child's paper aeroplane design and making a passenger jet out of it and then blaming the child when it inevitably fails.

Yes exactly. You are using the same example I used to say the same thing. So which part of my message was ridiculous?

replies(1): >>41895440 #
238. llamaimperative ◴[] No.41895109{3}[source]
More Americans should go drive on the Autobahn. Everyone thinks the magic is “omg no speed limits!” which is neat but the really amazing thing is that NO ONE sits in the left hand lane and EVERYONE will let you merge immediately upon signaling.

It’s like a children’s book explanation of the nice things you can have (no speed limits) if everyone could just stop being such obscenely selfish people (like sitting in the left lane or preventing merges because of some weird “I need my car to be in front of their car” fixation).

replies(1): >>41895129 #
239. llamaimperative ◴[] No.41895119{6}[source]
https://www.justice.gov/opa/pr/former-enron-ceo-jeffrey-skil...
240. macNchz ◴[] No.41895127{3}[source]
At least in the northeast/east coast US there are still lots of old parkways without modern onramps, where moving over to let people merge is super helpful. Frequently these have bad visibility and limited room to accelerate if any at all, so doing it your way is not really possible.

For example:

I use this onramp fairly frequently. It’s rural and rarely has much traffic, but when there is you can get stuck for a while trying to get on because it’s hard to see the coming cars, and there’s not much room to accelerate (unless people move over, which they often do). https://maps.app.goo.gl/ALt8UmJDzvn89uvM7?g_st=ic

Preemptively getting in the left lane before going under this bridge is a defensive safety maneuver I always make—being in the right lane nearly guarantees some amount of conflict with merging traffic.

https://maps.app.goo.gl/PumaSM9Bx8iyaH9n6?g_st=ic

241. llamaimperative ◴[] No.41895126{4}[source]
Tesla does not release the data required to substantiate such a claim. It simply doesn’t and you’re either lying or being lied to.
replies(1): >>41895194 #
242. rvnx ◴[] No.41895129{4}[source]
Tesla FSD on German Autobahn = most dangerous thing ever. The car has never seen this rule and it's not ready for a 300km/h car behind you.
replies(1): >>41902386 #
243. kybernetikos ◴[] No.41895142[source]
> But at the end of the day, only the numbers matter.

Are these the numbers reported by tesla, or by some third party?

244. anonu ◴[] No.41895150[source]
My experience has been directionally the same as yours but not of the same magnitude. There's a lot of room for improvement, but it's still very good. I'm in a slightly suburban setting... I suspect you're in a denser location than me, in which case your experience may be different.
replies(1): >>41895212 #
245. krisoft ◴[] No.41895151{8}[source]
I assure you I am not trolling. You appear to have misread my message.

Take a deep breath. Read my message one more time carefully. Notice the question mark at the end of the last sentence. Think about it. If after that you still think I’m trolling you or anyone else I will be here and happy to respond to your further questions.

246. llamaimperative ◴[] No.41895153{5}[source]
The crux of the issue is that your interpretation of performance cannot be trusted. It is absolutely irrelevant.

Even a system that is 99% reliable will honestly feel very, very good to an individual operator, but would result in huge loss of life when scaled up.

Tesla can earn more trust by releasing the data necessary to evaluate the system’s performance. The fact that they do not is far more informative than a bunch of commentators saying “hey it’s better than it was last month!” for the last several years, even if it is true that it’s getting better and even if it’s true that it’s hypothetically possible to get to the finish line.

247. llamaimperative ◴[] No.41895165{4}[source]
Very strange, not wanting poorly controlled 4,000 lb steel cages driving around at 70 mph, stewarded by people who call “only had to stop it from killing me 4 times today!” a great success.
248. jsvlrtmred ◴[] No.41895177{5}[source]
Perhaps one can debate whether it happens often enough or severely enough, but it certainly happens. For example, and only the first one to come to mind - the president of PIP went to jail.
249. londons_explore ◴[] No.41895194{5}[source]
Tesla releases this data: https://www.tesla.com/VehicleSafetyReport
replies(3): >>41895375 #>>41896186 #>>41897290 #
250. caddemon ◴[] No.41895200{6}[source]
I've never heard of a surgeon going to jail over a genuine mistake even if it did kill someone. I'm also not sure what that would accomplish - take away their license to practice medicine sure, but they're not a threat to society more broadly.
251. amelius ◴[] No.41895212[source]
Their irresponsible behavior says enough. Even if they fix all their technical issues, they are not driven by a safety culture.

The first question that comes to their minds is not "how can we prevent this accident?" but it's "how can we further inflate this bubble?"

252. AndroidKitKat ◴[] No.41895241{3}[source]
I recently took a 3000 mile road trip on 12.5.4.1 on a mix of interstate, country roads, and city streets and there were only a small handful of instances where I felt like FSD completely failed. It's certainly not perfect, but I have never had the same failures that the original thread poster had.
253. IX-103 ◴[] No.41895257{7}[source]
Should we require Professional Engineers to sign off on such projects the same way they are required to for other safety critical infrastructure (like bridges and dams)? The Professional Engineer that signed off is liable for defects in the design. (Though, of course, if the design is not followed then liability can shift back to the company that built it)
replies(1): >>41898367 #
254. ◴[] No.41895291[source]
255. rainsford ◴[] No.41895301[source]
Arguably the problem with Tesla self-driving is that it's stuck in an uncanny valley of performance where it's worse than better performing systems but also worse from a user experience perspective than even less capable systems.

Less capable driver assistance systems might help the driver out (e.g. adaptive cruise control), but leave no doubt that the human is still driving. Tesla, though, goes far enough that it takes over driving from the human, yet it isn't reliable enough that the human can stop paying attention; they must stay ready to take over at a moment's notice. This seems like the worst of all possible worlds, since you are disengaged from driving but still have to maintain alertness.

Autopilots in airplanes are much the same way, pilots can't just turn it on and take a nap. But the difference is that nothing an autopilot is going to do will instantly crash the plane, while Tesla screwing up will require split second reactions from the driver to correct for.

I feel like the real answer to your question is that having reasonable confidence in self-driving cars beyond "driver assistance" type features will ultimately require a car that will literally get from A to B reliably even if you're taking a nap. Anything close to that but not quite there is in my mind almost worse than something more basic.

256. IX-103 ◴[] No.41895304{6}[source]
To be fair, the teens knew about the gas leak and started the truck in an attempt to get away. Gas leaks like that shouldn't happen easily, but people near pipelines like that should also be made aware of the risks of gas leaks, as some leaks are inevitable.
replies(1): >>41897806 #
257. Peanuts99 ◴[] No.41895317{3}[source]
If this is what society has to pay to improve Tesla's product, then perhaps they should have to share the software with other car manufacturers too.

Otherwise every car brand will have to kill a whole heap of people too until they manage to make an FSD system.

replies(1): >>41896045 #
258. rainsford ◴[] No.41895325{3}[source]
> You need to get your speed and position right, and if you can't do that, you don't merge.

I agree, but my observation has been that the majority of drivers are absolutely trash at doing that and I'd rather they not crash into me, even if would be their fault.

Honestly I think Tesla's self-driving technology is long on marketing and short on performance, but it really helps their case that a lot of the competition is human drivers who are completely terrible at the job.

259. moooo99 ◴[] No.41895342{6}[source]
Granted, that is a valid concern if power outages are more frequent in your area. I have never experienced a power outage personally, so it is not something I had ever thought about. However, I feel like with solar power and battery storage systems becoming increasingly widespread, this won't be a major concern for much longer.
replies(1): >>41898444 #
260. moooo99 ◴[] No.41895351{6}[source]
Climate change is certainly visible, although the extent to which areas are affected varies wildly. However, there are still shockingly many people who have a hard time attributing ever more frequent natural disasters and more extreme weather patterns to climate change.
261. Majromax ◴[] No.41895366{8}[source]
> how do you exempt those companies from civil liability and make it impossible for victims/their families to sue the manufacturer?

I don't think anyone in this thread has talked about an exemption from civil liability (sue for money), just criminal liability (go to jail).

Civil liability is the far less controversial issue because it's transferred all the time: governments even mandate that drivers carry insurance for this purpose.

With civil liability transfer, imperfect FSD can still make economic sense. Just as an insurance company needs to collect enough premium to pay claims, the FSD manufacturer would need to reserve enough revenue to pay its expected claims. In this case, FSD doesn't even need to be better than humans to make economic sense, in the same way that bad drivers can still buy (expensive) insurance.

replies(2): >>41895467 #>>41895767 #
262. rainsford ◴[] No.41895375{6}[source]
That data is not an apples to apples comparison unless autopilot is used in exactly the same mix of conditions as human driving. Tesla doesn't share that in the report, but I'd bet it's not equivalent. I personally tend to turn on driving automation features (in my non-Tesla car) in easier conditions and drive myself when anything unusual or complicated is going on, and I'd bet most drivers of Teslas and otherwise do the same.

This is important because I'd bet similar data on the use of standard, non-adaptive cruise control would similarly show it's much safer than human drivers. But of course that would be because people use cruise control most in long-distance highway driving outside of congested areas, where you're least likely to have an accident.
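
A tiny worked example of how the mix of conditions can skew the headline rates (all numbers invented for illustration):

    # Hypothetical per-mile crash rates: highways are much safer than city streets.
    rate_highway = 1 / 1_000_000   # crashes per mile
    rate_city    = 1 / 200_000

    # Humans drive everywhere; suppose Autopilot is engaged almost only on highways.
    human_mix     = {"highway": 0.50, "city": 0.50}
    autopilot_mix = {"highway": 0.95, "city": 0.05}

    def miles_per_crash(mix):
        rate = mix["highway"] * rate_highway + mix["city"] * rate_city
        return 1 / rate

    print(f"human:     1 crash per {miles_per_crash(human_mix):,.0f} miles")
    print(f"autopilot: 1 crash per {miles_per_crash(autopilot_mix):,.0f} miles")
    # Autopilot looks ~2.5x safer here even though it performs identically to the
    # human on every road type; the gap comes entirely from the road mix.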

263. Majromax ◴[] No.41895419{4}[source]
> The bar for self driving isn’t «does it never kill anyone», it’s «does it kill people less than manual driving».

Socially, that's not quite the standard. As a society, we're at ease with auto fatalities because there's often Someone To Blame. "Alcohol was involved in the incident," a report might say, and we're more comfortable even though nobody's been brought back to life. Alternatively, "he was asking for it, walking at night in dark clothing, nobody could have seen him."

This is an emotional standard that speaks to us as human, story-telling creatures that look for order in the universe, but this is not a proper actuarial standard. We might need FSD to be manifestly safer than even the best human drivers before we're comfortable with its universal use.

replies(1): >>41901703 #
264. ndsipa_pomu ◴[] No.41895440{9}[source]
If it's not an argument, then you're just misrepresenting your parent poster's comment by introducing a scenario that never happens.

If you didn't intend your comment as a criticism, then you phrased it poorly. Do you actually believe that your scenario happens in reality?

replies(2): >>41895781 #>>41897990 #
265. everforward ◴[] No.41895466{8}[source]
You could be held liable if it impacts someone else. A restaurant serving improperly cooked chicken that gives people E. coli is liable. Private citizens may not have that duty; I’m not sure.

You would likely also be liable if you overloaded an electrical cable, causing a fire that killed someone.

“Using it in the way it was intended” is largely circular reasoning; of course it wasn’t intended to hurt anyone, so any usage that does hurt someone was clearly unintended. People frequently harm each other by misusing items in ways they didn’t realize were misuses.

> This is untenable. Does nobody want a formally verified avionics system in their airliner, either?

Not for the price it would cost. Airbus is the pioneer here, and even they apply formal verification sparingly. Here’s a paper from a few years ago about it, and how it’s untenable to formally verify the whole thing: https://www.di.ens.fr/~delmas/papers/fm09.pdf

Software development effort generally tends to scale superlinearly with complexity. I am not an expert, but the impression I get is that formal verification grows exponentially with complexity to the point that it is untenable for most things beyond research and fairly simple problems. It is a huge pain in the ass to do something like putting time bounds around reading a config file.

IO also sucks in formal verification from what I hear, and that’s like 80% of what a plane does. Read these 300 signals, do some standard math, output new signals to controls.

These things are much easier to do with tests, but tests only check for scenarios you’ve thought of already

replies(1): >>41898516 #
266. ywvcbk ◴[] No.41895467{9}[source]
> just criminal liability (go to jail).

That just seems like a theoretical possibility (even if that). I don’t see how any engineer or even someone in management could go to jail unless intent or gross negligence can be proven.

> drivers carry insurance for this purpose.

The mandatory limit is extremely low in many US states.

> expected claims

That seems like the problem. It might take a while until we reach an equilibrium of some sort.

> that bad drivers can still buy

That’s still capped by the amount of coverage plus the total assets held by that bad driver. In Tesla’s case there is no real limit (without legislation/established precedent). Juries/courts would likely be influenced by that fact as well.

267. kgermino ◴[] No.41895481{5}[source]
You are responsible (Legally, contractually, morally) for supervising FSD today. If the car decided to stomp on the throttle you are expected to be ready to hit the brakes.

The whole point is that is somewhat of an unreasonable expectation but it’s what Tesla expects you to do today

replies(2): >>41896164 #>>41896283 #
268. the8472 ◴[] No.41895493{3}[source]
We also pay this price with every new human driver we train. again and again.
replies(1): >>41898694 #
269. xondono ◴[] No.41895527{5}[source]
Autopilot, FSD, etc.. are all legally classified as ADAS, so it’s different from e.g. your car not responding to controls.

The liability lies with the driver, and all Tesla needs to prove is that input from the driver will override any decision made by the ADAS.

270. renewiltord ◴[] No.41895652{4}[source]
Sure, we can have a carbon tax on everything. That's fine. And then the nuclear plant has to pay for a Pripyat-sized exclusion zone around it. Just like the guy said about Tesla. All fair.
271. hibikir ◴[] No.41895710{4}[source]
Remember that this is neural networks doing the driving, more than old expert systems: What makes a crash happen is a network that fails to read an image correctly, or a network that fails to capture what is going on when melding input from different sensors.

So the blame won't be on a guy who got an if statement backwards, but on whoever signed off on stopping training, failed to include certain kinds of images in the training set, or some other similar, higher-order problem. Blame will be incredibly nebulous.

replies(1): >>41902224 #
272. ndsipa_pomu ◴[] No.41895747{4}[source]
> What happens when the brakes fail because of bad quality?

Depends on the root cause of the failure. Manufacturing faults would put the liability on the manufacturer; installation mistakes would put the liability on the mechanic; using them past their useful life would put the liability on the owner for not maintaining them in working order.

273. DennisP ◴[] No.41895767{9}[source]
In fact, if you buy your insurance from Tesla, you effectively do put civil responsibility for FSD back in their hands.
274. lcnPylGDnU4H9OF ◴[] No.41895781{10}[source]
It was not a misrepresentation of anything. They were just restating the worry that was stated in the GP comment. https://news.ycombinator.com/item?id=41892572

And the only reason the commenter I linked to had that response is because its parent comment was slightly careless in its phrasing. Probably just change “write” to “deploy” to capture the intended meaning.

275. mensetmanusman ◴[] No.41895814{6}[source]
You can’t guarantee the hardware was properly built.
replies(1): >>41895873 #
276. dboreham ◴[] No.41895828[source]
Only some states allow turn on red, and it's also often overridden by a road sign that forbids. But for me the ultimate test of AGI is four-or-perhaps-three-or-perhaps-two way stop intersections. You have to know whether the other drivers have a stop sign or not in order to understand how to proceed, and you can't see that information. As an immigrant to the US this baffles me, but my US-native family members shrug like there's some telepathy way to know. There's also a rule that you yield to vehicles on your right at uncontrolled intersections (if you can determine that it is uncontrolled...) that almost no drivers here seem to have heard of. You have to eye-ball the other driver to determine whether or not they look like they remember road rules. Not sure how a Tesla will do that.
replies(1): >>41896732 #
277. sigh_again ◴[] No.41895839{5}[source]
>Software I write shouldn't be relied on in critical situations.

Then don't write software to be used in things that are literally always critical situations, like cars.

278. KaiserPro ◴[] No.41895855{5}[source]
Tesla's sensor suite does not support safe FSD.

It relies on inferred depth from a single point of view. This means that the depth/positioning info for the entire world is noisy.

From a safety-critical point of view it's also bollocks, because a single bit of birdshit/smear/raindrop/oil can render the entire system inoperable. Does it degrade safely? Does it fuck.

> recognizes that training exceptional deep neural networks requires vast amounts of data,

You missed good data. Recording generic driver's journeys isn't going to yield good data, especially if the people who are driving aren't very good. You need to have a bunch of decent drivers doing specific scenarios.

Moreover that data isn't easily generalisable to other sensor suites. Add another camera? yeahna, new model.

> Tesla recently held a robotaxi event, explicitly informing investors of their plans

When has Musk ever delivered on time?

> his ability to achieve results

most of those results aren't that great. Tesla isn't growing anymore; it's reliant on state subsidies to be profitable. They still only ship 400k units a quarter, which is tiny compared to VW's 2.2 million.

> attract top engineering and management talent is undeniable

Most of the decent computer vision people are not at Tesla. Hardware-wise, their factories aren't fun places to be. He's a dick to work for, capricious and vindictive.

279. aaronmdjones ◴[] No.41895873{7}[source]
Unless Intel, Motorola, and AMD all conspire to give you a faulty processor, you will get a working primary flight computer.

Besides, this is what flight testing is for. Aviation certification authorities don't let an aircraft serve passengers unless you can demonstrate that all of its safety-critical systems work properly and that it performs as described.

I find it hard to believe that automotive works much differently in this regard, which is what things like crumple zone crash tests are for.

280. modeless ◴[] No.41895883{3}[source]
The increase in the rate of improvement coincides with them finishing their switch to end-to-end machine learning. ML does have scaling laws, actually.

Tesla collects their own data, builds their own training clusters with both Nvidia hardware and their own custom hardware, and deploys their own custom inference hardware in the cars. There is no obstacle to them scaling up massively in all dimensions, which basically guarantees significant progress. Obviously you can disagree about whether that progress will be enough, but based on the evidence I see from using it, I think it will be.
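
For anyone unfamiliar, the empirical "scaling laws" referenced here are power laws of the form below (the form comes from language-model work such as Kaplan et al. 2020; assuming end-to-end driving models follow the same curve is an extrapolation, not an established result):

```latex
% Empirical scaling-law form from language modelling (Kaplan et al., 2020).
% D = dataset size, C = compute; D_c, C_c, \alpha_D, \alpha_C are fitted constants.
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D},
\qquad
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
```

The relevant point is that loss keeps falling smoothly as data and compute grow rather than hitting a hard wall, which is what "basically guarantees significant progress" is leaning on.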

281. bastawhiz ◴[] No.41896004[source]
Perhaps having more accidents doesn't mean more fatal accidents.
282. bastawhiz ◴[] No.41896029[source]
The numbers that are reported aren't abysmal, and people have anecdotally said good things. I was willing to give it a try while being hyper vigilant.
283. modeless ◴[] No.41896045{4}[source]
Elon has said many times that they are willing to license FSD but nobody else has been interested so far. Clearly that will change if they reach their goals.

Also, "years of death and injury" is a bald-faced lie. NHTSA would have shut down FSD a long time ago if it were happening. The statistics Tesla has released to the public are lacking, it's true, but they cannot hide things from the NHTSA. FSD has been on the road for years and a billion miles and if it was overall significantly worse than normal driving (when supervised, of course) the NHTSA would know by now.

The current investigation is about performance under specific conditions, and it's possible that improvement is possible and necessary. But overall crash rates have not reflected any significant extra danger by public use of FSD even in its primitive and flawed form of earlier this year and before.

284. bastawhiz ◴[] No.41896060{4}[source]
I also did write about times when the car would have damaged itself or likely caused an accident, and those are indeed objective problems.
replies(1): >>41896487 #
285. modeless ◴[] No.41896091{6}[source]
I have consistently been critical of Musk for this over the many years it's been happening. Even right now, I don't believe FSD will be unsupervised next year like he just claimed. And yet, I can see the real progress and I am convinced that while it won't be next year, it could absolutely happen within two or three years.

One of these years, he is going to be right. And at that point, the fact that he was wrong for a long time won't diminish their achievement. As he likes to say, he specializes in transforming technology from "impossible" to "late".

> I'm not convinced that autonomous driving that only makes use of cameras will ever be reliably safer than human drivers.

Believing this means that you believe AIs will never match or surpass the human brain. Which I think is a much less common view today than it was a few years ago. Personally I think it is obviously wrong. And also I don't believe surpassing the human brain in every respect will be necessary to beat humans in driving safety. Unsupervised FSD will come before AGI.

286. Filligree ◴[] No.41896160{9}[source]
> Right now I’m in middle of s*t because of I spoked up.

And you believe that, despite experiencing what happens if you speak up?

We shouldn’t simultaneously require people to take heroic responsibility, while also leaving them high and dry if they do.

replies(1): >>41896521 #
287. f1shy ◴[] No.41896164{6}[source]
My example was explicitly NOT about autonomous driving, because the previous comment seems to imply you are responsible for everything.
288. theptip ◴[] No.41896173[source]
Presumably that is exactly when their taxi service rolls out?

While this has a dramatic rhetorical flourish, I don’t think it’s a good proxy. Even if it was safer, it would be an unnecessarily high burden to clear. You’d be effectively writing a free insurance policy which is obviously not free.

Just look at total accidents / deaths per mile driven; it's the obvious and standard metric for measuring car safety. (You need to be careful not to stop the clock as soon as the system disengages, of course.)
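
A minimal sketch of that bookkeeping (my own toy methodology and numbers, not NHTSA's or Tesla's): attribute any crash that happens while the system is engaged, or within a short window after it hands control back, to the system.

```python
# Toy crash-rate bookkeeping. The 5-second attribution window is an assumption;
# the right value is debatable, but it should not be zero.
ATTRIBUTION_WINDOW_S = 5.0

def crashes_per_million_miles(crashes, total_miles):
    """crashes: dicts with 'seconds_since_disengagement' (None means the system
    was still engaged at impact). total_miles: miles driven with the system on."""
    attributed = sum(
        1 for c in crashes
        if c["seconds_since_disengagement"] is None
        or c["seconds_since_disengagement"] <= ATTRIBUTION_WINDOW_S
    )
    return attributed / (total_miles / 1_000_000)

crashes = [
    {"seconds_since_disengagement": None},   # crashed while engaged -> counted
    {"seconds_since_disengagement": 2.0},    # handed back 2 s before impact -> counted
    {"seconds_since_disengagement": 90.0},   # human had been driving a while -> not counted
]
print(crashes_per_million_miles(crashes, total_miles=1_500_000))  # ~1.33 per million miles
```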

289. llamaimperative ◴[] No.41896186{6}[source]
Per the other comment: no, they don't. This data is not enough to evaluate its safety. This is enough data to mislead people who spend <30 seconds thinking about the question though, so I guess that's something (something == misdirection and dishonesty).

You've been lied to.

290. IX-103 ◴[] No.41896217{7}[source]
Waymo is using full lidar and other sensors, whereas Tesla is relying on pure vision systems (to the point of removing radar on newer models). So Tesla is solving a much harder problem.

As for whether it's worthwhile to solve that problem when having more sensors will always be safer, that's another issue...

replies(1): >>41896547 #
291. FireBeyond ◴[] No.41896283{6}[source]
> If the car decided to stomp on the throttle you are expected to be ready to hit the brakes.

Didn't Tesla have an issue a couple of years ago where pressing the brake did not disengage the throttle? I.e., if the car has a bug and puts the throttle to 100% and you stand on the brake, the car should cut the throttle to 0, but instead you just got 100% throttle and 100% brake?

replies(1): >>41897359 #
292. jeffbee ◴[] No.41896324{6}[source]
Anyone who believes Tesla beats Google because they are better at collecting and handling data can be safely ignored.
replies(1): >>41900783 #
293. mannykannot ◴[] No.41896371{4}[source]
> The responsibility doesn't shift, it always lies with the human.

Indeed, and that goes for the person or persons who say that the products they sell are safe when used in a certain way.

294. paulcole ◴[] No.41896487{5}[source]
> It failed with a cryptic system error while driving

I’ll give you this one.

> In my opinion, the default setting accelerates way too aggressively. I'd call myself a fairly aggressive driver and it is too aggressive for my taste

Subjective.

> It started making a left turn far too early that would have scraped the left side of the car on a sign. I had to manually intervene.

Since you intervened and don’t know what would’ve happened, subjective.

> It tried to make way too many right turns on red when it wasn't safe to. It would creep into the road, almost into the path of oncoming vehicles

Subjective.

> It would switch lanes to go faster on the highway, but then missed an exit on at least one occasion because it couldn't make it back into the right lane in time. Stupid.

Objective.

You’ve got some fair complaints but the idea that feeling safe is what’s needed remains subjective.

295. kylecordes ◴[] No.41896509{3}[source]
On one hand, it really has gotten much better over time. It's quite impressive.

On the other hand, I fear/suspect it is asymptotically, rather than linearly, approaching good enough to be unsupervised. It might close half of the remaining gap each year, forever.
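
Numerically, that worry looks like this (purely illustrative): closing half of the remaining gap every year gets ever closer to, but never reaches, "good enough".

```python
# Purely illustrative: halving the remaining gap each year approaches 1.0
# ("good enough to be unsupervised") asymptotically, never reaching it.
gap = 1.0
for year in range(1, 11):
    gap /= 2
    print(f"year {year:>2}: {1 - gap:.4f} of the way there")
```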

296. f1shy ◴[] No.41896521{10}[source]
I do believe I am responsible. I recognize I’m now in a position where I can speak without fear. If I got fired I would throw a party, tbh.
297. ben_w ◴[] No.41896547{8}[source]
Indeed.

While it ought to be possible to solve for just RGB… making it needlessly hard for yourself is a fun hack-day side project, not a valuable business solution.

298. bink ◴[] No.41896711{3}[source]
Rivian also allows recording drives to an SSD. They also just released a feature where you can view the cameras while it's parked. I'm kinda surprised other manufacturers aren't allowing that.
replies(1): >>41896762 #
299. bink ◴[] No.41896732{3}[source]
If it's all-way stop there will often be a small placard below the stop sign. If there's no placard there then (usually) cross traffic doesn't stop. Sometimes there's a placard that says "two-way" stop or one that says "cross traffic does not stop", but that's not as common in my experience.
300. lotsofpulp ◴[] No.41896762{4}[source]
Rivians start at $30k more than Teslas, and while they may be nice, they don’t have the track record yet that Tesla does, and there is a risk the company goes bust since it is currently losing a lot of money.
301. sashank_1509 ◴[] No.41896899{4}[source]
Do we send Boeing engineers to jail when their plane crashes?

Intention matters when passing criminal judgement. If a mother causes the death of her baby due to some poor decision (say, feeding her something contaminated), no one proposes or tries to jail the mother, because they know the intention was the opposite.

replies(1): >>41901773 #
302. lowbloodsugar ◴[] No.41896926{3}[source]
And corporations are people now, so Tesla can go to jail.
303. phito ◴[] No.41897127[source]
I hope I never have to share the road with you. Oh wait, I won't: this craziness is illegal here.
304. 7sidedmarble ◴[] No.41897174{7}[source]
I don't think anyone's seriously suggesting people be held accountable for bugs which are ultimately accidents. But if you knowingly sign off on, oversee, or are otherwise directly responsible for the construction of software that you know has a good chance of killing people, then yes, there should be consequences for that.
replies(1): >>41903564 #
305. FireBeyond ◴[] No.41897290{6}[source]
No, it releases enough data to actively mislead you (because there is no way Tesla's data people are unaware of these factors):

The report measures accidents in FSD mode. Qualifiers to FSD mode: the conditions, weather, road, location, traffic all have to meet a certain quality threshold before the system will be enabled (or not disable itself). Compare Sunnyvale on a clear spring day to Pittsburgh December nights.

There's no such qualifier on the "comparison" group: all drivers, all conditions, all weather, all roads, all locations, all traffic.

It's not remotely comparable, and Tesla's data people are not that stupid, so it's willfully misleading.
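
A toy numerical example of that selection effect (numbers entirely invented, mine, not Tesla's): a system that is worse than humans in both easy and hard conditions can still post a better headline rate if it is only engaged in easy conditions.

```python
# crashes per million miles, invented numbers
human  = {"easy": 1.0, "hard": 8.0}
system = {"easy": 1.5, "hard": 12.0}   # worse than humans in BOTH conditions

# share of miles in each condition: humans drive everywhere,
# the system is only engaged on easy roads
human_miles  = {"easy": 0.5,  "hard": 0.5}
system_miles = {"easy": 0.95, "hard": 0.05}

def aggregate(rates, mix):
    return sum(rates[k] * mix[k] for k in rates)

print(f"human aggregate:  {aggregate(human, human_miles):.2f} per M miles")   # roughly 4.5
print(f"system aggregate: {aggregate(system, system_miles):.2f} per M miles") # roughly 2.0
```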

This report does not include fatalities. It also doesn't consider any incident where there was not airbag deployment to be an accident. Sounds potentially reasonable until you consider:

- First-gen airbag systems were primitive: collision exceeds threshold, deploy. Current vehicle safety systems consider duration of impact, speed, G-forces, amount of intrusion, angle of collision, and a multitude of other factors before deciding what, if any, systems to fire (seatbelt tensioners, airbags, etc.). So hit something at 30 mph with the right variables? Tesla: "this is not an accident".

- Tesla also does not consider "the incident was so catastrophic that the airbags COULD NOT deploy" to be an accident, because "the airbags didn't deploy". That umbrella also covers the egregious case of "systems failed to deploy for any reason, up to and including poor assembly-line quality control", which is likewise not an accident and likewise not counted.

306. blackeyeblitzar ◴[] No.41897359{7}[source]
If it did, it wouldn’t matter. Brakes are required to be stronger than engines.
replies(1): >>41897796 #
307. FireBeyond ◴[] No.41897796{8}[source]
That makes no sense. Yes, they are. But brakes are going to be more reactive and performant with the throttle at 0 than 100.

You can't imagine that the stopping distances will be the same.
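
Back-of-the-envelope numbers (assumed deceleration figures, not measured Tesla data) for why "brakes are stronger than the engine" doesn't make the two cases equivalent: stopping distance is d = v^2 / (2a), so any loss of net deceleration stretches it.

```python
# Assumed figures for illustration only: hard braking with the throttle at 0
# versus the same brakes fighting a motor at full output.
v = 30.0             # m/s, roughly 108 km/h / 67 mph
a_brakes_only = 9.0  # m/s^2, assumed hard braking with the throttle at 0
a_fighting    = 6.5  # m/s^2, assumed net deceleration with the motor still pushing

def stopping_distance(v_mps, decel_mps2):
    return v_mps ** 2 / (2 * decel_mps2)

print(f"throttle at 0:   {stopping_distance(v, a_brakes_only):.0f} m")  # ~50 m
print(f"throttle at 100: {stopping_distance(v, a_fighting):.0f} m")     # ~69 m
```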

308. 8note ◴[] No.41897806{7}[source]
As an alternative framing, though: the company also failed in how it handled the gas leak once it started. They could have had people all over the place guiding others out and away from the leak safely, and keeping the public away while the leak was fixed.

Or they could have bought sufficient buffer land around the pipeline such that a gas leak would be found and stopped before it could explode down the road.

309. FireBeyond ◴[] No.41897808{4}[source]
Musk in 2016 (these are quotes, not paraphrases): "Self driving is a solved problem. We are just tuning the details."

Musk in 2021: "Right now our highest priority is working on solving the problem."

310. FireBeyond ◴[] No.41897823{3}[source]
I would love self-driving to succeed. I should be a Tesla fan, because I'm very much a fan of geekery and tech anywhere and everywhere.

But no. I want self-driving to succeed, and when it does (which I don't think is that soon, because the last 10% takes 90% of the time), I don't think Tesla or their approach will be the "winner".

311. krisoft ◴[] No.41897990{10}[source]
> you're just misrepresenting your parent poster's comment

I did not represent or misrepresent anything. I have asked a question to better understand their thinking.

> If you didn't intend your comment as a criticism, then you phrased it poorly.

Quite probably. I will have to meditate on it.

> Do you actually believe that your scenario happens in reality?

With railway bridges? Never. It would ring alarm bells for everyone from the fabricators to the locomotive engineer.

With software? All the time. Someone publishes some open source code, someone else at a corporation bolts that code into some application, and now the former “toy train bridge” is a load-bearing key component of something the original developer could never have imagined or planned for.

This is not theoretical. Very often I’m the one doing the bolting.

And to be clear: my opinion is that the liability should fall with whoever integrated the code and certified it to be fit for some safety critical purpose. As an example if you publish leftpad and i put it into a train brake controller it is my job to make sure it is doing the right thing. If the train crashes you as the author of leftpad bear no responsibility but me as the manufacturer of discount train brakes do.

312. eric_cc ◴[] No.41898215{5}[source]
Open discussion and sharing different experiences with technology is “annoying noise” to you but not to me. Should slamming technology that works great for others receive no counterpoints, so it all becomes an echo chamber, or what?
313. tensor ◴[] No.41898339{7}[source]
It sounds like you are the one with a deathwish, because objectively by the numbers Autopilot on the highway has greatly reduced death. So you are literally advocating for more death.

You have two imperfect systems for highway driving: Autopilot with human oversight, and humans. The first has far far less death. Yet you are choosing the second.

314. A4ET8a8uTh0 ◴[] No.41898367{8}[source]
I hesitate, because I shudder at the government deciding which algorithm is best for a given scenario ( because that is effectively where it would go ). Maybe the distinction is the moment money changes hands based on the product?

I am not an engineer, but I have watched clearly bad decisions take place from a technical perspective, where a person with a title that went to their head and a bonus not aligned with the right incentives messed things up for us. Maybe some professionalization of software engineering is in order.

replies(1): >>41902255 #
315. brightball ◴[] No.41898444{7}[source]
They aren’t frequent but in the last 15-16 years there have been 2 outages that lasted almost 2 weeks in some areas around here. The first one was in the winter and the only gas appliance I had was a set of gas logs in the den.

It heated my whole house and we used a pan to cook over it. When we moved the first thing I did was install gas logs, gas stove and a gas water heater.

It’s nice to have options and backup plans. That’s one of the reasons I was a huge fan of the Chevy Volt when it first came out. I could easily take it on a long trip but still averaged 130mpg over 3 years (twice). Now I’ve got a Tesla and when there are fuel shortages it’s also really nice.

A friend of ours owns a cybertruck and was without power for 9 days, but just powered the whole house with the cybertruck. Every couple of days he’d drive to a supercharger station to recharge.

316. potato3732842 ◴[] No.41898481{5}[source]
Oh, I'm well aware how things work.

But we should look down on them and speak poorly of them the same as we look down on and speak poorly of everyone else who's discourteous in public spaces.

317. kergonath ◴[] No.41898516{9}[source]
> You could be held liable if it impacts someone else. A restaurant serving improperly cooked chicken that gives people E Coli is liable. Private citizens may not have that duty, I’m not sure.
>
> You would likely also be liable if you overloaded an electrical cable, causing a fire that killed someone.

Right. But neither of these examples are following guidelines or proper use. If I turn the car into people on the pavement, I am responsible. If the steering wheel breaks and the car does it, then the manufacturer is responsible (or the mechanic, if the steering wheel was changed). The question at hand is whose responsibility it is if the car’s software does it.

> “Using it in the way it was intended” is largely circular reasoning; of course it wasn’t intended to hurt anyone, so any usage that does hurt someone was clearly unintended.

This is puzzling. You seem to be conflating use and consequences and I am not quite sure how you read that in what I wrote. Using a device normally should not make it kill people, I guess at least we can agree on that. Therefore, if a device kills people, then it is either improper use (and the fault of the user), or a defective device, at which point it is the fault of the designer or manufacturer (or whoever did the maintenance, as the case might be, but that’s irrelevant in this case).

Each device has a manual and a bunch of regulations about its expected behaviour and standard operating procedures. There is nothing circular about it.

> Not for the price it would cost.

Ok, if you want to go full pedantic, note that I wrote “want”, not “expect”.

318. dham ◴[] No.41898653[source]
Autopilot is just adaptive cruise control with lane keeping. Literally every car has this now. I don't see people on Toyota, Honda, or Ford forums complaining that a table-stakes feature doesn't adjust speed or change lanes when another car is merging in. Do you know how insane that sounds? I'm assuming you're in software since you're on Hacker News.
replies(2): >>41899897 #>>41901125 #
319. dham ◴[] No.41898678{4}[source]
A lot of haters mistake "the car is doing something I don't like or wouldn't do" for a safety-critical disengagement.

If you treat the car like it's a student driver or someone else driving, disengagements will go down. If you treat it like you're the one driving, there's always something to complain about.

320. dham ◴[] No.41898694{4}[source]
You won't be able to bring logic to people with Elon derangement syndrome.
321. iknowstuff ◴[] No.41899207{3}[source]
It's pathetic: <40 mph, only while following a vehicle directly ahead, basically only usable in stop-and-go traffic.

https://www.notebookcheck.net/Tesla-vs-Mercedes-self-driving...

replies(1): >>41899255 #
322. jefftk ◴[] No.41899255{4}[source]
The Mercedes system is definitely, as I said, very limited. But within its operating conditions the Mercedes system is much more useful: you can safely and legally read, work, or watch a movie while in the driver's seat, literally not paying any attention to the road.
323. twoWhlsGud ◴[] No.41899897{3}[source]
My Audi doesn't advertise its predictive cruise control as Full Self Driving. So expectations are more controlled...
replies(1): >>41901111 #
324. Der_Einzige ◴[] No.41900222{3}[source]
Thank god someone else said it.

I want some of these Tesla bulls to PROVE that they are actually "not intervening". I think the ones who claim they go hours without intervening are liars.

replies(1): >>41904056 #
325. ethbr1 ◴[] No.41900783{7}[source]
The argument wouldn't be "better at" but simply "more".

Sensor platforms deployed at scale, that you have the right to take data from, are difficult to replicate.

replies(1): >>41905959 #
326. Dylan16807 ◴[] No.41901038{6}[source]
That's liability for defective design, not any time it fails as suggested above.
327. Dylan16807 ◴[] No.41901080{5}[source]
The level where someone personally uses it and the level where they accept it being on the road are different. Beating the average driver is all about the latter.

Also I will happily use self driving that matches the median driver in safety.

328. Dylan16807 ◴[] No.41901111{4}[source]
They're not talking about FSD.
329. Dylan16807 ◴[] No.41901125{3}[source]
It sounds zero insane. Adaptive cruise control taking into account merging would be great. And it's valid to complain about automations that make your car worse at cooperating.
replies(1): >>41903728 #
330. valval ◴[] No.41901681{5}[source]
Like I said, the time for studies is in the future. FSD is a product in development, and they know which stats they need to collect in order to track progress.

You’re arguing for something that: 1. Isn’t under contention and 2. Isn’t rooted in the real world.

You’re right FSD isn’t an autonomous driving system. It’s not meant to be, right now.

replies(1): >>41905612 #
331. LadyCailin ◴[] No.41901703{5}[source]
That may be true, but I think I personally would find it extremely hard to argue against when the numbers are clearly showing that it’s safer. I think once the numbers are unambiguously showing that autopilots are safer, it will be super hard for people to argue against it. Of course there is a huge intermediate state where the numbers aren’t clear (or at least not clear to the average person), and during that stage, emotions may rule the debate. But if the underlying data is there, I’m certain car companies can change the narrative - just look at how much America hates public transit and jaywalkers.
332. davkan ◴[] No.41901773{5}[source]
This is why we have criminal negligence. Did the mother open a sealed package from the grocery store or did she find an open one on the ground?

Harder to apply to software, but maybe there should be some legal liability involved when a sysadmin uses admin/admin and health information is leaked.

Some Boeing employees should absolutely be in jail over the MCAS system and the hundreds of people who died as a result. But the actions there go beyond negligence anyway.

333. suggeststrongid ◴[] No.41902130[source]
> I'd call myself a fairly aggressive driver

This is puzzling. It’s as if it was said without apology. How about not endangering others on the road with manual driving before trying out self driving?

334. snovv_crash ◴[] No.41902224{5}[source]
This is the difference between a Professional Engineer (i.e. the protected term) and everyone else who calls themselves an engineer. They can put their signature on a system and are then criminally liable if it fails.

Bridges, elevators, buildings, ski lifts etc. all require a professional engineer to sign off on them before they can be built. Maybe self driving cars need the same treatment.

335. snovv_crash ◴[] No.41902255{9}[source]
This isn't a matter of the government saying what you need to do. This is a matter of being held criminally liable if people get hurt.
replies(1): >>41903713 #
336. FeepingCreature ◴[] No.41902386{5}[source]
To be fair, Tesla FSD on the German Autobahn = impossible because it hasn't been released there yet, precisely because it isn't trained on German roads.
replies(1): >>41908133 #
337. _rm ◴[] No.41903272{6}[source]
You made all that up out of nothing. They'd only go to jail if it was intentional.

The only case where a computer programmer "kills someone" is where he hacks into a system and interferes with it in a way that foreseeably leads to someone's death.

Otherwise, the user voluntarily assumed the risk.

Frankly if someone lets a computer drive their car, given their own ample experiences of computers "crashing", it's basically a form of attempted suicide.

338. misiti3780 ◴[] No.41903313{4}[source]
Time will show I'm right and you're wrong.
339. lucianbr ◴[] No.41903350{5}[source]
It's so obviously cherry-picking, I have no idea what you are even thinking. To not be cherry-picking would mean that it's actually ready and works fine in all situations, and there's no way Musk would not shout that out from rooftops and sell it yesterday.

Obviously it works some of the time on some roads, but not all the time on all roads. A video of it working, on a road where it works, is cherry-picking. Look up what the term means.

340. thunky ◴[] No.41903564{8}[source]
Exactly. Just like most car accidents don't result in prison or death. But negligence or recklessness can do it.
341. A4ET8a8uTh0 ◴[] No.41903713{10}[source]
You are only technically correct. And even then, in terms of civics, by having people held criminally liable the government is telling you what to do ( or, technically, what not to do ). Note that no other body can ( legally ) do it. In fact, false imprisonment is itself a punishable offense, but I digress..

Now, we could argue over whether that is/should/could/would be the law of the land, but have you considered how it would be enforced?

I mean, I can tell you first hand what it looks like, when government gives you a vague law for an industry to figure out and an enforcement agency with a broad mandate.

That said, I may have exaggerated a little bit on the algo choice. I was shooting for ghoulish overkill.

replies(1): >>41905305 #
342. dham ◴[] No.41903728{4}[source]
This entire thread is people complaining about automation and FSD. Then you want an advanced feature that requires a large amount of AI as a toss-in on top of basic adaptive cruise control. Do you realize how far ahead of everyone else Tesla is?
replies(1): >>41906776 #
343. eric_cc ◴[] No.41904056{4}[source]
> tesla bulls

I’m not a Tesla financial speculator.

Consider the effort it would take a normal individual to prove how well their car's FSD works for them. Now consider how somebody with no investment in the technology stands to benefit from that level of effort. That’s ridiculous. If you’re curious about the technology, go spend time with it. That’s a better way to gather data. And then you don’t have to troll forums calling people liars.

replies(1): >>41905397 #
344. freejazz ◴[] No.41905305{11}[source]
> You are only technically correct

You clearly have no idea how civil liability works. At all.

replies(1): >>41905548 #
345. Der_Einzige ◴[] No.41905397{5}[source]
I did! I own a comma! I’ve put hundreds of hours into using Tesla's FSD. I know more about it than most in this thread, and I repeat: folks who claim it’s as good as they say it is are liars. Yes, even with the 12.6 firmware. Yes, even with the 13.0 firmware that’s about to come out.
346. A4ET8a8uTh0 ◴[] No.41905548{12}[source]
I am here to learn. You can help me by educating me. I do mean it sincerely. If you think you have a grasp on the subject, I think HN as a whole could benefit from your expertise.
replies(1): >>41905564 #
347. freejazz ◴[] No.41905564{13}[source]
Civil liability isn't determined by the "gov't"; it's determined by a jury of your peers. More interesting to me is how you came to the impression that you had any idea what you were talking about, to the point that you felt justified in making your post.
replies(1): >>41905827 #
348. freejazz ◴[] No.41905612{6}[source]
> You’re right FSD isn’t an autonomous driving system

Oh, weird. Are you not aware it's called Full SELF Driving?

replies(1): >>41907254 #
349. A4ET8a8uTh0 ◴[] No.41905827{14}[source]
My friend. Thank you. It is not often I get to be myself lately. Allow me to retort in kind.

Your original response to my response was in turn a response to the following sentence by "snovv_crash":

"This isn't a matter of the government saying what you need to do. This is a matter of being held criminally liable if people get hurt."

I do want to point out that from the beginning the narrow scope of this argument defined the type of liability as criminal and not civil as your post suggested. In other words, your whole point kinda falls apart as I was not talking about civil liability, but about the connection of civics and government's ( or society's depending on your philosophical bent ) monopoly on violence.

It is possible that the word civic threw you off, but I was merely referring to the study of the rights, duties, and obligations of citizens in a society. Surely, you would agree that writing code that kills people would be under the purview of the rights, duties and obligations of individuals in a society?

In either case, I am not sure what you are arguing for here. It is not just that you are wrong, but you seem to be oddly focused on trying to.. I'm not even sure. Maybe I should ask you instead.

<<More interesting to me is how you came to the impression that you had any idea what you were talking about to the point you felt justified in making your post.

Yes, good question. Now that I have replied, I feel it would not be a bad idea ( edit: for you ) to present why you feel ( and I use that verb consciously ) you can just throw word salad around willy-nilly, not only with confidence but, clearly, with a justification worthy of a justicar.

tldr: You are wrong, but can you even accept that you are wrong.. now that will be an interesting thing to see.

<< that you had any idea

I am a guy on the internet man. No one has any idea about anything. Cheer up:D

replies(1): >>41906548 #
350. jeffbee ◴[] No.41905959{8}[source]
For most organizations data is a burden rather than a benefit. Tesla has never demonstrated that they can convert data to money, while that is the sole purpose of everything Google has built for decades.
351. andrewaylett ◴[] No.41906386{5}[source]
The point at which we decide that a defect is serious enough to transfer liability is quite case-dependent. If you knew that the throttle was glitchy but hadn't done anything to fix it, yes. If it affected every car from the manufacturer, it's obviously their fault -- but if you ignore the recall then it might be your fault again?

In this case, the behaviour of the system and the responsibility of the driver are well established. I'd actually quite like it if Tesla were held responsible for their software, but they somehow continue to skirt the line: they require the driver to retain vigilance, so any system failures are the (legal) fault of the human, not the car, despite the feature being advertised as "Full Self Driving".

replies(1): >>41906722 #
352. freejazz ◴[] No.41906548{15}[source]
In a criminal court, guilt (not liability) is also determined by a jury of your peers, and not the gov't.
353. dragonwriter ◴[] No.41906722{6}[source]
> The point at which we decide that a defect is serious enough to transfer liability is quite case-dependent. If you knew that the throttle was glitchy but hadn't done anything to fix it, yes. If it affected every car from the manufacturer, it's obviously their fault -- but if you ignore the recall then it might be your fault again?

In most American jurisdictions' liability law, the more usual thing is to expand liability, rather than transferring liability. The idea that exactly one -- or at most one -- person or entity should be liable for any given portion of any given harm is a common popular one in places like HN, but the law is much more accepting of the situation where lots of people may have overlapping liability for the same harm, with none relieving the others.

The liability of a driver for maintenance and operation within the law is not categorically mutually exclusive with the liability of the manufacturer (and, indeed, every party in the chain of commerce) for manufacturing defects.

If a car is driven in a way that violates the rules of the road and causes an accident and a manufacturing defect in a driver assistance system contributed to that, it is quite possible for the driver, manufacturer of the driver assistance system, manufacturer of the vehicle (if different from that of the assistance system) and seller of the vehicle to the driver (if different from the last two), among others, to all be fully liable to those injured for the harms.

354. Dylan16807 ◴[] No.41906776{5}[source]
A large amount of AI to shift slightly forward or backward based on turn signals? No.
355. valval ◴[] No.41907254{7}[source]
Does the brand name matter? The description should tell you all you need to know when making a purchase decision.
replies(1): >>41907371 #
356. freejazz ◴[] No.41907371{8}[source]
Yes, a company's marketing is absolutely part of the representations the company makes about a product they sell in the context of a product liability lawsuit.
357. ◴[] No.41908133{6}[source]