
410 points jjulius | 4 comments
bastawhiz ◴[] No.41889192[source]
Lots of people are asking how good self-driving has to be before we tolerate it. I got a one-month free trial of FSD and turned it off after two weeks. Quite simply: it's dangerous.

- It failed with a cryptic system error while driving

- It started making a left turn far too early, which would have scraped the left side of the car on a sign. I had to manually intervene.

- In my opinion, the default setting accelerates way too aggressively. I'd call myself a fairly aggressive driver and it is too aggressive for my taste.

- It tried to make way too many right turns on red when it wasn't safe to. It would creep into the road, almost into the path of oncoming vehicles.

- It didn't merge left to make room for vehicles merging onto the highway. The vehicles then tried to cut in. The system should have avoided an unsafe situation like this in the first place.

- It would switch lanes to go faster on the highway, but then missed an exit on at least one occasion because it couldn't make it back into the right lane in time. Stupid.

After the system error, I lost all trust in FSD from Tesla. Until I ride in one and feel safe, I can't have any faith that this is a reasonable system. Hell, even autopilot does dumb shit on a regular basis. I'm grateful to be getting a car from another manufacturer this year.

replies(24): >>41889213 #>>41889323 #>>41889348 #>>41889518 #>>41889642 #>>41890213 #>>41890238 #>>41890342 #>>41890380 #>>41890407 #>>41890729 #>>41890785 #>>41890801 #>>41891175 #>>41892569 #>>41894279 #>>41894644 #>>41894722 #>>41894770 #>>41894964 #>>41895150 #>>41895291 #>>41895301 #>>41902130 #
TheCleric ◴[] No.41890342[source]
> Lots of people are asking how good the self driving has to be before we tolerate it.

There’s a simple answer to this. As soon as it’s good enough for Tesla to accept liability for accidents. Until then if Tesla doesn’t trust it, why should I?

replies(9): >>41890435 #>>41890716 #>>41890927 #>>41891560 #>>41892829 #>>41894269 #>>41894342 #>>41894760 #>>41896173 #
bdcravens ◴[] No.41890927[source]
The liability for killing someone can include prison time.
replies(3): >>41891164 #>>41894710 #>>41896926 #
TheCleric ◴[] No.41891164[source]
Good. If you write software that people rely on with their lives, and it fails, you should be held liable for that criminally.
replies(11): >>41891445 #>>41891631 #>>41891844 #>>41891890 #>>41892022 #>>41892572 #>>41894610 #>>41894812 #>>41895100 #>>41895710 #>>41896899 #
dmix ◴[] No.41891631[source]
Drug companies and the FDA (circa 1906) play a very dangerous and delicate dance all the time when releasing new drugs to the public. But for over a century now we've managed to figure it out without holding pharma companies criminally liable for every death.

> If you write software that people rely on with their lives, and it fails, you should be held liable for that criminally.

Easier to type those words on the internet than to make them policy IRL. That sort of policy IRL would likely result in a) killing off all commercial efforts to solve traffic deaths via technology, along with vast amounts of other semi-autonomous technology like farm equipment, or b) governments/car companies mandating filming the driver every time they turn it on, because it's technically supposed to be human-assisted autopilot in these testing stages (outside restricted pilot programs like Waymo taxis). Those distinctions would matter in a criminal courtroom, even if humans can't always be relied upon to follow the instructions on the bottle's label.

replies(3): >>41892028 #>>41892069 #>>41893456 #
ryandrake ◴[] No.41892028{3}[source]
Your take is understandable and not surprising on a site full of software developers. Somehow, the general software industry has ingrained this pessimistic and fatalistic dogma that says bugs are inevitable and there’s nothing you can do to prevent them. Since everyone believes it, it is a self-fulfilling prophecy and we just accept it as some kind of law of nature.

Holding software developers (or their companies) liable for defects would definitely kill off a part of the industry: the very large part that YOLOs code into production and races to get features released without rigorous and exhaustive testing. And why don’t they spend 90% of their time testing and verifying and proving their software has no defects? Because defects are inevitable and they’re not held accountable for them!

replies(4): >>41892592 #>>41892653 #>>41893464 #>>41893804 #
1. everforward ◴[] No.41892653{4}[source]
It is true of every field I can think of. Food gets salmonella and whatnot frequently. Surgeons forget sponges inside of people (and worse). Truckers run over cars. Manufacturers miss some failures in QA.

Literally everywhere else, we accept that the costs of 100% safety are just unreasonably high. People would rather have a mostly safe device for $1 than a definitely safe one for $5. No one wants to pay to have every head of lettuce tested for E. coli, or truckers to drive at 10mph so they can't kill anyone.

Software isn’t different. For the vast majority of applications where the costs of failure are low to none, people want it to be free and rapidly iterated on even if it fails. No one wants to pay for a formally verified Facebook or DoorDash.

replies(1): >>41893620 #
2. kergonath ◴[] No.41893620[source]
> Literally everywhere else, we accept that the costs of 100% safety are just unreasonably high.

Yes, but also in none of these situations would the consumer/customer/patient be held responsible. I don’t expect a system to be perfect, but I won’t accept any liability if it malfunctions as I use it the way it is intended. And even worse, I would not accept that the designers evade their responsibilities if it kills someone I know.

As the other poster said, I am happy to consider it safe enough the day the company accepts to own its issues and the associated responsibility.

> No one wants to pay for a formally verified Facebook or DoorDash.

This is untenable. Does nobody want a formally verified avionics system in their airliner, either?

replies(1): >>41895466 #
3. everforward ◴[] No.41895466[source]
You could be held liable if it impacts someone else. A restaurant serving improperly cooked chicken that gives people E. coli is liable. Private citizens may not have that duty, I'm not sure.

You would likely also be liable if you overloaded an electrical cable, causing a fire that killed someone.

“Using it in the way it was intended” is largely circular reasoning; of course it wasn’t intended to hurt anyone, so any usage that does hurt someone was clearly unintended. People frequently harm each other by misusing items in ways they didn’t realize were misuses.

> This is untenable. Does nobody want a formally verified avionics system in their airliner, either?

Not for the price it would cost. Airbus is the pioneer here, and even they apply formal verification sparingly. Here’s a paper from a few years ago about it, and how it’s untenable to formally verify the whole thing: https://www.di.ens.fr/~delmas/papers/fm09.pdf

Software development effort generally tends to scale superlinearly with complexity. I am not an expert, but the impression I get is that formal verification grows exponentially with complexity to the point that it is untenable for most things beyond research and fairly simple problems. It is a huge pain in the ass to do something like putting time bounds around reading a config file.

IO also sucks in formal verification from what I hear, and that’s like 80% of what a plane does. Read these 300 signals, do some standard math, output new signals to controls.
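
(To make that concrete, here is a deliberately toy sketch in C of that kind of loop. The function name, signal count, and actuator limit are all made up; the comments point at the sort of properties a formal-verification pass would have to prove about even something this small.)

    #include <stdint.h>

    #define NUM_SIGNALS  300
    #define ACTUATOR_MAX 1000

    /* Toy "read signals, do some standard math, output a command" step.
     * A formal-verification pass would have to prove, among other things:
     *   - the loop terminates,
     *   - the accumulator cannot overflow for any admissible input
     *     (300 * 32767 still fits in an int32_t, but someone has to show that),
     *   - the returned command is always within actuator limits,
     * and it has to hold on every execution path, not just the paths a
     * test suite happens to exercise. */
    int32_t control_step(const int16_t signals[NUM_SIGNALS])
    {
        int32_t acc = 0;

        for (int i = 0; i < NUM_SIGNALS; i++)
            acc += signals[i];

        int32_t command = acc / NUM_SIGNALS;   /* crude averaging "filter" */

        /* Clamp so that -ACTUATOR_MAX <= command <= ACTUATOR_MAX holds
         * unconditionally; this is the postcondition a prover would check. */
        if (command > ACTUATOR_MAX)  command = ACTUATOR_MAX;
        if (command < -ACTUATOR_MAX) command = -ACTUATOR_MAX;

        return command;
    }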

These things are much easier to do with tests, but tests only check for scenarios you've thought of already.
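
(And the flip side with plain tests, reusing the made-up control_step above: a couple of hand-picked cases pass and the suite is green, which says nothing about the inputs nobody thought to write down.)

    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    /* Assumes control_step, NUM_SIGNALS and ACTUATOR_MAX from the sketch
     * above. Both scenarios pass, so the suite is "green", but nothing
     * here says anything about the other ~2^4800 possible input vectors;
     * that is the gap formal verification is trying to close. */
    static void test_all_zero(void)
    {
        int16_t signals[NUM_SIGNALS];
        memset(signals, 0, sizeof signals);
        assert(control_step(signals) == 0);
    }

    static void test_saturation(void)
    {
        int16_t signals[NUM_SIGNALS];
        for (int i = 0; i < NUM_SIGNALS; i++)
            signals[i] = INT16_MAX;
        assert(control_step(signals) == ACTUATOR_MAX);
    }

    int main(void)
    {
        test_all_zero();
        test_saturation();
        return 0;
    }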

replies(1): >>41898516 #
4. kergonath ◴[] No.41898516{3}[source]
> You could be held liable if it impacts someone else. A restaurant serving improperly cooked chicken that gives people E. coli is liable. Private citizens may not have that duty, I'm not sure.

> You would likely also be liable if you overloaded an electrical cable, causing a fire that killed someone.

Right. But neither of these examples are following guidelines or proper use. If I turn the car into people on the pavement, I am responsible. If the steering wheel breaks and the car does it, then the manufacturer is responsible (or the mechanic, if the steering wheel was changed). The question at hand is whose responsibility it is if the car’s software does it.

> “Using it in the way it was intended” is largely circular reasoning; of course it wasn’t intended to hurt anyone, so any usage that does hurt someone was clearly unintended.

This is puzzling. You seem to be conflating use and consequences and I am not quite sure how you read that in what I wrote. Using a device normally should not make it kill people, I guess at least we can agree on that. Therefore, if a device kills people, then it is either improper use (and the fault of the user), or a defective device, at which point it is the fault of the designer or manufacturer (or whoever did the maintenance, as the case might be, but that’s irrelevant in this case).

Each device has a manual and a bunch of regulations about its expected behaviour and standard operating procedures. There is nothing circular about it.

> Not for the price it would cost.

Ok, if you want to go full pedantic, note that I wrote “want”, not “expect”.