

Waymos crash less than human drivers

(www.understandingai.org)
345 points rbanffy | 14 comments
1. hmmm-i-wonder ◴[] No.43493436[source]
Is crash the best indicator of success?

I know some really bad drivers who have almost no 'accidents' but have caused, or nearly caused, many. They cut off others, get confused in traffic, make wrong decisions, etc.

Waymos, judging by media attention at least, have a habit of confusion and other highly undesirable behaviour (one example making the rounds: a car circling a roundabout constantly), but that doesn't qualify as a 'crash'.

replies(3): >>43493571 #>>43493603 #>>43493809 #
2. ajross ◴[] No.43493571[source]
> Is crash the best indicator of success?

Well, yeah? Or rather, if it's not, then I think the burden of proof is on the person making that argument.

Even taking your complaints at their maximum impact: would you rather be delayed by a thousand confused robots or run over by one certain human?

replies(2): >>43493751 #>>43494141 #
3. sbuttgereit ◴[] No.43493603[source]
I expect the media don't find stories of Waymos successfully moving from point A to B without incident nearly as compelling as the cases where that doesn't happen.

I encounter Waymo cars pretty much every time I drive somewhere in San Francisco (somewhat frequently, since I live there). Out of hundreds of encounters, I can only think of a single instance where I thought, "what is that thing doing?!"... and in that case it was a very busy 5-way intersection where most of the human-driven cars were themselves violating some rule trying to get around turning vehicles and such. When I need a ride, I can also say I'm only using Waymo, unless I'm going somewhere they don't go, like the airport; in that regard, I feel much more secure in a Waymo than with Lyft or Uber.

4. potato3732842 ◴[] No.43493751[source]
>Even taking your complaints at their maximum impact: would you rather be delayed by a thousand confused robots or run over by one certain human?

Depending on the relative rates and costs of each type of mishap it could go either way. There is a crossover point somewhere.

The fact that you're coming right out of the gate with a false dichotomy, and an appeal to emotion on top, tells me that deep down you know this.
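
The "crossover point" argument can be sketched numerically. Every rate and cost below is a made-up illustrative number, not a real Waymo or human-driver statistic:

```python
# A minimal sketch of the crossover-point argument.
# All rates and costs are invented for illustration only.

def expected_cost(rates, costs):
    """Expected cost per million miles: sum over mishap types of rate * cost."""
    return sum(rates[kind] * costs[kind] for kind in rates)

# Hypothetical mishap rates (events per million miles) and costs (arbitrary units).
costs = {"crash": 1000.0, "confusion_delay": 1.0}
human = {"crash": 4.0, "confusion_delay": 10.0}
robot = {"crash": 1.0, "confusion_delay": 50.0}

human_cost = expected_cost(human, costs)
robot_cost = expected_cost(robot, costs)

# Crossover: how many confusion delays (per million miles) the robot could
# cause before its expected cost matches the human's, holding crash rates fixed.
crossover = (human_cost - robot["crash"] * costs["crash"]) / costs["confusion_delay"]
```

With these invented numbers, the robot can rack up thousands of delay events for each crash avoided before the totals cross, which is the sense in which the crossover point can sit far from either extreme.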

replies(1): >>43493868 #
5. perlgeek ◴[] No.43493809[source]
If you follow Utilitarian ethics, just ask yourself: how much (negative) utility do you assign to...

* a crash with a fatality

* a crash with an injury

* any crash at all

* a driverless car going around a roundabout constantly

For me, the answer is pretty clear: crashes per distance traveled remains the most important metric.
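
One way to read that framing is as a weighted sum of event counts. The weights and counts below are invented purely for illustration, not real data:

```python
# Hedged sketch of the negative-utility weighting above.
# All weights and event counts are invented for illustration only.

NEG_UTILITY = {
    "fatal_crash": 10_000.0,   # a crash with a fatality
    "injury_crash": 500.0,     # a crash with an injury
    "any_crash": 10.0,         # any crash at all
    "roundabout_loop": 1.0,    # circling a roundabout constantly
}

def total_negative_utility(event_counts):
    """Weighted sum of events; lower is better."""
    return sum(NEG_UTILITY[kind] * n for kind, n in event_counts.items())

# Hypothetical event counts per 100 million miles.
human_driver = {"fatal_crash": 1.2, "injury_crash": 50, "any_crash": 200, "roundabout_loop": 0}
driverless = {"fatal_crash": 0.2, "injury_crash": 10, "any_crash": 60, "roundabout_loop": 40}
```

Because crashes dominate the weights, even a large count of roundabout loops barely moves the total, which is why crashes per distance traveled ends up as the deciding metric under this framing.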

replies(1): >>43494475 #
6. mlyle ◴[] No.43493868{3}[source]
> There is a crossover point somewhere.

I think his point is explicitly that the crossover point is rather high.

And let's not forget that crashes, in addition to their other costs, do cause significant delays themselves.

7. indiosmo ◴[] No.43494141[source]
I suppose the argument is that while the robot itself might not have run over anyone, it might have caused someone else on the road to do it.

So if we're just measuring how many crashes the robot has been involved in, we can't account for how many crashes the robot indirectly caused.

replies(1): >>43494272 #
8. ajross ◴[] No.43494272{3}[source]
> I suppose the argument is that while the robot itself might not have run over anyone, it might have caused someone else on the road to do it.

And I repeat, that's a contrived enough scenario that I think you need to come to the table with numbers and evidence if you want to make it. Counting crashes has been The Way Transportation Safety Has Been Done for the better part of a century now and you don't just change methodology midstream because you're afraid of the Robot Overlord in the driver's seat.

Science always has a place at the table. Ludditism does not.

replies(1): >>43494560 #
9. danaris ◴[] No.43494475[source]
This is utterly missing the point of the parent.

Just because this car doesn't crash, that doesn't mean it doesn't cause crashes (with fatalities, injuries, or just property damage), and that's inherently much harder to measure.

You can only develop an effective heuristic function if you are actually taking into account all the meaningful inputs.

replies(1): >>43494700 #
10. hmmm-i-wonder ◴[] No.43494560{4}[source]
I wouldn't say it's contrived, but I agree it's important to take such questions and back them up with data.

My question is open in that we don't really HAVE the data to measure that statement in any meaningful way. The proper response is "that could be valid; we need to find a way to measure it".

Resorting to calling me a luddite because I question whether a metric is really an accurate measure of success (one that I apply to HUMAN drivers as an example first...) really doesn't reflect any scientific approach or method I'm aware of, but feel free to point me to references.

replies(1): >>43512055 #
11. vpribish ◴[] No.43494700{3}[source]
Of course, you are the best kind of correct, but to advance the whole discussion, I think we can agree that there is no trail of carnage in the wake of Waymos, leaving only them unscathed.

I live in SF. Waymos are far more predictable and less reckless than the meatwagons. They do not cause accidents with their occasionally odd behavior.

And to add another perspective: as a cyclist and pedestrian, I put Waymos even further ahead. I have had crashes due to the misbehavior of cars (specifically, poor lane keeping around curves), but Waymos just don't cause those sorts of problems.

replies(1): >>43495262 #
12. hmmm-i-wonder ◴[] No.43495262{4}[source]
This is exactly the question I was asking; thanks for your input. I know the highlighted examples of 'bad behaviour' in Waymos are somewhat sensationalized, but it's hard to quantify how that translates into impacts on other users of the environment. I agree humans are very prone to this sort of bad driving, which is what made me ask the question about Waymos to begin with. Specifically things like lane keeping, impacts on cyclists, pedestrians, etc... the things you mention.

Although my brain can't help but see Waymos more as "meatwagons" than human-driven cars, I get your point :)

I would be curious to see the relative levels of driver assist and their impacts on things like that, outside of crash and injury statistics, but it would be very hard to measure and quantify.

13. ajross ◴[] No.43512055{5}[source]
> The proper response is "that could be valid, we need to find a way to measure it".

With all respect, no. You don't treat every possible hypothesis as potentially valid. That's conspiracy logic. Valid ideas are testable ones. If you're not measuring it, then the "proper" response is to keep silent, or propose an actual measurement.

And likewise, a proper response is emphatically not to respond to an actual measurement within a commonly accepted paradigm (cf. the linked headline above) with a response like "well, this may not be measuring the right thing, so I'm going to ignore it". That is pretty much the definition of ludditism.

replies(1): >>43545417 #
14. hmmm-i-wonder ◴[] No.43545417{6}[source]
>You don't treat every possible hypothesis as potentially valid.

Wrong: you consider and reject hypotheses, if we're being specific about the scientific method. In this case, this is a testable question that could be measured, but common metrics don't accurately measure it for humans to compare against. There is no valid rejection of the hypothesis without more data.

The utility of the hypothesis, and the work required to test it, are among the many things considered; that's another discussion.

But actually considering and discussing them IS the scientific and rational method. Your knee-jerk reactions are the closest thing to ludditism in this whole conversation.

>And likewise a proper response is emphatically not to respond to an actual measurement within a commonly accepted paradigm (c.f. the linked headline above) with a response like "well this may not be measuring the right thing so I'm going to ignore it". That is pretty much the definition of ludditism.

Again, wrong. In almost every case, the correct first question is "are we measuring the right thing?" Again, if we are talking about engineering and science, that's always valid and should ALWAYS be considered. I also never said we should IGNORE crashes; I asked whether it's the BEST metric for success on its own.

And for your third incorrect point:

>That is pretty much the definition of ludditism.

You've obviously missed my point in every post, including the one above. The question of whether "crashes" is the best metric is being applied to humans and technology alike; there is nothing anti-technology going on here.

Your emotional reaction to someone questioning something you obviously care about seems to have shut down your logical brain. Take a deep breath and just stop digging.