> This is slightly worse than last September, when Waymo estimated an 84 percent reduction in airbag crashes over Waymo’s first 21 million miles.
nitpick: Is it really slightly worse, or is it "effectively unchanged" with such sparse numbers? At a glance, the sentence is misleading even though it might be correct on paper. Could've said: "This improvement holds from last September..."
But Waymo vehicles were recording and tracking all the traffic around them, so they started out of the gate with more accurate collision numbers, effectively running a panopticon on the other drivers on the road.
I don't see myself using any of these any time soon, I tend to drive and walk everywhere and don't see much point to paying someone else to drive barring extenuating circumstances. But assuming actual cost benefits are delivered to customers this might be pretty exciting.
The issue with self-driving is (1) how it generalises across novel environments without "highly-available route data" and provider-chosen routes; (2) how failures are correlated across machines.
In human driving, failures are uncorrelated and safety procedures generalise. We do not yet know whether, say, very wide use of self-driving will lead to conditions in which a few incidents kill more people than the technology ever hypothetically saved.
Here, without any confidence intervals, we're told we've saved ~70 airbag incidents in 20 million miles. A bad update to the fleet could easily eclipse that impact.
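For a rough sense of the error bars being asked about here, one could treat crash counts as Poisson. A sketch, reusing the 34 observed Waymo crashes and 78 expected human crashes cited elsewhere in the thread (purely illustrative, not Waymo's methodology):

```python
import math

# Figures pulled from elsewhere in this thread, used only for illustration:
expected_human = 78   # estimated human crashes over the same miles
observed_waymo = 34   # total Waymo crashes reported

# Normal approximation to a 95% Poisson interval: n +/- 1.96 * sqrt(n)
half_width = 1.96 * math.sqrt(observed_waymo)
lo, hi = observed_waymo - half_width, observed_waymo + half_width

print(f"observed {observed_waymo}, 95% CI roughly ({lo:.1f}, {hi:.1f})")
# Even the top of the interval sits well below 78, so the improvement
# survives the error bars -- but the bars are wide for counts this small.
```

The point stands either way: with counts in the tens, the intervals are wide, and a single bad fleet update could swamp the margin.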
There's also historical data. If you saw a spike in crashes for regular vehicles after Waymo arrives, it would be sus. But there is no such spike, which is further evidence Waymo isn't causing problems for non-AVs.
Of course anything is possible. But it's unlikely.
>So that’s a total of 34 crashes. I don’t want to make categorical statements about these crashes because in most cases I only have Waymo’s side of the story. But it doesn’t seem like Waymo was at fault in any of them.
Why is (1) an issue? Route data never gets worse.
Like downtown San Francisco?
Any time there is a detour, a construction zone, a traffic accident, a flooded road, or whatever else, your route data is not just "worse", it is completely wrong.
Waymo, as far as I recall, relies on pretty active route mapping and data sharing -- i.e., the cars aren't "driving themselves" in the sense of discovering the environment as a fully self-driving system would.
For example, imagine that Waymo is (somehow) far, far superhuman in its ability to avoid other cars doing dumb/bad things. It has a dramatic reduction in overall accidents because it magically eliminates accidents where the other driver is at fault. But, in some very specific circumstances, it can't figure out the proper rate to slow down at intersections, and it consistently rear-ends vehicles in front of it. This specific situation is very rare, so overall accidents are still low (much lower than for human drivers), but, in our made-up, constructed (and extremely nonsensical) hypothetical, nearly 100% of Waymo accidents are Waymo's fault.
So I don't think it's ridiculous to ask how many of the accidents Waymo has been involved in are the fault of the Waymo vehicle. It turns out that (assuming Waymo's side of the story is to be trusted), almost none of them are their fault, but it didn't have to be that way, even in the case where Waymo accidents were more rare than human accidents.
Because with Waymos, anything learned by one is immediately available to all of them. Therefore you have lots of miles but effectively one driver. Now if you take human drivers and pick only the best, how do your numbers look then? Many people have driven lifetimes without being in an accident, or at least not one that was their fault or reasonably predictable and avoidable by them.
Driving needs to be considered a lot more serious of a task than it currently is. There are many driving errors that are the primary cause of accidents but those people are never forced to get extended drivers education. Self driving cars are nice but if the goal is to prevent accidents it's a bit like requiring everyone to order prepared food to prevent injury while cooking.
At some point we need to get back to shaming those who are bad at something and requiring that they improve their skills, rather than restricting everyone else's freedoms.
Reading your comment before the article, my first thought was that "on the same roads" must mean literally the same roads - right?
But the article actually says:
> Using human crash data, Waymo estimated that human drivers on the same roads would get into 78 crashes
I agree that this is unclear. What data did they use, and why did they have to estimate at all? Shouldn't they be able to get the actual data for how many human drivers got into such accidents on this same exact set of roads over this same exact time period?
Meaning, humans choosing to drive in more difficult conditions probably means they sometimes drive in conditions that they shouldn't.
That's also an issue with humans though. I'd argue that traffic usually appears to flow because most of the drivers have taken a specific route daily for ages - i.e., they are not in a novel environment.
When someone drives a route for the first time, they'll be confused, make last-minute lane changes, slow down to try to make a turn, slow down more than others because they're not 100% clear where they're supposed to go, might line up for and almost make illegal turns, might try to park in impossible places, etc.
Even when someone has driven a route a handful of times, they won't know and be ready for the problem spots and where people might surprise them; they'll just know the overall direction.
(And when it is finally carved into their bones to the point where they're placing themselves perfectly in traffic according to the traffic flow and anticipating all the usual choke points and hazards, they'll get complacent.)
We're probably well past the point where removing all human-driven vehicles (besides bikes) from city streets and replacing them with self-driving vehicles would be a net benefit for safety, congestion, vehicle utilization, road space, and hours saved commuting. We could probably rip up a bunch of streets, turn them into parks or housing, and still have everyone get to their destinations faster and safer.
The future's here, even if it still has room for improvement.
Man Tests If Tesla Autopilot Will Crash Into Wall Painted to Look Like Road https://futurism.com/tesla-wall-autopilot
The problem with machines-following-rules is that they're trivially susceptible to adversarial violations of this kind of safety. No doubt there are mitigations and strategies for minimising risk, but it's not avoidable.
The danger in our risk assessment of machine systems is that we test them under non-adversarial conditions and observe safety --- yet under adversarial conditions they can quickly cause more injury than they ever prevented.
This is why we worry, of course, about "fluoride in the water" (vaccines, etc.) and other such population-wide systems... this is the same situation. A mass public health programme has the same risk profile.
Honestly, at this point I am more interested in whether they can operate their service profitably and affordably, because they are clearly nailing the technical side.
For example, data from a 100-driver study (see table 2.11, p. 29): https://rosap.ntl.bts.gov/view/dot/37370 Roughly the same number of drivers had 0 or 1 near-crashes as had 13-50+. One of the drivers had 56 near-crashes and 4 actual crashes in less than 20K miles! So the average isn't that helpful here.
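The skew is the whole story here. A toy illustration (made-up counts shaped like the pattern described above, not the study's actual data) of how far the mean drifts from the typical driver:

```python
import statistics

# Invented near-crash counts: most drivers have few, a handful have dozens.
near_crashes = [0, 0, 1, 1, 2, 3, 4, 5, 20, 56]

print(statistics.mean(near_crashes))    # 9.2 -- dragged up by the heavy tail
print(statistics.median(near_crashes))  # 2.5 -- closer to the typical driver
```

With a tail like that, "the average driver" describes almost nobody, which is why comparing Waymo to the average can mislead.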
https://www.msn.com/en-us/technology/tech-companies/waymo-ve...
You've a very narrow definition of novel, which is based solely on incidental features of the environment.
For animals, a novel situation is one in which their learnt skills for adapting to the environment fail and they have to acquire new ones. In this sense, drivers are rarely in novel environments.
For statistical systems, novelty can be much more narrowly defined as simply the case where sensory data fails a similar-distribution test against historical data --- this is vastly more common, since the "statistical profile of historical cases, as measured, in data" is narrow, whilst the "situations skills apply to" is wide.
An example definition of narrow/wide, here: the number of situations needed to acquire safety in the class of similar environments is exponential for narrow systems, and sublinear for wide ones. I.e., a person can adapt a skill from a single scenario, whereas a statistical system will require exponentially more data across the measures of that class of novel scenarios.
I imagine that will require a viable alternative. Right now they're in some senses superior to ride-sharing/taxis as they are (potentially) safer and cleaner. So there's no real market pressure forcing them to lower prices below that of Uber et al.
I'd think congestion would go up as AVs become more popular, with average occupancy rates per vehicle going down. Since some of the time the vehicle will be driving without any passengers inside. Especially with personally owned AVs. Think of sending a no-human-passenger car to pick up the dog at the vets office. Or a car circling the neighborhood when it is inconvenient to park (parking lot full, expensive, whatever).
I imagine this route data is an extra safeguard which allows them to quantify/measure the risk to an extent and also speed up journeys/reduce the level of interventions.
On net, yes, they are sensitive to features of the environment and via central coordination maintain a safe map of it.
The mechanism there heavily relies on this background of sharing, mapping, and route planning (and the like) -- which impacts on the ability of these cars to operate across all driving environments.
Can you provide some examples of what you mean?
It also couldn’t operate on the highway so the transit time was nearly double.
One shouldn’t underestimate how economical real human operators are. It’s not like Uber drivers make a ton of money. Uber drivers often have zero capital expense since they are driving vehicles they already own. Waymo can’t share the business expense of their vehicles with their employees and have them drive them home and to the grocery store.
I’m sure it’ll improve but this tells me that Waymo’s price per vehicle including all the R&D expenses must be astronomical. They are burning $2 billion a year at the current rate even though they have revenue service.
Plus, they actually have a lot of human operators to correct issues and talk to police and things like that. Last number I found on that was over one person per vehicle but I’m not sure if anyone knows for sure.
They compile human crash data from various sources (NHTSA, state data). But they are at a city level, not specific streets or areas. So they adjust the human benchmarks to be more representative of the areas where Waymo operates.
Sure, I would love to read a book while the car is driving me to visit family in the countryside, but practically I need city transportation to work and back, and to supermarkets and back, where I don't have to align to a bus schedule with 2-3 transfers, but can plan my trip 30 min in advance and get direct pick-up and drop-off.
If that would be possible then I see value in not owning a car.
The only time I take Uber in the bay area is to the airport (and when they approve Waymo for SFO I won't take Uber then either).
That's literally an edge case. For shorter trips, I've found it to be slightly cheaper (especially factoring in the lack of tips) with maybe a slightly longer wait.
Well. If you let people get away with murder by saying "Oh, I'm a confused little old lady, I thought the gas pedal was the brake." That's what I mean by robust enforcement. It is 100% not at all about data.
Consider this: A Waymo could be provably confused about a situation, we can directly peer into the state of its mind and prove that it is confused, despite driving safer than the average. Do you know what would happen to the Waymo program if one of its vehicles careened into a family of pedestrians and killed them all?
> something something about skills
Agency is kind of a myth. For human drivers, experts agree that road design has the biggest impact. Experts also think we should have better public transport.
The problems are bigger than Department of Unenactable Urban Fantasy or American Victim Blaming Institute. Self driving cars are exciting because they are like a cheat code around all these intractable social issues.
Cars are also the least utilized asset class, being parked 95% of the time [2].
AVs, by virtue of fleet-wide coordination and the ability to park anywhere rather than only at one's home or destination, could gain incredible efficiencies relative to the status quo.
Atop those efficiencies, removing both the constraint of having a driver and the constraint of excessive safety systems to make up for human inattentiveness means AVs can get drastically smaller as vehicles, further improving road utilization (imagine lots of 1- and 2-seaters zipping by). And roads themselves can become narrower because there is less room for error with AVs instead of humans.
Finally, traffic lights coordinating with fleets would further reduce time to destination (hurry up and finish).
Self-driving vehicles give us the opportunity to rethink almost all of our physical infrastructure and create way more human-friendly cities.
[1] http://shoup.bol.ucla.edu/PrefaceHighCostFreeParking.pdf
I’ve also seen that, although Uber and Lyft peak times seem correlated to each other, they seem uncorrelated to Waymo peak activity. But this might be stabilizing as Waymo ridership increases.
Insurance risk estimates are practical simplifications instead of trying to model how risky an individual driver is on a particular day at a particular time, in a particular area. Waymo's results are trying to compare to a statistically average driver with all other variables controlled.
Being better than "average" is a laughably low bar for self-driving cars. Average drivers include people who drive while drunk and on drugs. It includes teenagers and those who otherwise have very little experience on the road. It includes people who are too old to be driving safely. It includes people who habitually speed and are reckless. It includes cars that are mechanically faulty or otherwise cannot be driven safely. If you compile accident statistics, the vast majority will fall into one of these categories.
For self driving to be widely adopted the bare minimum bar needs to be – is it better than the average sensible and experienced driver?
Otherwise, if you replace the 80% of good drivers with Waymos and the remaining 20% stay behind the wheel, accident rates are going to go up, not down.
I personally have doubts as to whether this dataset exists. Whenever there's an accident, and one party is determined to be at fault, would that party be automatically considered not to be a sensible driver?
If we don't have such a dataset, perhaps it would be impossible to measure self-driving vehicles against this benchmark?
Sorting people by past behavior runs into survivorship bias when looking back and people who stop being sensible going forward. I’m personally a poor driver, but I don’t drive much so my statistics still look good.
How so? As you increase the density of stops in a bus network and increase the rate of arrivals, there will be fewer passengers per bus, going on journeys that approach the fastest they could be. Why look at the one thing that has a really good chance of "fast, cheap and good" and say, "lack foresight?" My dude, it's the only game in town!
Self-driving being the dominant form of driving is now a done deal, thanks to Waymo (and probably Tesla, though that's a policy failure imo), it's just a question of how long it takes.
My money is on a decade.
Highway is coming.
And scale will make it cheaper. It's only cheaper than Uber sometimes currently. That will change.
What happens if an upstart self-driving competitor promises human-level ETAs? Is a speeding Waymo safer than a speeding human?
We might drive every now and then, but come on, do you really think once this is ubiquitous and you can get in a (or your) car and then play a video game, take a nap, text on your phone, or doomscroll, that we're still going to want to drive all the time?
Nah.
Given that Waymo is testing in Tokyo now (and formerly Cruise), autonomous vehicles are clearly not mutually exclusive with well funded public transit.
In the water test, Rober has the Tesla driving down the center of the road, straddling the double yellow line. Autopilot will not do this, and the internal shots of the car crop out the screen. He almost certainly manually drove the car through the water and into the dummy.
One person tried to reproduce Rober's Wile E. Coyote test using FSD. FSD v12 failed to stop, but FSD v13 detected the barrier and stopped in time.[2]
Lidar would probably improve safety, but Rober's video doesn't prove anything. He decided on an outcome before he made the video.
https://www.mitre.org/news-insights/news-release/parts-annou...
This would vastly reduce the number of accident scenarios, be more efficient, and be much easier to automate. And would probably be good enough for 99% of use cases (i.e., work commute).
Obviously I don't seriously see anyone splurging on the infrastructure and bespoke vehicles for that. But I can dream.
You have about 50% more kinetic energy at 80 mph than at 65 mph, btw, if you find yourself needing to dissipate that energy rapidly.
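That figure follows directly from kinetic energy scaling with the square of speed; a one-liner to check:

```python
# KE = (1/2) m v^2, so at fixed mass the ratio is just (80/65)^2.
ratio = (80 / 65) ** 2
print(f"{ratio:.2f}x")  # ~1.51x, i.e. about 50% more energy to dissipate
```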
Let's get back to my main point, that Tesla's not having Lidar is stupid and I don't trust a self-driving car that can't adequately detect solid objects in its environment.
But... that's the reality. If we replace human drivers with self-driving cars at random, or specifically the bad drivers above, then we've improved things.
We are not going to easily improve the average human driver.
> Atop those efficiencies, removing both the constraint of having a driver and the constraint of excessive safety systems to make up for human inattentiveness means AVs can get drastically smaller as vehicles, further improving road utilization (imagine lots of 1- and 2-seaters zipping by). And roads themselves can become narrower because there is less room for error with AVs instead of humans.
The first part is mostly describing taxis, so the incredible efficiencies relative to the status quo can be loosely observed through them. Just subtract out wage and a slight "technological scale" bonus, and you can estimate what it would be. Then add in the expected investor returns for being a technology company and see the improvements disappear.
The second part, I wonder. Cars already average under 2 occupants, with most carrying just the driver. If this were all that was needed for significantly smaller cars, we would already have them. Lack of smaller cars is mostly a cultural issue, not a technical one.
All of the driver aids we have in modern cars (lane sensors, lane keeping, traction control, anti-lock brakes, and the rest of the plethora) are great and useful tools, but they are being used as a substitute for improving driver skill, when they should be acting as an aid to skill, not a substitution for it.
Being able to drive a forklift or other heavy machinery on a closed work site requires more training and more recertification than getting a driver's license in the United States. That to me is absolutely shocking, because a closed course has closed-course regulations in place, so there is less opportunity for randomness. On the road nothing is closed; you have complete randomness that you cannot factor in.

So why do we not regularly require people to get retested and show they understand the rules of the road? I'm not saying tests are the best measure, but when you see people who have been driving for two decades and can no longer pass the written test for a driver's license, which is an exceedingly easy test, they should have their license revoked and be required to take driver education.

When I say driver education I don't mean the fluff nonsense we give today. That is fine as an introductory course to teach people the basics of driving. What we really need is ongoing, continual education for drivers to maintain their license. Maybe that also means we need a more robust public transportation system, because there are definitely a lot of people who should not have a driver's license. They simply do not have the cognitive wherewithal to drive properly.
I suspect if you begin to look at the statistics accidents are caused by the same group of people repeatedly and those skew the numbers for everyone else.
I'm not defending any of those replies to Rober. In fact I find it quite annoying when dogmatic, sneery people happen to share my views. But the content of those replies does not change the content of Rober's videos, nor does it change the content of the video showing FSD passing the test.
> Let's get back to my main point, that Tesla's not having Lidar is stupid and I don't trust a self-driving car that can't adequately detect solid objects in its environment
In the video I linked to, the self-driving car did adequately detect solid objects in its environment. My main point is that your main point is based on a video that used non-self driving software engaged seconds before collision, edited and published to make people think it was FSD engaged much farther back from a standstill. And at least one other test (the water test) didn't even use autopilot, just manual driving. I don't know why Rober did that, but he did, and it tanks his credibility.
Again, I'm not arguing against lidar. I already said that lidar would probably improve safety. But Rober's video does not show that, as he didn't use Tesla's FSD software. The person who did showed that it stopped successfully.
In a world where lidar greatly improves safety, we would see the latest version of FSD go through the Wile E. Coyote barrier. That didn't happen, so we probably don't live in that world. In a world where lidar improves safety, though not as much, we'd see FSD stop successfully. And in a world where lidar doesn't improve safety (weird I know, but there could be issues with sensor fusion or lidar training data), we'd also see FSD stop successfully. Right now we don't know which of those worlds we live in. And we won't know until someone (probably Tesla) launches a vision-only robo taxi service. Then we can compare accident rates to get an idea of how much lidar improves safety. And if Tesla doesn't have a robo taxi service within the next year, that indicates that cameras alone aren't safe enough to run a robo taxi service.
Any comparison of Waymo's safety should be done against taxis/Uber/Lyft/etc. A comparison with the general driving public could also be interesting, or other commercial drivers, but those are not the most relevant cohorts. I don't know the numbers, but I wouldn't be surprised if taxis/Uber/Lyft are worse per mile than general drivers since they are likely under more stress, and often work for long hours. A Waymo is no less safe at 4am, but a Lyft driver who's been up all night is a lot less safe. I would also guess that they are less likely than the general (auto) driving population to own their vehicle. Depending on who owns a vehicle, how long they've been driving (years), there's going to be a lot of interesting correlations.
If self-driving cars became prevalent, I can absolutely see it leading to an increase in license revocation as a punishment for unsafe driving.
(1) Setting aside one's personal opinion on which is more dangerous to society: people on the sex-offender registry or drunk drivers.
In contrast, every time a flaw is discovered in a self-driving algorithm, the whole fleet of vehicles is one over-the-air update away from getting safer.
I followed Mark Rober on X to learn more about him and possibly understand more about his Tesla tests. Maybe he's a Musk/Tesla hater like Thunderf00t, I don't know. (yes, I'm on X - for entertainment purposes only)
Ideally this would be a municipal fleet and transportation just another utility like water, electrical, and broadband. Admittedly this would require strong political power and vision, as anything that remakes physical infrastructure does.
Agree small cars are a cultural/identity issue, tho usually a rather rational one as well, given safety vis-a-vis 7000lb SUVs. However, I don't think people's aversion to spending $20k+ on a city-only vehicle has any bearing on whether they would be willing to be taken places in one when it's the most convenient/safest/fastest way to get places. A city-wide transportation utility obviates most of the need/desire for individual car ownership.
To put it in tech biz terms, everything in tech is bundling or unbundling. Ownership of cars is the unbundled version of transport, and took over due to convenience and creature comforts. Now a new tech has come out that swings the pendulum toward bundled being more convenient & optimal.
Not a lot of 65yo people working 80hr weeks, slogging out 50-100mi commutes or plowing into moose while blinded by the 6am sun on their way back from 3rd shift.
My understanding is that the reverse is basically what happens in reality. Humans can sense that Waymo cars are "sus" and give them a wide berth so their "lawful to the point of violating the expectations of other drivers" behavior mostly doesn't cause problems, but when it does the other guy pays.
They're dominated by normal drivers who had a momentary lapse in judgment or attention. This is why running a police state that goes hard on DUI and vehicle inspections doesn't make the roads as much safer as its proponents would lead you to believe.
This is basically the same adoption path as every other labor saving tech.
Currently, on Wednesday March 26th at 8:34, a ride from Bar Part Time in the Mission to Verjus in North Beach is $21.17 with an estimated 8 minute pickup time. The same ride on UberX has an estimated 2 minute pickup time at a cost of $15.34. I could see it being cheaper if you tip 20% - but I don't tip nearly that high on Uber rides.
I will admit that I could possibly be self-selecting to peak times as I own a car in the city, so I only use ride share in the evenings; so it may very well be the case that the price/wait is more competitive at off-peak hours.
Furthermore, it's quite surprising to me that it seems that the human labor cost doesn't affect the price at all. The only price controls seems to be demand and the latent demand is enough to create a price floor where there is always a human that is willing to drive. It also seems like plain old logistics and traffic will prevent Waymo from providing enough supply to offer dirt cheap rides. The fact that a ride that would have cost me $5 in 2016 is almost 4x as much with "magic self driving technology" is not something I could have told my 2016 self.
Ex: "X% of humans do better than Waymo does in accidents per mile."
That would give us an intuition for what portion of humans ought to let the machine do the work.
P.S.: On the flip-side, it would not tell us how often those people drove. For example, if the Y% of worse-drivers happen to be people who barely ever drive in the first place, then helping automate that away wouldn't be as valuable. In contrast, if they were the ones who did the most driving...
Not all Waymo riders actually want the premium cars and we can’t assume that’s why they are choosing Waymo.
We have to assume that some and perhaps most riders would prefer to pay less to ride in a cheaper car but are mainly choosing Waymo because its autonomous (cool factor, the no-human factor).
Also, California mandates autonomous vehicles be fully electric by 2030. So Waymo literally has to be driving some kind of EV to comply very soon.
Jaguar’s I-pace was a poor-selling EV SUV from a struggling company with a lot of leftover inventory, so it’s almost a guarantee that Waymo got a great fleet deal on them.
So far, every time there's been self driving car progress, someone's been like, "okay yeah, but can they do <the next thing they're working on> yet??" like some weird gotcha. Tech progress is incremental, shocking I know.
Not saying that they wouldn't play a role in a functional public transport system, they'd be invaluable for the last two miles from your station to your destination.
But while our people transporting systems prioritise roads and cars, we will never have the high quality and safe public transport that high quality of life cities thrive on.
(And while I write this from NZ, with only limited experiences of LA and SF, we copied America, we went for sprawl and freeways, and it's strangling our largest city.)
I know and spend time with people who live in Berlin, Munich and Hamburg, that don't own a car, because they don't need to own a car.
They might rent one for a holiday into Italy, or they might use an app like Lime / Bird etc. to rent very short term a tiny car like a BMW i3 for a big grocery shop.
But because their cities are dense, and mix commercial with residential (e.g., a bunch of 5-storey apartment buildings with the ground/first floor being commercial, depending on where you are), they can often buy groceries at the local market on foot on their way home from the U-Bahn, or head down to the local Getränkehandel on a bike with a basket or two to buy their beer and bottled water.
Centralising commerce away from residential, especially with big box shopping areas, is predicated on car culture, and bakes in the need for cars.
TL;DR self-driving vehicles alone are a band-aid over an unsustainable transport culture and strategy.
But they'll form a critical part of a sustainable one.
Insurance companies can’t know your future behavior so must hedge for a percentage of future idiots being in any bucket. On the flip side some people in the multiple DUI bucket end up driving sensibly over the next 6 months.
But that's the OP's point: we aren't. Waymo crashing less than human drivers is a tautological result, because Waymo only lets the cars drive on roads where it's confident they can drive as well as humans to begin with.
If you actually ran the (very unethical) experiment of replacing a million people at random on random streets tomorrow with Waymo cars, you'd cause some carnage; they only operate in parts of four American cities.
A small fender bender is common in human drivers. A catastrophic crash (like t-boning into a bus) is rare (it'd make the news for example).
Autodriving, on the other hand, almost never makes fender benders. But they do t-bone buses on rare occasions, which also makes the news.
(Yeah, I know it means putting an actuarial cost on a human life, but statistics means mathing things up.)
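One way to "math it up": weight each crash type's frequency by its cost. All numbers below are invented purely to show the shape of the comparison, not real crash or actuarial data:

```python
# Hypothetical (frequency per mile, cost in dollars) for each crash mode.
human = {"fender_bender": (50e-6, 5_000), "t_bone": (0.5e-6, 10_000_000)}
robot = {"fender_bender": (5e-6, 5_000),  "t_bone": (1.0e-6, 10_000_000)}

def expected_cost_per_mile(profile):
    return sum(rate * cost for rate, cost in profile.values())

print(expected_cost_per_mile(human))  # ~5.25
print(expected_cost_per_mile(robot))  # ~10.03: far fewer small crashes,
# yet the rare catastrophic mode dominates the expectation
```

With these invented numbers, the robot has a tenth the fender benders but comes out worse overall, which is exactly why comparing raw crash counts without severity weighting can mislead in either direction.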
If you can visually detect the painted wall, what makes you think that cameras on a Tesla can't be developed to do the same?
And are deliberately deceptive road features actually a common enough concern?
That's a ridiculous scenario. If anything, impaired drivers should be more likely to choose an automated driving option. But no need to even assume that. The standard that matters is replacing the average.
My brother, you are describing a train.
> I'm imagining vehicles being two/four person pods zipping around.
Oh, never mind. Yeah, the reason that doesn't work is that if you're going to spend the money to build tram tracks or zip lines or whatever predefined path, then it is only economical to make a few set predefined paths. If you make a predefined path for every possible commute in the city, you will run out of money before getting very far. So ultimately, I would say a train is the closest thing to your ideal.
Melbourne has tram tracks throughout the city centre and the cars move about the trams.
This is just a joke about stereotypes btw. As for my actual opinion, I believe that pretty much every adult American needs to drive (unless they live in a big city), so trying to DQ anyone is effectively denying them the right to live independently. Rather, driving must be made easier with better technology and hopefully AVs.
I hadn't thought of it until just now, but I guess that means the average driver is a little drunk and a little high. Kinda like how the average person has less than 2 arms.
There would be a strong argument for simply banning the worst 1% of drivers from driving, and maybe even compensating them with lifetime free taxi rides, on the taxpayer's dime.
I would wager that those 20% of drivers are also disproportionately under the influence of drugs, impaired in some way (e.g., stroke, heart attack, etc.), or experiencing sudden unexpected events such as equipment malfunction.
Defensive driving is risk mitigation.
I see this whole thing as a business-viability narrative wherein Tesla would be even further under water if they were forced to admit that LiDAR may possess some degree of technical superiority and could provide a reliability and safety uplift. It must have taken millions of dollars in marketing budget to erase the customer experiences around the prior models of their cars that did have this technology and performed accordingly.
That's such a silly statement. One shouldn’t underestimate how UNeconomical real humans are.
In the past 12,000 years, human efficiency has improved, maybe, 10x. In the past 100 years, technological efficiency has improved, maybe, 1,000,000x.
Any tiny technological improvement can be instantly replicated and scaled. Meanwhile, every individual human needs to be re-trained and re-grown. They're extremely temperamental, with expensive upkeep, very short lifespans and even shorter productive lifespans.
In fact, humans have improved so little that, every time, they scoff at the new technology and say it will never take off, and they're still doing it 12,000 years later, right now, right above this post.
That said, I do think using only visual cues is a stupid self-imposed restriction. We shouldn't be making self-driving cars like humans, because humans suck horse testicles at driving.
1. People who normally take the bus are not incentivised to get their driving licence / cause a big accident
2. People already driving are still not rewarded, just not blocked
3. One may argue that if some of the borderline "not that dangerous but still..." drivers cross the line on purpose, it may still benefit society economically
(And a lot of provinces do bans measured in days if you hit 0.05)
In collisions that don't involve pedestrians, the damage to the car/object is generally proportional to the chance that someone was badly injured or killed - the only thing you gain by adding human life costs is accounting for the quality of the safety features of the cars being driven, which should be irrelevant for any comparison with automated driving. In collisions that do involve pedestrians, this breaks down, since you can easily kill someone with almost zero damage to the car.
So having these two stats per mile driven to compare would probably give you the best chance of a less biased comparison.
Public transit will work for some of the people some of the time. (That is, if it can even be built - highly recommend Ezra Klein's new book Abundance on ways to get out of this).
For the people that public transit won't work for, you need to come up with a new solution if you want to see cars go away: a solution that makes going from A to B some combination of easier, cheaper, faster, more convenient than driving. Or, a solution that brings B closer to A (like changing zoning laws or building cheaper housing in metro areas).
The numbers may look different, but does it matter, though? This is a benchmark, not a competition. Since we can't possibly get every human driver to the level of the best drivers, it's still a win for Waymos.
Yes, past underinvestment is bad. And yes, initially they are a band-aid, until yes they do become critical.
My excitement for self-driving tech isn't about the short term changes, but just how powerful a technology this is in the longer term. Ultimately this tech is not about cars, it's about the ability to automate the movement of mass. This is novel and meaningful.
An obvious medium-term implication of self-driving is that cities will ban human drivers, because that way cities can ditch a bunch of high-cost infrastructure required because of human fallibility. Up until that point, self-driving would be a band-aid. After that point, the dominoes start to fall.
1. Form factors change: cars become 1-4 person pods, stripped of the unnecessary bulk of excessive safety systems and unused capacity.
2. Ownership changes: municipalities will buy fleets of cheap mass-produced pods to replace extremely capex intensive public transport.
3. What is transported changes: now you have shipping drones dropping off standardized (reusable) packages into standardized intakes. Think The Box [1] but smaller.
4. Infrastructure changes: Roads narrow, parking becomes drop-off spots, larger cafes, actual parks. Cut and cover roads multiply, leaving more space above ground for people. Cities grow 20% without getting bigger, just by obviating the need for half their roads. The blight of various parking signs and warnings to drivers disappear. People can walk about freely or ride their bikes. It's quieter. The air quality improves.
5. Housing changes: Garages transform into rooms. People ditch bulky refrigerators in favor of ordering drone-delivered fresh produce in minutes. Drones deliver upstairs not just at street level. Pods become elevators. We've seen all this in science fiction... guess what the enabling technology is?
If you extend the implications of the automated movement of mass, the logical conclusion is the physical infrastructure of the city will transform to take advantage of every gain that creates. Cities dedicate 25-40%+ of their land mass to roads. In dense urban cores, 20% of their land mass is just parking spots. We can't route people-driven cars underground unless we really really mean it and build a highway. We waste a huge amount of space on transportation. We also shape all of our buildings around the constraints imposed by car-shaped objects and all their various externalities, including noise and air pollution.
My belief is that self-driving is easily the most transformative tech to hit cities since the car, and may exceed the impact that cars have had on the built world.
At some point you’ll see a car careen across three lanes into the curb due to slick roads, and you’ll be like ehhh, I’ll just cut through with this route and move on about your day.
After driving for 20 years, about the only time I got scared in a novel situation was when I was far from cell service, next to a cliff, sliding fast down a mountain in deep mud on street tires, due to an unexpected downpour in southern Utah. I didn’t necessarily know what to do, but I could reason it out.
I don’t really find “using a new route” difficult at all. If I miss my exit, I’m just going to keep driving and find a U-turn — no point to stress over it.
Expect to pay for the privilege of driving yourself and putting others at risk. If you really want to drive yourself, you'll just have to skill up to get a license and proper training, get extra insurance for the increased liability, etc. And then if you prove to be unworthy of having a license after all, it will be taken away. Because it's a privilege and not a right to have one and others on the road will insist that you are competent to drive. And with all the autonomous and camera equipped cars, incompetent drivers will be really easy to spot and police.
It will take a while before we get there; this won't happen overnight. But that's where it's going. Most people will choose not to drive most of the time for financial reasons. Driving manually then becomes a luxury. Getting a license becomes optional, not a rite of passage that every teenager takes. Eventually, owning cars that enable manual driving will become more expensive or may not even be road legal in certain areas. Etc.
And of course around 80% involve youth, testosterone and horsepower in some combination. The rest are almost always weather or terrain related in some way. Massive pileups on the highway in the winter and upside down vehicles on waterways in the summer.
Very rarely does a fatal accident happen without several factors being present.
Hardly. We drive hundreds of billions of miles every month and trillions every year. In the US alone. You're more likely to die from each of the flu, diabetes or a stroke than a car accident.
If those don't get you, you are either going to get heart disease or cancer, or, most likely, involve yourself in a fatal accident, which will most likely be a fall off a roof or a ladder.
You would save more lives by outlawing motorcycles; however, it would just be the motorcyclists themselves.
Another thing people don't consider is that not all seats in a vehicle are equally safe. The driver's seat is the safest. Front passenger is less safe, but still often twice as safe as sitting in the backseat. If you believe picking up your elderly parents and then escorting them in your backseat is safer than them driving alone, you might be wrong. This is a fatality mode you can easily recognize in the FARS data. Where do most people in a robotaxi sit?
Your biggest clear win would be building better pedestrian infrastructure and improving roadway lighting to reduce pedestrian deaths.
> Not a lot of 65yo people working 80hr weeks, slogging out 50-100mi commutes or plowing into moose while blinded by the 6am sun on their way back from 3rd shift.
People who drive in that state are one of two things: irresponsible or poor with no other choice.
Driving regularly while tired and sleep deprived is a big factor in accidents ... and that many people somehow see it as heroism rather than irresponsibility is a cultural issue.
I 1000% agree with you, but unfortunately in some countries like the US that kind of argument leads to nowhere, because people think driving is a human right and also the entire country is built around having a car so you are actually truly screwed if you don't have one.
>> Autonomous driving removes the economic necessity of having one. Just get a proper car that can drive you to work.
Sure, except it doesn't exist, and I honestly doubt it ever will (in the next 50-100 years). If you need autonomous driving that takes you to your destination, that already exists - it's called a taxi.
The real question is why tip on either of those? You pay through the app, and the driver is compensated for their time, so why tip extra? If you feel that Uber/Lyft are mistreating their drivers, stop using their service rather than paying them on the side.
Producing more small things isn't usually more efficient than producing fewer equivalent large things. You can't just will some "pods" into existence that are magically cheaper (per person!) than trams, trains and buses. Also, once you have a system running, capex and opex aren't that different - replacing a set number of vehicles per year is pretty much the same thing as an operating expense.
> 5. Housing changes: Garages transform into rooms. People ditch bulky refrigerators in favor of ordering drone-delivered fresh produce in minutes. Drones deliver upstairs not just at street level. Pods become elevators. We've seen all this in science fiction... guess what the enabling technology is?
My prediction is that no one will ever be fine with the amount of noise a "drone" (read helicopter) makes, especially as a replacement for the very noise-free and orders of magnitude more efficient elevators we have right now.
[Edit]: found the policy: death: £2,500; arm or leg: £2,000; blindness in one or both eyes: £2,000
There's a good chance the driver will zoom past everything else, weaving between lanes accordingly, and you'll wish you were one of the slow vehicles. Although I'd be less concerned if the seatbelts worked.
"Grok is aspirationally a maximally truth-seeking ai, even if that truth is like politically incorrect"
Meanwhile, he deletes your account if you offend him
> it cost twice as much as an Uber
Surely incidental since the typical price per ride is about the same. Generally though, the relationship between the cost to operate a service profitably and the price presented to the user is very complex, so just because the price happens to be x right now doesn't tell you much. For example, something like 30% of the price of an iPhone is markup.
> while having a longer wait time for a car
Obviously incidental?
> It also couldn’t operate on the highway so the transit time was nearly double.
Obviously easily fixable?
> One shouldn’t underestimate how economical real human operators are.
There's nothing to underestimate, human drivers don't scale the way software drivers do. It doesn't matter how little humans cost, they are competing with software that can be copied for free.
> Waymo can’t share the business expense of their vehicles with their employees
They can share parking space, cleaning services, maintenance, parts for repair, etc.
> I’m sure it’ll improve but this tells me that Waymo’s price per vehicle including all the R&D expenses must be astronomical.
Obviously, they're in the development phase. None of this matters long term.
> They are burning $2 billion a year at the current rate even though they have revenue service.
"The stock market went up 2% yesterday so it will go up 2% today too and every day after that."
> Plus, they actually have a lot of human operators to correct issues and talk to police and things like that.
Said operators are shared between all vehicles and their number will go down over time as the driving software improves.
---
To sum up, every single part of what Waymo is trying to do scales. Every problem you've mentioned is either incidental or a one-off cost long term.
The real risk is the opposite, cars bunched together at the same speed. This is where pileups occur, somebody at the front does something stupid and the people at the back end up colliding.
If you have to reach that hard to make your point, it's not a great point.
Adding to the sibling's statistic of 40k deaths a year:
> Motor vehicle crashes were the leading cause of death for children and adolescents, representing 20% of all deaths.
Generally you have to do a lot to get banned for life - remember Germany is run by car lobbies; they are not interested in banning people from driving.
No. It's 1.25 per 10,000 per capita. Most people understand the risk ahead of time and yet still choose to drive. They clearly don't think it is.
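For what it's worth, that figure is roughly annual US road deaths divided by population. A quick sanity check with round (approximate) numbers:

```python
deaths_per_year = 40_000        # approximate annual US road deaths
us_population = 330_000_000     # approximate US population

risk_per_10k = deaths_per_year / us_population * 10_000
print(f"~{risk_per_10k:.2f} road deaths per 10,000 people per year")
```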
> It's 13x as many people as died in 9/11
And 50x as many people as died in 9/11 die of accidental self-inflicted injury. This is an absurd metric.
Joke aside, this is an interesting article. I wonder what the chances would be for a human driver to avoid the crashes in the circumstances described in the article (if at all). Clearly autonomous vehicles are passive by design in those situations.
> Self-driving vehicles give us the opportunity to rethink almost all of our physical infrastructure and create way more human-friendly cities.
Ok, that's just giving me a stroke. We already have that. It's called public transport, walkability, bikeability. These have the upside of being extremely well understood and use technology that's available today. We could start seeing benefits within a few years, not decades.
Even in your dream scenario, 50 or so years from now, cars would still have a lot of the same downsides they have today of using way too much space and causing way too much pollution per person for the utility they provide.
I normally ask what the error rate of the humans doing that task is. It's never 0%. So the questions then become: can we beat that number, can we iterate on it, can we improve it in a logical way?
Humans have the benefit of being able to learn by example, and once something is explained the error rate falls (for a while, until they forget), so can we show the same mechanism in AI?
Quite often people will look for "a system" to do a role. I talk about "a process". I'm interested in how we continuously improve that process, which is much easier with a computer that can be trained than a human workforce.
This takes adjustment from leaders in the organisation, because they realise that an AI being introduced into a role isn't like Office or Photoshop, where you just buy the thing, and now you have a license and you're up and running: it's an investment in a sort of very cheap member of staff who helps a very expensive member of staff improve performance, accuracy and consistency. Once you've proven that it's working at or above the bar of the human, you then get to scale for less money than scaling the humans.
A lot of my meetings use the metaphor "we're not trying to build a robot that automates everything, we're trying to give your highly skilled workforce better tools to get more done, more safely, more accurately, and for less money". Some people get it, some don't.
Waymo is doing this, but with a much higher level of automation by removing drivers - if they can beat humans on safety and consistency, and reduce the workforce to monitors (each watching even 2 cars, perhaps many more), you've massively changed the economics of private hire transport.
In the same way I don't need to outrun the bear that's chasing us (I only need to outrun you), I don't need a perfect AI system that is flawless, I just need something that's better than you.
And, as per some other comments here, I find it interesting that people are showing "flaws" in these systems (walls painted like a road), that would fool humans too.
Years ago someone asked what would happen if they hailed a self-driving car to pick them up and then got on a train, to see if the car would try to follow them. They suggested this was a "hack" in the logic. But I wonder: would you do this to a human driver? What would you expect them to do? Follow you? Not follow you? Why are you trolling cars and their drivers?
The whole debate around this stuff just needs to grow up, frankly.
So they price themselves out.
Of course, they may then decide not to have insurance at all. In most countries that is illegal, and doing it in a premeditated way is criminal - something else entirely.
Not sure if insurance is mandatory in the US or not - I assume you just get into a gunfight with the other party instead? /s
Also, a very significant portion of drivers overestimate their driving skills, in particular older drivers. Having only been scared once in 20 years would likely make someone lenient and dull their senses as nothing requiring notable effort or attention ever seems to happen to them.
SFMTA article: San Francisco Adopts Demand-Responsive Pricing Program to Make Parking Easier
Ref: https://www.sfmta.com/blog/san-francisco-adopts-demand-respo...
Some nice analyses of the most expensive places to park in SF with demand responsive pricing:
https://www.sfgate.com/local/article/sf-most-expensive-parki...
https://sfstandard.com/2023/10/01/parking-meter-san-francisc...
But for whatever reason, it seems such people end up with far lower (yet still expensive) insurance quotes at more like $4k/year.
Government generally budgets deaths at $3M-$30M per person killed. Yet a car accident that kills someone usually doesn't result in any payout at all.
That in turn means insurance companies are offering risky people lower rates than economists would suggest for the societal cost/risk.
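A back-of-the-envelope version of that argument, with hypothetical numbers: if a multiple-DUI driver carries several times the baseline fatality risk, the implied societal cost alone can exceed the quoted premium. The multiplier and value-of-life figure below are assumptions for illustration.

```python
# Illustrative assumptions, not actuarial data.
baseline_fatal_risk = 1.25e-4   # ~1.25 per 10,000 per year, from upthread
risk_multiplier = 5             # assume a risky driver is 5x the baseline
value_of_life = 10_000_000      # mid-range of the government figures quoted above

expected_societal_cost = baseline_fatal_risk * risk_multiplier * value_of_life
premium = 4_000                 # the quoted ~$4k/year insurance quote

print(f"implied societal fatality cost: ${expected_societal_cost:,.0f}/year "
      f"vs ${premium:,}/year premium")
```

Under these assumptions the implied societal cost from fatality risk alone already exceeds the premium, before counting injuries and property damage.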
> If you're under 25 and are on your red Ps, you must not drive with more than one passenger who is under 21 between 11pm and 5am.
(Red Ps means the first year of being able to drive unsupervised)
Nice Freudian slip there.
Rich western Europe has fewer car accidents because they, broadly speaking, don't let poor people drive and work harder to cultivate a law-abiding populace.
I bet the actual payouts to families are similar for normal deaths that don't result in a media spectacle and the court of public opinion being involved.
And an economic/tax policy issue. Some increases in size are due to legally mandated safety features, while even more of the increased adoption of SUVs in the US is indirectly due to the CAFE standards.
A simple example. I was coming out of a business driveway, turning left onto a two lane road. It was dark out with no nearby street lights. There was a car approaching from the left. FSD could see that a car was coming. However, from the view of a camera, it was just a ball of light. There was no reasonable way the camera could discern the distance given the brightness of the headlights. I suspected this was the case and was prepared to intervene, but left FSD on to see how it would respond. Predictably, it attempted to pull out in front of the car and risked a collision.
That kind of thing simply can not be allowed to happen with a truly autonomous vehicle and would never happen with lidar.
Hell, just this morning on my way to work FSD was going to run a flashing red light. It's probably 95% accurate with flashing reds, but that needs to be 100%. That being said, my understanding is the current model being trained has better temporal understanding, such that flashing lights will be more comprehensible to the system. We'll see.
Willing to bet this is not true.
They needed Karen's support to get the whole thing passed so they added a "and we won't let them drive after dark" clause to get it.
With numbers like that you're fundamentally running against the people's willingness to comply (which includes the cop's willingness to enforce).
> you have to supply hair samples
That seems like an idea that could be useful, over here, but we have a pretty strong sin lobby, so it's unlikely to happen.
It's where a bunch of cycling nutters (I'm one of them) post local news stories where a driver has crashed into a building ("It wasn't wearing hi-viz!")
Isn't it rather saying that you're not experienced enough to do this? Speaking only for myself, I passed my driving test no problem, and after a couple of months of driving I thought I was a great driver. Yet looking back now with the benefit of experience, I know for a fact I did some really stupid things that first year of driving, and it was only luck rather than skill that kept me from getting into an accident.
> Approximately 1.19 million people die each year as a result of road traffic crashes.
> Road traffic injuries are the leading cause of death for children and young adults aged 5–29 years.
Falls from a ladder/roof do not come close to that as far as I've been able to find. They'd be a subset of falls from a height, which is a small subset of unintentional falls/slips, which is still globally under road accident deaths.
It's true that diabetes, strokes, heart disease, flu, etc. do cause more deaths, but we're really into the absolute biggest causes of death here. Killing fewer than strokes is the lowest of low bars.
I think there's also the argument to be made in terms of years of life lost/saved. If you prevent a road accident fatality, chances are that person will go on to live many more healthy years/decades. If you prevent a death by stroke, flu, or even an at-home fall, there is a greater chance that person is already in poor health (to have potentially died from that cause) and may only be gaining a few extra months.
While I like watching those videos I suspect a fair share of them has a deeper explanation than "being an idiot". But it's a lot less fun to watch when you imagine the guy driving may be in a desperate position.
Btw the meaning of idiot is "someone ignorant". As contextless external watchers of a crash, the real idiots are probably you and me, the YouTube watchers.
1. If the self-driving software detects an anomaly, chooses to disengage seconds before a crash, and then crashes while technically not in self-driving mode, is that the fault of the software or the human backup driver? This is a problem especially with Tesla, which will disengage and let the human take over.
2. When Waymo claims to have driven X million "rider only" miles, is that because the majority of miles are on a highway, which is easy to drive with cruise control? If only 1 mile of a trip covers the "hard parts" that require a human - getting in and out of tight city streets and parking lots - while 10 miles are on the highway, it is easy to rack up "rider only" miles. But those trips are not representative of true self-driving trips.
3. Selection bias. Waymo only operates in 3-4 cities, and only in chosen weather conditions. It's easy to rack up impressive safety stats when you avoid places with harsh weather, poor signage, or complicated street patterns. But that's not representative of the real-world driving conditions most people encounter daily.
The NTSB should force them to release all of the raw data so we can do our own analysis. I would compare only full self-driving trips, end to end, on days with good weather, in the 3-4 cities where Waymo operates, and then see how much better they fare.
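Point 1 is really an attribution-rule question, and with raw data it's trivial to apply. A sketch of one possible rule (the window length and record format are made up for illustration): any crash where the software was engaged at impact, or disengaged within the window beforehand, counts against the software.

```python
# Hypothetical attribution rule: if the software disengaged within N seconds
# of the crash, count the crash against the software, not the human.
ATTRIBUTION_WINDOW_S = 60

def attribute(crashes, window=ATTRIBUTION_WINDOW_S):
    """Return (software_fault_count, human_fault_count) for a list of records.

    Each record has "disengaged_s_before": seconds between disengagement and
    impact, or None if the software was still engaged at impact.
    """
    software, human = 0, 0
    for c in crashes:
        t = c["disengaged_s_before"]
        if t is None or t <= window:
            software += 1
        else:
            human += 1
    return software, human

crashes = [
    {"disengaged_s_before": None},  # engaged at impact      -> software
    {"disengaged_s_before": 3},     # bailed 3s before       -> software
    {"disengaged_s_before": 600},   # human driving for 10min -> human
]
print(attribute(crashes))  # (2, 1)
```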
1. Waymo is autonomous 100% of the time. It is not possible for a human to actually drive the car: even if you dial in to support, all they can do is pick from various routes suggested by the car.
2. No, I'd guesstimate 90%+ of Waymo's mileage is city driving. Waymo in SF operates exclusively on city streets, it doesn't use the highways at all. In Phoenix, they do operate on freeways, but this only started in 2024.
3. Phoenix is driving in easy mode, but San Francisco is emphatically not. Weatherwise there are worse places, but SF drivers need to contend with fog and rain, hilly streets, street parking, a messy grid with diagonal and one-way streets, lots of mentally ill and/or drugged up people doing completely unpredictable shit in the streets, etc.
If you think FSD is garbage, then you've clearly never used it recently. It routinely drives me absolutely everywhere, including parking, without me touching the wheel once. Tesla's approach to self-driving is significantly more scalable and practical than Waymo's, and the forever-repeated, misleading, tired arguments saying otherwise really confuse me, since they're simply not founded in reality.
If the current state of commercially available ADAS was dramatically reducing accident rates, then Teslas etc would have lower insurance rates. And yet they instead have higher insurance rates.
In the UK you get a minimum 12-month ban and an unlimited fine, which is based on income and can be quite big (Dec of Ant and Dec got an £86,000 fine). I don't think this approach is uncommon in Europe.
https://waymo.com/blog/2024/05/fleet-response/
It's possible to put the car in manual mode, but that requires a human behind the wheel.
I have a Tesla myself, and while it's a great car, it's a long, long way from actual autonomous driving and their own stats bear this out: it can manage 12-13 miles without driver interruption, while Waymo is clocking ~17,000. Hell, where I live, Autopilot can barely stay in lane.
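Taking the two figures quoted above at face value (they come from this comment, not official statistics), the gap between the systems is easy to put in perspective:

```python
# Rough comparison of the miles-per-intervention figures quoted above.
tesla_miles_per_intervention = 12.5      # "12-13 miles"
waymo_miles_per_intervention = 17_000    # "~17,000 miles"

ratio = waymo_miles_per_intervention / tesla_miles_per_intervention
print(f"Waymo goes roughly {ratio:,.0f}x farther between interventions")
```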
In such a case, they might not be considered legally at fault, but they would still be, in practical terms, a significant cause of the crash.
...Once autonomous cars can go everywhere human-driven cars can, in all the conditions humans can drive in.
Remember that Waymo is still very restrictive in where they choose to operate.
Teslas have had higher insurance rates for 5+ years, some of which is due to wait times on repairs, cost of repairs, and also accident rates.
A private chef sounds good to me; having to go and collect specially marked "safe" meals at the supermarket with a card that's only given to adults the state deems incapable of looking after themselves, not so much.
Or, when you do have an accident it's typically more expensive to repair.
No need. Waymo releases raw data on crashes voluntarily: https://waymo.com/safety/impact/#downloads
They also compare with human drivers only in places they operate and take into account driving conditions. For example, they exclude highway crashes in the human benchmarks because Waymo does not operate on highways yet.
Waymo is open about their comparison methodology and it would be helpful to read it (in the same link above) instead of assuming bad faith by default.
Tesla, on the other hand, is a completely different story.
So yeah, it's all about "not being mature enough".
1. Go as fast as possible without getting fined, violating speed limits whenever it's very likely no fine will result, using maximum acceleration as needed (the latter configurable by the rider)
2. When there's congestion on the lane they need to take, take a free lane instead and then merge into the correct lane at the last possible opportunity, effectively skipping the queue
3. Run red lights when it can determine there is no enforcement camera on the traffic light, no police and no traffic
4. Aggressively do not yield to pedestrians unless unavoidable on crosswalks, swerving on the lane going the opposite direction as needed if pedestrians are on the side the vehicle is in
5. Aggressively pass slower drivers using opposite-direction lanes even when forbidden as long as the software can determine that it can reenter the lane before colliding with incoming traffic
6. Use barred parts of the road including sidewalks to bypass traffic when it's feasible to do so
7. Aggressively flash lights and tailgate on highways when on the fastest lane but behind a slower vehicle
8. When an emergency vehicle passes by, follow it closely to take advantage of its right of way
9. Aggressively do U-turns even when forbidden if it is determined to be possible
10. Ignore stop signs when it can see there is no traffic, and when it can't determine that plan to do maximal braking at the last moment if it sees any (the maximum braking needs to be rider configurable)
not that it can drive safer, but make things safer for everyone?
If the crashes it was involved in were the result of the Waymo following the rules... how is that any better?
Fewer crashes overall, not just that it crashes less. Isn't that the real goal?
Just doing this until out of gas sounds like the easiest way to satisfy these constraints.
I wasn't going to comment, but I couldn't get this image out of my head nor stop laughing.
Well, until I drove a Dodge Charger R/T for a week. I could get there in 15 minutes. It had insanely good handling, amazing braking (enabling more aggressive driving), and absurd acceleration and handling at high speeds.
I concluded that was the last thing I needed and I drive a 50mpg Beetle now.
The evidence so far is that they are throttling demand by keeping the prices above that of an Uber. It's definitely still an experiment. If the experiment is successful, expect to see more cities and more vehicles in each city in expanding service areas.
There are step changes that have to be made to keep waymo expanding. The tariff situation is blocking plans to have dedicated vehicles from China. That has to get sorted out. The exact shape of the business model is still experimental.
Of course it's got to be safe. But there are dozens of dull details that all have to work between now and having a profitable business. The best indicator of a plausible success is that Waymo appears to be competent at managing these details. So far anyway.
Seriously, don't make these statements until you have data against drivers in Toronto or Chicago or Boston or NYC. Humans in snow, freezing rain, ice, and thick fog still win against your AI. Show me the data stating otherwise, or address the cherry-picked data.
With a valid employment reason (such as snow plow operator) you can get an employment only permit. Your insurance will easily be $1000 a month just for basic liability. I’ve known a few guys in this situation.
The bigger problem is people who are judgment proof and don’t mind spending some time in jail. They just drive drunk over and over and don’t care if their car (which is often a relative’s) gets impounded. They have no valid licence and no insurance. Short of permanent incarceration, there isn’t much that can be done about such people.
Driving should not be a privilege exclusively for rich people. Poor people cannot afford to pay an Uber to drive them around and can’t afford to buy some Tesla with FSD either. Waymo would be grossly unaffordable for a 120 mile daily round trip commute.
In Australia I met people with even longer commutes - going 150km to get to a job, mostly due to how unaffordable housing has become.
If you want to take away people’s cars, you need to make sure they can access employment and have affordable, safe housing. Remember that half the population makes less than the median income.
It does happen on occasion. Seasonally, sublimating snow banks can create fog that intense for hours if conditions are right. Also heavy smoke can create similar conditions.
Good luck if such a person hits you; they’ll simply drive off. Recently a friend of mine had a fender bender with someone else, most likely his fault. That person didn’t have a valid registration or insurance and wasn’t at fault but begged to just go without calling the police. My friend handed them the cash out of his pocket since he felt bad for damaging their car, but they did NOT want to see the police.
The only way to enforce not having expired tags/no licence/no insurance is strict police enforcement. A lot of Americans don’t like that and so police agencies end up being lenient, preferring to focus on more violent crimes instead of just trying to pull every car with expired tags over.
But, like curing a dreaded disease, it's often a long, difficult grind, not something that "will for sure work by the end of this year" as has been promised every year for the last 10 years. No pharma company would get away with that hype.
I know some really bad drivers that have almost no 'accidents', but have caused/nearly caused many. They cut off others, get confused in traffic and make wrong decisions, etc...
Waymos, by media attention at least, have a habit of confusion and other behaviour that is highly undesired (one example going around a roundabout constantly) but that doesn't qualify as a 'crash'.
A driver -- legally, logically, practically -- should always maintain a safe following distance from the vehicle in front of them so that they can stop safely. It doesn't matter if the vehicle in front of them suddenly slams on the brakes because a child or plastic bag jumped in front of them, because they suddenly realized they need to make a left turn, or mixed up the pedals.
"oh you hit a mailbox during an ice storm that we paid out $50 for after your deductible, that'll be a $400/6mo increase in premiums for the next five years"
Lane splitting, driving 100mph when there’s enough space to do so, and generally being a maniac can get you places pretty quick. It can also pretty quickly make you dead. I survived 8 years of this commuting but I’d never do it again.
Well, yeah? Or rather, if it's not, then I think the burden of proof is on the person making that argument.
Even taking your complaints at their maximum impact: would you rather be delayed by a thousand confused robots or run over by one certain human?
I experience Waymo cars pretty much every time I drive somewhere in San Francisco (somewhat frequent since I live there). Out of hundreds of encounters with cars I can only think of a single instance where I thought, "what is that thing doing?!"... And in that case it was a very busy 5-way intersection where most of the human-driven cars were themselves violating some rule trying to get around turning vehicles and such. When I need a ride, I can also say I'm only using Waymo unless I'm going somewhere like the airport where they don't go; my experience in this regard is I feel much more secure in the Waymo than with Lyft or Uber.
....Europe....
Depending on the relative rates and costs of each type of mishap it could go either way. There is a crossover point somewhere.
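A back-of-the-envelope sketch of that crossover, with entirely hypothetical rates and costs (none of these numbers come from real data; they only illustrate that the comparison can flip either way):

```python
# Hypothetical comparison of two mishap types: sudden-stop crashes vs.
# tailgating crashes. All rates (per million miles) and costs (USD) are
# invented for illustration only.

def expected_cost_per_million_miles(rate_a, cost_a, rate_b, cost_b):
    """Total expected mishap cost per million miles for two mishap types."""
    return rate_a * cost_a + rate_b * cost_b

# Scenario 1: human drivers -- fewer sudden stops, more tailgating crashes.
human = expected_cost_per_million_miles(rate_a=0.5, cost_a=8000,   # sudden-stop
                                        rate_b=3.0, cost_b=6000)   # tailgating

# Scenario 2: AVs -- more sudden stops, far less tailgating.
av = expected_cost_per_million_miles(rate_a=2.0, cost_a=8000,
                                     rate_b=0.2, cost_b=6000)

print(f"human: ${human:,.0f} per million miles, AV: ${av:,.0f}")
```

With these made-up inputs the AV side comes out ahead, but nudging any rate or cost moves the crossover point, which is the commenter's argument.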
The fact that you're coming right out the gate with a false dichotomy and appeal to emotion on top tells me that deep down you know this.
No, it’s created by and maintained by humans. You’re shifting the cost of a driver to software engineers, data analysis, people mapping out roads, etc.
This is why Uber doesn’t make any money, despite being more expensive for the customer as compared to traditional taxi services. Coordinating Ubers across the country costs a lot of servers and a lot of engineers. Sure, the system is automatic - maintaining it isn’t.
So you end up with a lose-lose-lose scenario. The ride is more expensive for the customer. The driver makes less money. And Uber bleeds hundreds of millions a year.
Technology is neat, yes, but often we don’t stop and think “wait… does this make sense?”
We don’t know if autonomous cars make any economic sense. They could end up not. It doesn’t help that 99% of tech companies in the transportation space are just making trains with extra steps. Like, guys - have we even done feasibility analysis?
For comparison, to get a similar penalty by speeding you would have to exceed the speed limit by 51 km/h (32 mph).
There are many additional related offences you could commit, with different consequences. Repeat offences to the above, for example, are punished more severely: you get 3 months instead of 1 and the fine is doubled and tripled for the second and third offence, respectively. Already with a blood alcohol level of 0.03% you risk legal consequences, e.g. if you make an error while driving. If you endanger someone else (or property) with that level you are committing a crime, will lose your license, and can go to prison. If you are in your probationary period (two years after acquiring your license), any nonzero level is an offence.
Losing your license is generally temporary. You are blocked from re-acquiring it for some time, depending on the offence (at least 6 months, but can be multiple years). You have to complete an MPU, which certifies your ability to safely drive. For alcohol based offences, this would include demonstrating that you have reduced your consumption significantly. This can be quite harsh; you may, for example, be required to show complete abstinence for a period of one year. Of course, you are also looking at costs close to $1000 for the MPU alone. It is possible to get permanently blocked from driving, but it's quite difficult, I believe.
Sure, the current system won't make those bad drivers pay up for that behavior, but dumping a bunch of them onto the roads is a net negative overall.
* a crash with a fatality
* a crash with an injury
* any crash at all
* a driverless car going around a roundabout constantly
For me, the answer is pretty clear: crashes per distance traveled remains the most important metric.
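To make "crashes per distance traveled" concrete: with counts this sparse, the confidence interval matters as much as the point estimate. A rough sketch using a normal approximation to a Poisson count (the 34 crashes over 20 million miles echoes figures mentioned in the thread and is a placeholder, not an official number):

```python
import math

def crash_rate_ci(crashes, miles, z=1.96):
    """Point estimate and ~95% CI for crashes per million miles,
    using a normal approximation to the Poisson count."""
    rate = crashes / miles * 1e6
    half_width = z * math.sqrt(crashes) / miles * 1e6
    return rate, max(0.0, rate - half_width), rate + half_width

# Placeholder numbers: 34 crashes over 20 million miles.
rate, lo, hi = crash_rate_ci(34, 20e6)
print(f"{rate:.2f} per million miles (95% CI {lo:.2f}-{hi:.2f})")
```

The interval is wide relative to the estimate, which is why comparisons between fleets on a few dozen events should be hedged.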
Obligatory “almost nobody in the US chooses to drive” comment.
Driving in the US is a lifeline. It’s closer to food and shelter than a product or action. Remaining economically afloat in the US without a car is extraordinarily difficult. Many people, especially poor people, would much rather lose their job or health insurance than their car.
Sorry, but if you're having a car crash every 6 months or less, you shouldn't have a license.
Driving a car is privilege granted to you by your state, and this state is negligent in its protection of everyone else by letting this idiot continue to drive. Sell your car, take the bus, move closer to work, I don't care.
More than 3 at-fault crashes in a year or more than 10 at-fault crashes ever and you should permanently lose your license. That seems more than generous enough.
I'd immediately donate money to and vote for any politician stupid enough to say we should revoke licenses from the worst 1% of drivers.
Revoke their licenses, let them figure it out. Get a ride from friends. Take the bus. Move closer to work. You're a danger.
If they break the law and drive anyway, put them in jail.
Not crashing yourself is not enough. You could accomplish that by driving 10mph always, but you'd be causing accidents all around you.
But agreed on your first part. When I actively rush to get home, but have the car I passed 10 mins ago show up behind me at a stoplight, it makes me realize it’s not worth it.
I view traffic as a form of a packet delivery system with a bit of time tolerance in either direction. Trying to rush through is fruitless and dangerous.
But you saying you cut your drive by 50% makes me question everything! Is it a busy commute?
So if we're just measuring how many crashes the robot has been involved in, we can't account for how many crashes the robot indirectly caused.
And I repeat, that's a contrived enough scenario that I think you need to come to the table with numbers and evidence if you want to make it. Counting crashes has been The Way Transportation Safety Has Been Done for the better part of a century now and you don't just change methodology midstream because you're afraid of the Robot Overlord in the driver's seat.
Science always has a place at the table. Ludditism does not.
Actual traffic enforcement does not seem to produce this result. This woman is fairly famous on Reddit for her erratic driving, and was reported in 2019 as having been involved in 31 crashes since 2000: https://www.wral.com/story/lawyer-stayumbl-driver-a-victim-o...
She is still driving (with a new license plate after 2019): https://old.reddit.com/r/bullcity/comments/1ji3y82/jesusdos_...
But if there's an existing system and culture of driving that has certain expectations built up over a century+ of collective behavior, and then you drop into that culture a new element that systematically brakes more suddenly and unexpectedly, regardless of whether the human drivers were doing the right thing beforehand, it is both reasonable and accurate to say that the introduction of the self-driving cars contributed significantly to the increase in crashes.
If they become ubiquitous, and retain this pattern, then over time, drivers will learn it. But it will take years—probably decades—and cause increased crashes due to this pattern during that time (assuming, again, that the pattern itself remains).
But traffic is heavy and you have no room to, for example, turn right.
So it's common practice to advance, little by little, until you're basically an obstacle and some other driver has no alternative than letting you in.
Otherwise, you'll spend much time waiting. To make things worse, the drivers behind you won't take this unending wait lightly either, so they'd get nervous and try to overtake you and all things get messy.
How do these systems face this situation?
Just because this car doesn't crash, that doesn't mean it doesn't cause crashes (with fatalities, injuries, or just property damage), and that's inherently much harder to measure.
You can only develop an effective heuristic function if you are actually taking into account all the meaningful inputs.
My question is open in that we don't really HAVE data to measure that statement in any meaningful way. The proper response is "that could be valid, we need to find a way to measure it".
Resorting to calling me a luddite because I question whether a metric is really an accurate measure of success (one that I apply to HUMAN drivers as an example first...) really doesn't reflect any sort of scientific approach or method I'm aware of, but feel free to point me to references.
German drivers are full of contradictions (me too), it's quite heartwarming :D
I live in sf. Waymos are far more predictable and less reckless than the meatwagons. They do not cause accidents with their occasionally odd behavior.
And to add another perspective - as a cyclist and pedestrian I put waymos even further ahead. I have had crashes due to misbehavior of cars - specifically poor lane keeping around curves - but waymos just don’t cause those sorts of problems
This doesn't seem to make sense. Surely a tram is still closer to it than a train. You're comparing the cost of trams to every home vs...the current number of trains? Why make that comparison?
Anecdotal of course but within my circle people are becoming Waymo first over other options almost entirely because of the better experience and perceived better driving. And parents in my circle also trust that a waymo won't mow them down in a crosswalk. Which is more than you can say for many drivers in SF.
Of course that means you don't have the data to compare Waymos with NYC drivers. Yet.
So instead, I want you to imagine the sea or a very large bay, the most wide-open space available. Then I want you to imagine a ferry that can carry 500 people: big, yes, but there's still plenty of sea and it's reasonably tranquil.
Now imagine 500 people on jet skis roaring around, then imagine that the jet skis and ferry are all trying to get somewhere different but that's relatively the same, perhaps commuting between the different sides of the bay.
If I was mayor I would put a stop to 500 jet skis and say: look, you have to use the ferry. People on jet skis keep colliding with each other, the noise is horrendous for people on the beach and makes swimming dangerous, and it's also wildly power inefficient when you step back, even if we ignore the pollution! And 1,000 spots to store 500 jet skis on both sides of the bay is perhaps even worse!
If you can make a sea packed and gridlocked with just 500, imagine what that does to a city with thousands, or hundreds of thousands. If you then turned to me and said "the jet skis drive themselves!", I would still think most people should be taking the ferry, and that there's an upper limit to sustainable jet ski use.
The result is too long to post here but here's a sample
"Chad Loder - Suspended November 2022. A left-wing activist identifying January 6 participants, Loder was banned after Musk reportedly pressured X’s trust and safety head, per Bloomberg. The content—exposing far-right figures Musk has since aligned with—may have clashed with his views, though no public Musk comment confirms this."
Although my brain can't help but see waymo's more as "Meatwagons" than human driven cars, I get your point :)
It would be curious to see relative levels of driver assist and its impacts on things like that outside of crash and injury statistics from crashes, but it would be very hard to measure and quantify.
Community tracking shows 2600+ miles between critical disengagements in California, where the mapping is probably the best (if we're going to make a fair comparison to Waymo). Most recent firmware shows 98% of trips have no disengagement at all in California, too. If you made the operating zones extremely tight like Waymo, I'm sure it'd do even better.
Your link states this:
> In the most ambiguous situations ...[it] requests [to humans] to optimize the driving path. [Humans] can influence the Waymo's path, whether indirectly through indicating lane closures, explicitly requesting the AV use a particular lane, or, in the most complex scenarios, explicitly proposing a path for the vehicle to consider.
It's literally a human drawing a line on the map telling the car where to go, in the most manual of ways. It's not an xbox steering wheel and driving remotely, but it's absolutely the same concept with a different interface, including a remote brake button.
However, the government still has to do its part and actually enforce insurance requirements.
My pet hypothesis is that there is a tipping point where the feedback loop between driver safety, ai advancements, and insurance costs will doom manually driven cars faster than most people think.
Additionally, Waymo's most recent quarterly report for California lists over 1300 incidents that had mandatory reporting by law. This includes 47 collisions, 40 of them being with another vehicle, 1 against a pedestrian, and 2 against bicycles:
https://www.cpuc.ca.gov/regulatory-services/licensing/transp...
That's for a single quarter, in very small sections of the state. If you made them operate globally like Tesla, there'd be thousands of these.
When customer support is 99% automated with the remaining 1% handled by remote humans it is often economically and societally the same as 100% automation.
The same applies to self-driving cars. If there's 99% automation and humans can handle the remaining 1% remotely, and if that combined system (including the handoff process) is safer in aggregate than a standard human driver, then you basically don't need drivers anymore.
You don't actually need 100% AI-only automation to make dramatic economic and societal difference. You just need 100% coverage through the human-AI combination.
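The mileage-weighted arithmetic behind that claim can be sketched directly; every rate below is a hypothetical illustration, not measured data:

```python
# Hypothetical incidents per million miles, for illustration only.
ai_rate      = 0.5   # AI handling its 99% of miles
human_rate   = 4.0   # remote humans handling the hard 1% of miles
handoff_rate = 0.1   # extra incidents attributable to the handoff itself
ai_share     = 0.99  # fraction of miles the AI covers alone

# Aggregate rate of the combined human-AI system is a mileage-weighted mix.
combined = ai_share * ai_rate + (1 - ai_share) * (human_rate + handoff_rate)

baseline_human = 3.0  # a standalone human driver, same units (assumed)
print(f"combined system: {combined:.3f} vs human baseline: {baseline_human:.3f}")
```

The point is structural: even if the fallback path is worse per mile than a human driver, it only applies to a sliver of miles, so the aggregate can still beat the baseline, provided the handoff itself doesn't dominate.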
Yep, when you get down to this root fact, it's nearly impossible to _actually_ stop someone from driving a car. If you make insurance mandatory, they will still not buy insurance. If you revoke their license, they will keep driving without it. If you fine them, they just won't pay. If they go to jail for it, they'll resume driving when they get out.
They are going to drive anyway, because in most of the USA, you need a car to get basically anywhere, including to work. So now instead of just being a bad driver, they're also unemployed and sitting in jail, which taxpayers are paying for. There are people with dozens of DUIs, totally uninsurable, their licenses pretty much permanently revoked, and they still drive every day.
Unless you have a specific claim and source for your claim?
Waymo’s crashes that I’ve looked at have been fairly typical: someone else blatantly at fault, no unusual behavior on Waymo’s part. So while it’s possible such a thing exists, it’s not common enough to matter here.
It's mandatory, and proof is required when you register your car. Your insurer also has a line to the DMV (the agency handling car registration) to say, "FYI this guy is not insured," and the DMV gets mad.
It's a known problem, particularly with undocumented peoples, that they are often uninsured. California studied the issue: https://www.insurance.ca.gov/01-consumers/105-type/95-guides...
In the report California states up to 13% of the US residents of some areas do not happen to possess documentation documenting their legality of being in the US. Often they came from countries with no insurance requirement, so they are unaware of American culture and policies in this regard. The report also states 10% of drivers are uninsured. I'm not sure why the DMV isn't getting mad in this case, being informed the car is not insured. So it's "mandatory" but 10% of drivers are not insured. Similar to how in California retail theft is technically "illegal" but a lot of people will do that without consequences. Honestly if you ask me we need to be waiving the insurance requirement for cultural reasons and take a verbal Spanish-first policy to help accommodate people who have undocumented English skills or are without documentation of being literate.
Each point lasts for 3 years, and if you accumulate more than 8 you lose your license for 6 months.
A speeding ticket is at least two points, and running a red light or tailgating is three for example. You get double points the first two years after getting your license.
[1]: https://www.vegvesen.no/en/driving-licences/driving-licence-...
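The scheme described above (points lasting roughly 3 years, suspension once you exceed 8) can be sketched as a small ledger; the thresholds follow the comment, while the dates, offences, and the 3-year approximation as 1095 days are my own invention:

```python
from datetime import date, timedelta

POINT_LIFETIME = timedelta(days=3 * 365)  # ~3 years, leap days ignored
SUSPENSION_THRESHOLD = 8  # more than 8 active points -> 6-month suspension

def active_points(offences, today):
    """Sum points from offences still within their ~3-year lifetime.
    offences: list of (date, points) tuples."""
    return sum(p for d, p in offences if today - d < POINT_LIFETIME)

# Invented example history: one expired offence, two still active.
offences = [
    (date(2022, 1, 10), 2),  # speeding (expired by 2025-03-01)
    (date(2024, 6, 1), 3),   # running a red light
    (date(2024, 9, 15), 3),  # tailgating
]
today = date(2025, 3, 1)
points = active_points(offences, today)
suspended = points > SUSPENSION_THRESHOLD
print(f"active points: {points}, suspended: {suspended}")
```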
I don’t agree with making driving something only the wealthy do, though.
City driving is very chaotic. Though speeds tend to be lower so likely accidents would be just fender benders. They don't operate on freeways.
This would probably just cause more uninsured drivers. For California, that's around 17% [1]!
[1] https://www.iii.org/fact-statistic/facts-statistics-uninsure...
While tailgating is a tiny slice of fatal collisions -- something like 2% -- it accounts for like 1/3 of non-fatal collisions.
We're already basically at Peak Tailgating Collisions, without self-driving cars, and I'd happily put a tenner on rear-end collisions going down with self-driving cars because, even if they stop suddenly more often, at least they don't tailgate.
And it's entirely self-inflicted! You can just not tailgate; it's not even like tailgating lets you go faster, it just lets you go the exact same speed 200 feet down the road.
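A minimal sketch of why the gap matters: if both cars brake equally well, the distance you need is dominated by how far you travel during your reaction time (simplified kinematics; the 1.5 s reaction time is an assumption, not a standard):

```python
def min_following_gap_m(speed_kmh, reaction_s=1.5):
    """Minimum gap (metres) so you can stop if the car ahead brakes hard.
    Simplified: assumes both cars brake equally well, so the gap only
    needs to cover the distance travelled during your reaction time."""
    speed_ms = speed_kmh / 3.6  # convert km/h to m/s
    return speed_ms * reaction_s

for v in (50, 100, 130):
    print(f"{v} km/h -> {min_following_gap_m(v):.1f} m minimum gap")
```

A tailgater at 100 km/h sitting 10 m back has given up essentially the entire margin this model says they need.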
The only solution to that is probably to only let self driving cars onto the road, in an all-or-nothing solution.
That's not telling you what you think it is. A lot of those deaths are that person in a car on their own. Usually involving drugs or alcohol. It intentionally folds in "deaths caused by others" and "death caused by self" into the same category. It's not an appropriate statistic to base policy on.
> If you prevent a road accident fatality, chances are that person will go on to live many more healthy years/decades.
Chances are that person is going to kill themselves in a vehicle again later as you have failed to examine MODE of accident. Your analysis is entirely wrong.
I've only been in a handful of Waymo rides, but in each case it's been about half the price of an Uber.
The majority of those people who had their lives cut short cut it short themselves and didn't take anyone with them.
Likewise, that 40k includes 6k pedestrians and 6k motorcyclists.
You can't just take the 40,000 figure and do _anything_ with it because there are so many peculiar modes of accidents which /dominate/ that data set.
If you're including disengagements in the 1300 "incidents", then it's highly misleading. As you said, it's only 47 collisions over millions of miles that also includes collisions in manual mode during testing. If you look at the collision reports [1], most of them are Waymos getting rear ended while being stationary. Remember, they have to report every contact event, including minor contacts like debris hitting their cars [2].
Tesla likely has orders of magnitude more incidents. The thing with them is that they don't report any of these numbers. Tesla doesn't even count crashes in their (highly misleading) safety report that don't deploy airbags.
[1] https://www.dmv.ca.gov/portal/vehicle-industry-services/auto...
I am willing to experiment in many ways with things in my life, but not WITH my life.
https://www.iihs.org/news/detail/new-crash-test-spotlights-l...
There are _so many_ bad assumptions about vehicle safety it honestly drives me nuts. Especially on Hacker News. The data is available from NHTSA in a database called FARS. I encourage everyone to go look through the data. You almost certainly believe several wrong things about driving and fatalities.
I think Elon Musk is exceptionally irresponsible for using these statistics in a flatly dishonest and misleading way. He wants to sell vehicles not truly educate you about safety. People should double check.
This isn't a fair comparison either. FSD is used a lot on highways where crash rates are lower (and hence disengagements will be too). Waymo doesn't go on highways yet and can already go 17k miles without intervention (with a safety driver) in places harder than SF and LA where they're already driverless.
> It's not an xbox steering wheel and driving remotely, but it's absolutely the same concept with a different interface, including a remote brake button.
How do you know they have a "remote brake button"? Waymo's blog makes no mention of any such thing. They categorically say remote operators have no control over the vehicle.
I think you're deliberately trying to mislead with your comments here by slipping in something false with known facts.
> However, the government still has to do its part and actually enforce insurance requirements.
Honestly, arrest them. Someone willing to operate a motor vehicle without a license is one step away from manslaughter.
Technology famously has a linear adoption curve, and convenience is famously not something that drives adoption /s
> We already have that. It's called public transport, walkability, bikeability.
Do we have that though? In the US, mostly not. So what's the path? Hoping that sprawled out cities somehow magically get the political will to build $billions in light rail? What do you think is the path of least resistance to these goal states?
> Even in your dream scenario, 50 or so years from now, cars would still have a lot of the same downsides they have today of using way too much space and causing way too much pollution per person for the utility they provide.
Read other comments, don't get stuck on the notion of 'cars' as-is.
I suspect there are plenty of undocumented immigrants in states that don't have the equivalent of AB 60 licenses who are perfectly safe drivers. Perhaps even safer than licensed drivers, since they have more to lose from moving violations.
Sure - going off of NHTSA figures it looks like around 35%. There's also a lot of car passenger deaths (~15%), pedestrian deaths (~20%), and deaths of car drivers with passengers (~15%).
Not entirely sure the point of breaking it out like this, though. These are all still deaths that self-driving cars could in theory prevent, and so all seem appropriate to consider and base policy on.
> Chances are that person is going to kill themselves in a vehicle again later [...]
Unsafe drivers (under the influence, distracted, etc.) are disproportionately represented in fatalities, but that neither means most road accident fatalities are unsafe drivers nor that most unsafe drivers will have a fatal car crash. As far as I can tell, even a driver using amphetamines (increasing risk of a fatal crash 5X) still isn't more likely than not to die in a car crash (a very high bar).
Further, if the way the initial fatal crash was prevented was by prevalence of safe autonomous vehicles, the future crashes would also be similarly mitigated.
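The base-rate point can be made concrete. Treating a ~1% lifetime risk of dying in a crash as a rough ballpark (an assumption for illustration, not a sourced figure), even a 5X relative risk stays far below "more likely than not":

```python
# Rough ballpark lifetime risk of dying in a crash (assumption, ~1 in 100).
baseline_lifetime_risk = 0.01
relative_risk = 5.0  # e.g. the cited 5X figure for amphetamine use

# Elevated absolute risk, capped at certainty.
elevated_risk = min(1.0, baseline_lifetime_risk * relative_risk)
print(f"elevated lifetime risk: {elevated_risk:.0%}")
```

Multiplying a small base rate by a large relative risk still yields a small absolute risk, which is exactly the comment's "very high bar" point.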
It seems like Volvo's reputation as one of the safest cars is still well deserved after all. I don't own a Volvo--too expensive for me, but good to know.
As someone in the industry, I find the LiDAR discussion distracting from meaningful discussions about redundancy and testing.
Forcing people to take public transit that is any worse than NYC subway will definitely and rightfully lose an election for that party. Building such a system at modern American construction costs will also lose an election. What is left to do but embrace autonomous driving as the first step toward retrofitting American cities to be slightly more people friendly?
Besides, this is the decadent west, we can afford for people to use more resources for more comfort. Even the well off in China have embraced cars as mobile living rooms.
Focusing on rollout, municipal light rail almost never gets deployed in US-style cities due to huge capex, not opex. Smaller vehicles allow incremental roll-out and can use preexisting road infrastructure. Ergo, that's the form of public transit you're most likely to see grow over the next decades.
Drone here doesn't imply flying, it's about scaling down wheeled vehicles and the coexistence of a wider variety of vehicle sizes on roads that is unlocked by the automated movement of mass. Delivery to higher up floors can be done through small in-building elevators. If you think that's unrealistic, consider that it was once extremely popular to use pneumatic tubes to send mail in buildings. Built infrastructure changes based on what is possible, and mass needs to move.
I think the best evidence of that is how uber/lyft has to use grey-ish patterns to get you to choose upmarket options. They don’t list the fares sorted by price or even list the options in a consistent order, they will strongly suggest upsells like comfort or black or whatever tier they think gives the best chance of convincing you to pay more than the bare minimum.
They also upsell faster pickup which I have to think is a way better value proposition than sitting in a nicer car temporarily.
There are a great number of examples where that’s not true. Cookie store chains like Crumbl are a really good example. All the economies of scale stuff with them backfires. The product is too low price and too simple to make in batches, so the businesses with the best margins are ones that avoid traditional brick and mortar rent and don’t hire employees.
In the same way, an uber or taxi’s labor cost seems like it’s a huge scaling problem that needs to be resolved but really think about the costs involved with creating that scale to replace them.
Let’s not forget that at Waymo they still need a human to clean, fix, and charge/gas up, interact with customers and police, resolve driving edge cases, etc, all costs that a human driver essentially includes with their pay and does for “free.” Then you’ve got car storage and the capital expense of the vehicle that the uber driver heavily subsidizes and splits between business and personal use.
Basically, Waymo is looking to compete using their very complex and sophisticated solution in a market where its competitors are hiring lowest bidder temporary contractors.
Preach.
I was coming home a few evenings ago in the dark, and both I and my passenger were getting continually aggravated by the car that was following too close behind us, with their headlights reflecting in the wing mirrors alternately into each of our faces.
They kept that up for at least 10 miles.
The moment someone suggests enforcement of a law someone comes running in yelling about how it's regressive and will disproportionately affect the poor, and by extension "only the wealthy" will be able to do whatever.
Everything disproportionately affects the poor because it's very hard to be poor.
And the moment you say we shouldn't enforce laws because it will make poor peoples' lives harder you are saying that something is no longer a privilege. That poor people should be able to break the law with lesser or no consequence because they are poor.
Uber drivers are already paid low wages and any price competition can lower their wages further.
Waymo has to pay for things that “come with” uber drivers: the cars, storage for the cars, employees to clean and maintain the cars, extra infrastructure to support the self driving cars like cellular data for each car, data centers, engineers, customer service to interact with police and resolve edge cases (will never go away). Waymo also has to pay all these people healthcare benefits and pay W2 payroll, not a thing for Uber.
Waymo is like a professional moving company competing on price with an army of lowest bidder independent contractors who already have a beat up graffiti van.
With all respect, no. You don't treat every possible hypothesis as potentially valid. That's conspiracy logic. Valid ideas are testable ones. If you're not measuring it, then the "proper" response is to keep silent, or propose an actual measurement.
And likewise a proper response is emphatically not to respond to an actual measurement within a commonly accepted paradigm (c.f. the linked headline above) with a response like "well this may not be measuring the right thing so I'm going to ignore it". That is pretty much the definition of ludditism.
If a collision occurs because of bad road/intersection design then it wasn't all that accidental after all - it was a statistical inevitability.
See also: https://en.wikipedia.org/wiki/Traffic_collision#Criticism_of...
An HW4 Tesla stopped before a wall painted to look like a road: https://futurism.com/someone-else-tested-tesla-crash-wall-pa...
Wrong: you consider and reject hypotheses, if we're being specific about the scientific method. In this case, this is a testable question that could be measured, but common metrics don't measure it accurately enough for humans to compare against. There is no valid rejection of the hypothesis without more data.
The utility of the hypothesis and the work required is one of many things considered and another discussion.
But actually considering and discussing them IS the scientific and rational method. Your knee-jerk reactions are the closest thing to ludditism in this whole conversation.
>And likewise a proper response is emphatically not to respond to an actual measurement within a commonly accepted paradigm (c.f. the linked headline above) with a response like "well this may not be measuring the right thing so I'm going to ignore it". That is pretty much the definition of ludditism.
Again wrong. In almost every case the correct first question is "are we measuring the right thing?" Again, if we are talking about engineering and science, that's always valid and should ALWAYS be considered. I also never said we should IGNORE crashes; I asked if it's the BEST metric for success on its own.
And for your third incorrect point:
>That is pretty much the definition of ludditism.
You've obviously missed my point in every post, including the one above. The question of whether "crashes" is the best metric is being applied to both humans and technology; there is no anti-technology sentiment going on here.
Your emotional reaction to someone questioning something you obviously care about seems to have shut down your logical brain. Take a deep breath and just stop digging.