Auto(nomous Vehicle)s
TLDR: AVs are very safe, but this is easier to see in the specific cases than in the statistics, which are complex and can be warped both for and against. Lidar is worth it (for now).
Prerequisites: None
Before we begin, I’d like to take a moment to borrow an idea from CGP Grey: let’s use the term “autos” to talk about self-driving cars, robotaxis, and other autonomous vehicles. It’s short and descriptive, and isn’t a huge jump from how the word is already used. For the sake of both brevity and to be the change I want to see in the world, I’ll be using it as the main word in this essay. (The word “robotaxi” also seems like a good word, but I’d like to reserve it for specifically referring to an auto that drives people around as part of a business.)
Fully autonomous1 vehicles have been involved in approximately 43 reported collisions per month in the USA since the government started requiring all incidents to be reported in June of 2021. Among these collisions, two involved human2 fatalities:
January 2025: An elderly man, connected to a previous set of hit-and-runs, was speeding through the middle of San Francisco when he crashed and caused a 7-car pileup, killing a young man and a dog, and severely injuring several others. An unoccupied Waymo, stopped in traffic, was one of the vehicles that was hit. Waymo reports that the vehicle’s sensor logs show the man was driving at close to 100 mph (160 kph).
September 2025: At 1:21am near Arizona State University, a Waymo (with its turn signal on) slowed down to yield to pedestrians crossing the road that it was planning to turn onto. An 18-year-old on a motorcycle behind the auto failed to slow down fast enough and rear-ended the auto, spinning off into the adjacent lane, where he was then hit by a human driver. The 19-year-old driver who hit the motorcyclist fled the scene, but Waymo was able to provide her license plates to the police and she soon turned herself in. The motorcyclist died in the hospital. Waymo was found to be not at fault.
Looking at just this data, one might be inclined to say that self-driving cars have never been responsible for killing anyone. But is this the whole picture? What about outside the USA? What about before 2021? When I search for “autonomous vehicle fatalities” the top hit is Craft Law Firm’s report, which Google summarizes as “Of the 3,979 autonomous vehicle incidents that occurred from 2019 to June 2024, there were 496 injuries and fatalities.” Going to the site gives the more-specific quote: “There have been 83 fatalities related to autonomous vehicle accidents…” Craft Law Firm is very clearly analyzing the same government database as I am. So what’s going on?
First, while there have been some close-calls outside the USA, there have been zero fatal crashes involving autos in the broader world, mostly due to America being the front-runner in autonomous vehicle development.
As for the 83 vs 0 discrepancy, we can get a better handle on it by considering the two fatal crashes that are listed on the site as examples:
March 2018: A woman was pushing a bicycle across an unlit 4-lane road (45 mph speed limit) in Tempe, Arizona. Uber was testing an autonomous vehicle prototype that had a human driver behind the wheel who was supposed to be watching for dangers and taking control in case of emergency, but at the moment in question the driver was watching a TV show on her phone. The Uber struck the pedestrian, who later died in the hospital. Uber was ruled to be not criminally responsible for the crash, but the human driver was charged and pleaded guilty.
March 2018:3 A man driving a Tesla Model X on Highway 101 engaged the car’s traffic-aware cruise control and lane-following “Autopilot” feature (distinct from “Full Self Driving”), which requires the driver to watch for hazards and be ready to take over, and set the desired speed to ~70 mph. The driver was playing a game on his phone when the highway diverged in a slightly unusual way, confusing the software and causing the vehicle to accelerate into a concrete barrier, killing the driver. While Tesla fought the case in court for years, they eventually settled with the driver’s family.
These two cases (which, again, I didn’t pick) are both examples of human negligence in supervising vehicles that were not intended to be used in a fully autonomous way. Regardless of whether their hardware was good enough (we’ll get into that later), neither vehicle had software that was sufficiently robust and capable of handling edge-case situations. The companies knew this. It’s why Tesla insists that drivers keep their eyes on the road and stay ready to take over at all times when using Autopilot, and why Uber had a back-up safety driver in their prototype vehicle.
More broadly, there is a distinction between “advanced driver assistance systems” (ADAS), which are meant to have a human behind the wheel who is paying attention at all times, and systems like Waymo, which are full autos and theoretically capable of operating safely without any human driver. ADAS are extremely common these days, and because they rely on the driver paying attention, can easily be “the cause” of an accident. It is from these half-measure systems, like Teslas (and the Uber prototype), that we get the 83 fatalities. It is genuinely the case that a fully autonomous driving system — one that has been engineered to a standard where it does not depend on human oversight — has never been responsible for someone’s death.
Safety Stats
But what about non-fatal crashes? Just because autos haven’t really killed anyone yet doesn’t mean they’re safe. After all, full autos are new and rare, restricted to a few geofenced urban areas with nice weather and speed limits that make collisions less deadly. To really judge safety, we need to also look at how common autos are, what kinds of conditions they’re being used in, and what the statistics beyond fatalities can tell us.
I crunched the numbers, and by my analysis,4 there are 2,372 unique auto collision reports in the database, of which at least 88% are from these top three companies:
Waymo (Alphabet/Google) — 1670 reports — over 167 million miles on the road5
Cruise (bought by GM; now defunct) — 304 reports — ~5 million miles on the road
Zoox (Amazon) — 115 reports — somewhere around 5 million miles on the road?6
By this estimate, Waymo has around 10 reported collisions per million miles, Cruise had 60, and Zoox has around 20. While my estimate for the Zoox mileage might be off by enough that they’re actually comparable to Waymo, Cruise was clearly worse. In addition to seeing this in the raw statistics, it’s probably worth mentioning an incident in October 2023 when a woman crossing at a crosswalk in San Francisco was struck by a hit-and-run (human) driver and pushed into the path of a Cruise robotaxi. The Cruise auto not only ran her over, but proceeded to stupidly drag her over 20 feet before it pulled over. The woman, thankfully, survived, but Cruise tried to cover up the fact that their car dragged the woman, only later admitting the error, having their permits revoked in California, and paying $612,500 in fines.7
Even focusing purely on Waymo, these safety stats don’t look so great when we compare them to human drivers. Each year in the USA, there are approximately 6 million car crashes reported to police, which result in about 40 thousand deaths. Since Americans drive about 3 trillion miles a year, that’d only be about 2 fatalities per ~150 million miles, which is a low enough number that the “zero deaths” from autos might just be an artifact of good luck, low speeds due to urban settings, and bias in how I’m judging things (e.g. not including the Uber prototype or the ASU motorcyclist). And more significantly, if we do the naive math on collisions, we might conclude that Waymos get into more than 5x as many crashes, since humans only produce about 2 reported collisions per million miles.
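For concreteness, here’s the naive math as a quick Python sketch (every figure is one of the rough estimates above, not authoritative data):

```python
# Naive back-of-envelope comparison using the rough figures above.
# All numbers are estimates quoted in this essay, not authoritative data.

human_miles = 3e12      # ~3 trillion miles driven per year in the USA
human_crashes = 6e6     # ~6 million police-reported crashes per year
human_deaths = 40e3     # ~40 thousand deaths per year

human_crashes_per_M_miles = human_crashes / (human_miles / 1e6)      # ~2
human_deaths_per_150M_miles = human_deaths / (human_miles / 150e6)   # ~2

# (reports, estimated miles) from the government database and the mileage estimates above
fleets = {"Waymo": (1670, 167e6), "Cruise": (304, 5e6), "Zoox": (115, 5e6)}
for name, (reports, miles) in fleets.items():
    print(f"{name}: ~{reports / (miles / 1e6):.0f} reported collisions per million miles")

print(f"Humans: ~{human_crashes_per_M_miles:.0f} reported crashes per million miles")
print(f"Humans: ~{human_deaths_per_150M_miles:.0f} deaths per 150 million miles")
# Waymo's ~10 vs humans' ~2 is the naive ">5x as many crashes" figure
```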
But can this really be correct? Waymo released a safety report in October which tells a very different story. It claims a “90% reduction in crashes that cause a serious injury or worse,” “81% fewer injury-causing collisions overall,” and “82% fewer” crashes with an airbag deployment8 in any vehicle. That’s a missing factor of about 25 between Waymo’s claims and the naive data above! What are we to make of this gap?
First, let’s try to get closer to an apples-to-apples comparison. If I filter down to collisions where “Highest Injury Severity Alleged” is either “Serious” or “Fatality,” I get six Cruise incidents and six Waymo incidents, including the two fatalities I mentioned at the start of this essay. Six incidents in 127 million9 miles is ~0.05 per million miles, which is more than double the 0.02 that the report cites. It turns out that Waymo is throwing out three incidents that are showing up in my data:
February 2025: At 3:39am, a Waymo in Chandler, Arizona was crossing an intersection. An SUV in a dedicated turn lane beside the Waymo continued straight, went up onto the sidewalk, crashed into a street light, and spun out into the lane the Waymo was in, leading the Waymo to rear-end it. The driver of the SUV was transported to a local hospital for treatment. The Waymo passengers (despite not being buckled up!10) were mostly fine.
October 2024: A Waymo in San Francisco came to a stop in a queue of traffic for a red light, in the rightmost lane of an intersection. A passenger car crossed the double yellow line and crashed into an SUV that was in the left lane, beside the Waymo, pushing the SUV into the Waymo. Someone involved went to the hospital, but the report doesn’t say who.
May 2024: At 12:58am, a Waymo in Los Angeles was being tested on the freeway. A speeding car tried to pass the Waymo, but didn’t fully leave the lane when doing so, clipped the Waymo’s rear left corner, and spun out into the median. Again, the report doesn’t say who went to the hospital, but I have a guess.
To my eyes, none of these are Waymo’s fault, with the potential exception of rear-ending the car that crashed into the street light. Still, the methodology section of Waymo’s report says “this analysis included all collisions, regardless of the party at fault and Waymo’s responsibility,” and Waymo does include the two fatal incidents I mentioned at the start of the essay (as well as a November 2023 incident where a driver ran a red light). I have no explanation for why Waymo only seems to be counting half the incidents. The report claims that human drivers have a serious-injury collision rate of 0.23 per million miles, so they’d still be able to claim a 79% reduction even with all six incidents included.
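Spelling out that arithmetic (using my incident counts, Waymo’s 127 million rider-only miles, and their 0.23 human baseline, which I fact-check next):

```python
waymo_miles_M = 127            # million rider-only miles, per Waymo's safety report
serious_all = 6                # "Serious" or "Fatality" incidents I find in the database
serious_counted = 3            # what the report appears to count
human_rate = 0.23              # Waymo's figure for human drivers (fact-checked below)

rate_all = serious_all / waymo_miles_M          # ~0.047 per million miles
rate_counted = serious_counted / waymo_miles_M  # ~0.024 per million miles (the cited 0.02)

print(f"All six:    {rate_all:.3f}/M mi -> {1 - rate_all / human_rate:.0%} reduction")
print(f"Only three: {rate_counted:.3f}/M mi -> {1 - rate_counted / human_rate:.0%} reduction")
# ~79% reduction even counting all six incidents, vs ~90% counting only three
```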
Let’s take a moment to fact-check that statistic of 0.23 serious-injury collisions per million (human) miles driven. Waymo says it comes from two papers written by their safety team: Scanlon et al. (2024) did a base analysis, and Kusano et al. (2025) extended it. To get this figure, Scanlon et al. went through government databases for crash reports and restricted their analysis to surface streets (and excluded large trucks). They found approximately 130 thousand serious crashes per year and estimated that 2.14 trillion miles were driven on surface streets per year. These numbers look about right, and account for the fact that urban environments have more crashes but fewer serious ones. Naively, this gives a serious incident rate of only 0.06 per million miles.
To bring this number up, the Waymo analysts make two adjustments:
Each collision involves an average of 1.75 vehicles.
Imagine if you’re counting “handshakes with (at least one person named) Bob” per person named Bob (analogous to auto collisions per autonomous mile) and comparing that number to the number of handshakes per person (analogous to collisions per vehicle mile). Because most people aren’t named Bob, the first number will mostly be the average number of handshakes people named Bob make, but because each handshake requires two people, the total number of handshakes per person will be about half of the Bob-specific number.
(Interestingly, over 81.8% of the autonomous collisions involved a human vehicle. The 1.75 vehicles-per-collision figure implies that only about 75% of human collisions involve a second vehicle, so if we naively assume that each of those auto collisions involves just one other car (which is an undercount, as evidenced by our first example fatality), then autos are only about 70% as likely as a human driver to collide with a stationary object or pedestrian, per collision. I think this makes sense, and follows from autos having fewer blind-spots and never being drunk.)
When Scanlon et al. adjust for this factor, they get a rate of 0.11 per million miles. Kusano et al. do a fancy spatial analysis that tracks which parts of the relevant cities are dangerous and how much time drivers spend in those areas, and conclude that if human drivers drove on the same streets that Waymos are tackling each day, the actual incidence rate would be closer to 0.24 per million miles.
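Putting the fact-check into code (the “spatial factor” at the end is simply whatever multiplier is implied by going from Scanlon’s 0.11 to Kusano’s 0.24):

```python
serious_crashes_per_year = 130e3    # human serious-injury crashes on surface streets
surface_street_miles = 2.14e12      # human miles driven on surface streets per year

base_rate = serious_crashes_per_year / (surface_street_miles / 1e6)  # ~0.06 per million miles

vehicles_per_collision = 1.75
scanlon_rate = base_rate * vehicles_per_collision                    # ~0.11 per million miles

implied_kusano_factor = 0.24 / scanlon_rate                          # ~2.3x
print(f"{base_rate:.3f} -> {scanlon_rate:.3f} -> 0.24 (implied spatial factor ~{implied_kusano_factor:.1f}x)")
```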
After skimming both papers and thinking about it, I trust the adjustment from Scanlon and I don’t trust the one in Kusano. Even if the analysis is correct in some sense, there’s a selection effect in this particular adjustment getting applied (and not other adjustments that would push the bottom line in the opposite direction).
But even after removing the factor of 2 from Kusano, and the factor of 2 from Waymo’s data randomly missing half of the incidents, we get a serious collision rate of 0.05 vs 0.11 per million miles, meaning Waymos probably get into less than half as many serious crashes as humans.
Returning to our earlier 25x gap, if we take the analysis of serious incidents as indicative, then we can explain perhaps a factor of 4 by Waymo’s thumb on the analytical scale, and a factor of 2 from the Scanlon vehicles-per-collision adjustment. But this still leaves a missing factor of about 3x.
The answer is simple: over half of collisions involving human-operated vehicles do not get reported to police. The reporting requirements on autos are very strict, and cover even basic dings, including with stationary objects. Just going on my personal experience, I have way more than 3 times as many unreported collisions as ones that are serious enough to involve the police or even insurance.
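So the rough reconciliation of the 25x gap looks like this, with the caveat that each factor is a loose estimate:

```python
# naive gap: ~5x worse (raw collision counts) vs ~5x better (Waymo's report) ≈ 25x
waymo_thumb = 4        # dropped incidents (~2x) times the Kusano adjustment (~2x)
scanlon_factor = 2     # ~1.75 vehicles per collision, rounded up
underreporting = 3     # human collisions that never make it into police reports

print(waymo_thumb * scanlon_factor * underreporting)  # 24, close enough to 25
```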
So, my best guess at the bottom-line, apples-to-apples comparison is that Waymo is juicing their safety stats, and that they only get into collisions slightly less often than human drivers. But, I think it’s also clear when looking at the individual case reports that almost all serious incidents are the result of human error from other drivers on the road! This means the bottom-line of Waymo’s safety report is still basically right, regardless of what’s going on with their analysis — the USA could likely save tens of thousands of lives each year by adopting autos at large scale. It’s hard to say exactly how much safer the roads would be, but if the technology continues to mature, I think autos could very plausibly reduce fatalities and serious injuries by over 80%.

The Lidar Wars
Let’s return to the discrepancy between fully autonomous vehicles, like Waymos, and driver-assistance systems, like Teslas. Just looking at them, a big difference stands out: Waymos have a giant, spinning machine on top (and other add-ons). The gadget is an omnidirectional lidar sensor — a way of mapping the auto’s surroundings by shining infrared laser light and watching for reflections. Teslas, on the other hand, are mostly just equipped with normal cameras. Elon Musk has said “Lidar is a fool’s errand. Anyone relying on lidar is doomed. Doomed! [It’s] expensive [and] unnecessary.”
Is Musk right? How expensive and/or necessary are the various sensors that Waymo (and nearly every other auto company) adds to their vehicles?
Musk’s quote is from April 2019. Just one month earlier, Waymo had announced that they had successfully reduced the unit cost of their omnidirectional lidar sensor by 90% — from $75k to $7.5k. Since then, prices have dropped a little, but not a ton. One estimate puts the total sensor cost of a 5th-generation Waymo at $9,300; the 6th generation now in production is supposedly cheaper, but my guess is that it’s around the same. By contrast, the cameras on a Tesla cost only a few hundred dollars.
Let’s turn to whether lidar is necessary. Unfortunately, it’s not really possible to directly compare Teslas and Waymos, for a bunch of reasons. Most fundamentally, any gap in the safety statistics might be from a software edge, or even just a difference in the basic characteristics of the vehicles, like braking ability. But also, Tesla “Full Self Driving” requires a human driver behind the wheel, and that introduces a cluster of complications, such as a bias where the driver is most likely to engage the system in contexts where they think it’s safe, or where accidents per mile are lower than average (such as the freeway). And when the user is forced to step in due to a critical disengagement (~360 per million miles), would the car have crashed otherwise?
Tesla’s own report says 0.2 “major” collisions per million miles. Unfortunately, they don’t use the same notion of severity as either Waymo or the government data we were looking at before. And we can’t easily check it against the government data, because that data doesn’t distinguish between the premium FSD software11 and the primitive “Autopilot” included for all drivers. In theory we could check Tesla’s experiments in deploying robotaxis in Austin, Texas, but that program is just a few months old and doesn’t have any public statistics.
What we can do is compare that to their estimated human rate of 1.4 major collisions per million miles, giving approximately a 7x reduction — right in line with Waymo’s claimed safety benefits. For minor collisions the story is the same. But I notice that the report claims that (human) Tesla drivers are between 1.4 and 3.2 times safer than average. This is… false. Tesla drivers have the highest accident rate of any brand of car in the USA.12 No explanation is given for the discrepancy, and in general, Tesla has a poor record of accurately reflecting reality.
Let’s zoom out. The average vehicle travels 156 thousand miles in its lifespan. If we estimate ~$500B in total damages from collisions per year in the USA, then, combined with the 3 trillion miles traveled per year, we can calculate that the typical vehicle causes about $26,000 in expected damage over its lifetime. Suppose that vision alone can reduce the accident rate by 60%, and that with lidar it can go down by 85%. That’s an expected savings of $6,500 from having a lidar sensor.
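Here’s that back-of-envelope calculation, with the caveat that the 60% and 85% reduction figures are suppositions rather than measurements:

```python
lifetime_miles = 156e3            # average vehicle lifespan
annual_damage_usd = 500e9         # ~$500B in collision damages per year (estimate)
annual_miles = 3e12               # ~3 trillion miles driven per year

damage_per_mile = annual_damage_usd / annual_miles        # ~$0.17 per mile
lifetime_damage = damage_per_mile * lifetime_miles        # ~$26,000 per vehicle

vision_reduction = 0.60           # supposed reduction from cameras + software alone
lidar_reduction = 0.85            # supposed reduction with lidar added
lidar_savings = lifetime_damage * (lidar_reduction - vision_reduction)

print(f"lifetime damage ≈ ${lifetime_damage:,.0f}, lidar savings ≈ ${lidar_savings:,.0f}")
```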
In other words, despite Tesla’s sketchy statistics, I conditionally agree with Elon Musk. If you can reach human parity purely through software (and I see no reason you couldn’t), this could plausibly near-eliminate drunk, distracted, reckless, and sleepy driving, which together account for the significant majority of damage. While lidar has some promise of making vehicles even safer, we either need to learn to make it significantly more cheaply (perhaps by switching to solid-state devices) or we need to value human lives/safety more highly for it to be worth it in the long run.13
In the short run, however, I think Tesla is almost certainly on the wrong track. Regardless of where the technology is at, and whether they have an AI that can drive from camera data as well as a human, there is a huge amount of social resistance and fear around self-driving vehicles. Autos have literally killed zero people, but people are still working to restrict their deployment.
The first time a Waymo is genuinely at-fault in a car crash, the costs will be far greater than just the loss of life and damage to property. Waymo’s reputation will suffer a huge blow, and with it will come a massive obstacle to the continued rollout of a technology that is actively saving lives. Given this environment, it is wise for companies to err heavily on the side of safety and caution, even if it’s a little pricey.
Utopian Autos
Urbanists love to criticize US car culture. The solution, they say, is public transit — trains, buses, ferries, and trams — plus paths and sidewalks, of course. I agree that these are great ways to structure dense cities. But even in places with greater investments in public transit, like the densest parts of Tokyo or Hong Kong (two of the least car-centric cities), roads and trucks are still a vital part of last-mile logistics — stocking stores, picking up trash, and facilitating construction work. And as long as these roads exist for buses and essential services, it also makes sense to use them for private car rides.
Outside the dense urban core, cars become even more essential. But the average vehicle stands idle over 95% of the time. These unused vehicles take up space, clogging roads with street parking and consuming 22% of the land in major US cities with sprawling parking lots and expensive garages.
In Utopia, things are more efficient.
Thanks to widespread numeracy, and greater levels of technological enthusiasm, investment capital, and overall wealth, Utopia has a road network where human drivers are only a small minority. Even in rural areas where parking is cheap and access to cars is essential, Utopian citizens default to hiring robotaxis as needed, rather than owning a vehicle themselves. Robotaxi providers form long-term relationships with customers, and provide guarantees that commuters can get picked up without delay at known points in their daily routine.14 Thanks to this default, children, elders, partiers, and other people who have difficulty driving have a higher level of mobility than in our world. And thanks to having lots of autos available in off-peak hours, people who travel at unusual times of the day often pay very little.
Having far fewer humans operating deadly machinery means the roads are also safer for cyclists and pedestrians. Utopian autos take a little longer to get from place to place than vehicles in our world, in part due to better adherence to speed limits. But, thanks to not having to pay attention to the road, these longer commutes actually involve more life lived.
What does “fully autonomous” mean? This blogger, for instance, argues that Waymos aren’t fully autonomous because they can’t operate everywhere and still frequently need human assistance.
Indeed, it’s true that Waymo employees frequently intervene and even tele-operate their cars. One source I found indicates that there is a 1:1 ratio of fleet monitors to vehicles — that each car has a remote driver! Waymo is deliberately cagey and doesn’t give concrete numbers, which is a bad sign. Even if they did, there is a question of whether to believe them. Cruise claimed “During driverless operations there was roughly 1 remote assistant agent for every 15-20 driverless AVs” back in 2023, but the NYT later alleged that the real ratio was more like 3:2. This Goldman Sachs report estimates 3 robotaxis per remote operator. Might this explain why the vehicles are carefully geofenced to places that have cell signal?
Almost certainly not. Genuinely remote-piloting a vehicle is dangerous. If there is a network outage, the Waymo would need to take over and safely come to a stop, at the very least. This help page (under “Why can’t someone just remotely take over driving?”) claims that Waymo’s remote assistants can’t fully take over, even if they wanted to. (My guess is that they can at very low speeds, but can’t operate the vehicle in a normal way.) Yes, they could be lying, but at some point we enter conspiracy theory territory where a company that employs thousands of people needs to keep an easily-discoverable scandal a secret. When the NYT said Cruise had “1.5 workers per vehicle,” my guess is that this includes not just remote assistants (both on and off duty), but also technicians, managers, cleaning people, and so on.
If we estimate that:
Remote assistance is called about once every 500 miles.
The average auto drives at 25 mph, including time spent stopped.
Each ticket takes a mean of 12 minutes to resolve, with the mean mostly driven by a heavy tail of serious issues.
Then 0.05 tickets are generated per car per hour, demanding 0.01 hours of remote assistant time per vehicle-hour. This would mean that if each remote operator handled 100 autos, they’d theoretically be able to balance the ticket load. Unfortunately, the tickets are going to come in bursts at particular points of the day, and it seems reasonable to have 10 times as many staff on duty during peak times. I have no idea where Goldman Sachs’ estimate came from, and I encourage Waymo to openly publish their employment stats to dispel this myth.
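The same estimate in code, where every input is one of the guesses listed above:

```python
miles_per_ticket = 500        # one assistance request per ~500 miles (guess)
avg_speed_mph = 25            # average speed, including time stopped (guess)
minutes_per_ticket = 12       # mean resolution time, heavy-tailed (guess)

tickets_per_vehicle_hour = avg_speed_mph / miles_per_ticket                             # 0.05
assistant_hours_per_vehicle_hour = tickets_per_vehicle_hour * minutes_per_ticket / 60   # 0.01

breakeven_vehicles_per_operator = 1 / assistant_hours_per_vehicle_hour       # ~100
peak_staffing_factor = 10     # guess: peak load needs ~10x the average staffing
peak_vehicles_per_operator = breakeven_vehicles_per_operator / peak_staffing_factor  # ~10

print(breakeven_vehicles_per_operator, peak_vehicles_per_operator)
```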
At the end of the day, there is a huge difference between a vehicle that is designed to effectively handle emergency situations at high speeds without human involvement (i.e. SAE Level 4+) and one that is not (i.e. Tesla’s “Full Self Driving”). When I say “fully autonomous,” this is what I’m referencing.
There have also been a couple incidents of autos fatally running over nonhuman animals — one by a Cruise in 2022 and the case of “KitKat” in October 2025.
Approximately 5 million cats get run over each year by humans in the USA, and while there are probably ways that autos could be set up to be even safer, the overall picture for nonhuman animals is not significantly different than for humans.
Craft Law Firm incorrectly cites the death of Walter Huang as happening in Texas in 2019 instead of California in 2018. This is probably just a mistake, but I also find their description of the incident to be generally lacking.
Craft Law Firm’s numbers are different because their analysis covers 2019 through June 2024, rather than June 2021 through late 2025.
To get this number I took this report from mid-July that claims 100 million miles and Waymo’s safety report that claims 127 million miles 10 weeks later, at the end of September. That’s 2.7 million miles a week, which is about the rate the first report claimed. (Since it’s growing, we should expect the rate over the next few months to be ~3 million miles a week.) At the time of writing it’s been another 15 weeks, so I estimate ~172 million miles.
Since companies are not required to report their mileage, I’m estimating based on this source that claims Zoox had 2.6 million miles (cumulative) at the end of 2024, and that despite their expansion and commercial launch in Vegas, they haven’t doubled their total miles. I could easily imagine it’s closer to 10 million miles, but I would be very surprised if it was above that.
Given the attempted coverup, I’m a bit offended by how low the fines were. Back when the incident occurred they paid $1.5M to the NHTSA and settled with the victim for ~$10M, but it still seems bad that the coverup itself wasn’t more sharply punished.
I thought that perhaps this was a result of Waymo being sneaky. After all, you don’t really need airbags if nobody is in the vehicle, and a lot of the time Waymos are empty. Similarly, airbags are usually a front-seat thing, and robotaxi passengers may prefer the back. But it turns out that Waymo airbags deploy normally, regardless of who is in the vehicle, and they deliberately used this metric because it gives them an apples-to-apples way of comparing with humans.
The fact that riders preferentially sit in the back may still be helping their safety stats.
My data goes a little further than the Waymo report, which cuts off in September.
Waymos insist that passengers remain buckled, but these riders apparently fooled the car by buckling the belts behind themselves. I hope they learned their lesson.
I’d also appreciate knowing which version of the FSD software was running, since there can be a big difference.
Full disclosure: I drive a Tesla Model Y (and love it), and am definitely a bad driver. 😵
I bet that in the long run we can drop the cost of lidar by another order of magnitude, which will make it obviously worth it. I don’t have anything to base that speculation off besides techno-optimist priors, however, so ¯\_(ツ)_/¯.
Alas, the need to be able to transport a lot of people at rush hour means that Utopia also has a lot of vehicles sitting idle most of the time (especially at night). Having them be autonomous means they can get charged, cleaned, and repaired at night, so there’s less waste that way, but the logic of spikes and lulls is inescapable.

