Waymo Gambles Lives with Unsupervised Self-Driving Cars

When I worked at Facebook, we had a saying: “this journey is only 1% finished.” It was meant somewhat literally: when we had “only” 100 million users, we were just 1% of the way to reaching everyone who will be on Earth in 100 years (about 10 billion people). It reminded us that even with all our success, we were only starting the journey, with literally billions of people left to reach.

Waymo is not far ahead of everyone, nor is it almost done solving driverless cars. Waymo is, in fact, less than 1% finished.


Waymo recently released its latest safety report. It has trained and verified its autonomous system on only 6 million miles. I praise them for being more transparent than anyone else in reporting this. Unfortunately, that transparency makes clear that they cannot know whether their system is as safe as humans. Humans are surprisingly safe drivers, with about 1 fatality per 100 million miles in the US, so you need billions of real-world driving miles to prove that you have a lower fatality rate (and likely at least that many miles to capture the edge cases needed to learn to drive that safely in the first place). They can run all the simulations they want, but the brutal truth is that we don’t know how well those simulations reflect the real world until it is actually tried. When they drive 1 billion miles, then we will all know the true safety level, all things considered. With this few million miles, the most the data can establish is that the system is no more than roughly 1000X less safe than humans. Risking lives.
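The back-of-the-envelope statistics can be made concrete. The sketch below assumes fatalities follow a Poisson process and uses the 1-per-100-million-mile human baseline from above; the rule-of-three bound and the precision estimate are standard textbook approximations, not Waymo’s actual methodology:

```python
import math

HUMAN_RATE = 1 / 100_000_000  # US human fatality rate per mile (figure from the text)

def miles_for_zero_event_bound(target_rate, confidence=0.95):
    """Fatality-free miles needed before a Poisson model can bound the
    true fatality rate at target_rate (the classic rule of three)."""
    return math.log(1 / (1 - confidence)) / target_rate

def miles_for_precision(true_rate, rel_error=0.20):
    """Miles needed to estimate a Poisson rate to within rel_error:
    relative standard error is 1/sqrt(events), so events = 1/rel_error**2."""
    return (1 / rel_error ** 2) / true_rate

# Weakest possible claim, "no worse than a human", after zero fatalities:
print(f"{miles_for_zero_event_bound(HUMAN_RATE):,.0f} miles")  # ~300 million

# Stronger claim, measuring a rate half the human rate to within 20%:
print(f"{miles_for_precision(HUMAN_RATE / 2):,.0f} miles")     # 5 billion
```

Even the weakest statistical claim needs hundreds of millions of fatality-free miles; actually demonstrating better-than-human performance pushes the requirement into the billions of miles.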

And Waymo knows this. I know they know it, because Waymo is filled with brilliant people who have done the statistical calculations and reached the same conclusion. We ran the same numbers when I was at Lyft Level 5. That is why Waymo is not truly rolling the service out publicly, and why its unsupervised mileage is only 60,000 miles. Waymo ran the calculations: keep the unsupervised miles down in the tens of thousands and, even if the system were far less safe than a human, the chance of any serious accident stays low. Most likely there will be no accident and no public relations mess, as long as they keep the mileage low. That is also why they cannot scale the program up: the riskiness would become obvious, because we would see many, many more serious accidents if they actually drove 1 billion unsupervised autonomous miles. Waymo is not ahead of everyone; it is not even 1% there. Waymo is risking lives today for the PR value of saying it has 60,000 unsupervised miles.
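The keep-the-miles-low logic is simple Poisson arithmetic. A minimal sketch, assuming the 1-per-100-million-mile human baseline from earlier; the safety multipliers below are illustrative, not measured Waymo figures:

```python
import math

HUMAN_RATE = 1 / 100_000_000   # US human fatality rate per mile (figure from the text)
UNSUPERVISED_MILES = 60_000    # Waymo's unsupervised mileage, from the text

def p_at_least_one(miles, rate):
    """Poisson probability of one or more fatalities over `miles`."""
    return 1 - math.exp(-miles * rate)

# Even a system far less safe than a human would probably show
# nothing over so few miles (multipliers are hypothetical):
for k in (1, 10, 100):
    p = p_at_least_one(UNSUPERVISED_MILES, k * HUMAN_RATE)
    print(f"{k:>3}X less safe than human: {p:.2%} chance of a fatality")
```

Sixty thousand miles is simply too few to surface a fatality even from a system an order of magnitude or two worse than a human driver, which is exactly why the mileage stays low.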

But can’t you test with fewer miles? If you see fewer minor accidents, doesn’t that tell you about the frequency of major accidents and fatalities? Not at all. It is a very strong assumption that the distribution of accident severities is the same for autonomous vehicles as for humans (would you risk your life on an assumption?). There is no evidence for it, and in fact we already know it is false. For example, autonomous vehicles have far more frequent minor rear-end fender benders than humans, because they brake more aggressively to avoid at-fault collisions (incidentally, this is a trick Waymo uses to claim it doesn’t cause accidents: blaming its overly aggressive braking on the humans behind it).

The distribution of accidents is, and will always be, very different for autonomous vehicles, because the cars have totally different failure modes. Humans get into accidents through distraction, sleepiness, and alcohol, often at night; autonomous vehicles never have those problems (LIDAR works as well at night as in daylight, or better). Instead they have outdated maps, sensor failures, actuator failures, software bugs, prediction failures, and never-before-seen objects. These failures can cause them, in broad daylight, to drive head-on at full speed into odd objects, short children, easily avoidable pedestrians and jaywalkers, trucks, and walls: accidents that humans almost never have.
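A toy model makes the distribution argument concrete. Every number below is invented purely for illustration (not real human or Waymo data): two drivers can have wildly different fatality rates even when their minor-accident counts point the other way:

```python
# Hypothetical accident counts per 100 million miles; all numbers are
# invented to illustrate the point, not real human or Waymo statistics.
human = {"fender_bender": 400_000, "injury": 80_000, "fatal": 1}
av    = {"fender_bender": 800_000, "injury": 20_000, "fatal": 10}

# The AV looks "worse" on minor accidents and "better" on injuries,
# yet its fatal rate is 10X the human's -- the minor-accident counts
# reveal nothing about the tail of the severity distribution.
print(av["fender_bender"] / human["fender_bender"])  # 2.0
print(av["injury"] / human["injury"])                # 0.25
print(av["fatal"] / human["fatal"])                  # 10.0
```

In this toy model, counting fender benders would lead you to exactly the wrong conclusion about which driver is more likely to kill someone.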

Restricting the service to easier routes or lower maximum speeds doesn’t actually make the problem easier. If you restrict autonomous driving to safer routes, you shouldn’t compare your safety to humans overall; you should compare it to humans on those same routes at those same speeds. That means you need to beat an even higher bar of human safety, maybe 1 fatality in 10 billion miles or more. Not only is that a much higher bar to reach, it isn’t even economically (or physically) possible to drive the roughly one trillion miles needed to prove you are as safe as a human on those routes.
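The bar scales linearly in the verification arithmetic. A quick sketch, assuming a Poisson model and the 1-in-10-billion-mile easy-route human baseline suggested above:

```python
import math

EASY_ROUTE_RATE = 1 / 10_000_000_000  # human fatality rate on easy routes (text's figure)

# Rule of three: fatality-free miles needed to bound the rate at this level.
print(f"{math.log(20) / EASY_ROUTE_RATE:,.0f} miles")   # ~30 billion

# Estimating the rate to within 10% needs ~100 observed fatalities
# (relative standard error 1/sqrt(events)), i.e. about a trillion miles:
print(f"{(1 / 0.1 ** 2) / EASY_ROUTE_RATE:,.0f} miles")
```

Raising the human baseline 100-fold raises the required mileage 100-fold, which is how the trillion-mile figure arises.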

Moreover, Waymo cannot get the requisite number of miles either to prove safety or to collect the long-tail edge cases needed to actually become safer than humans. Paying trained safety drivers is too expensive, and the demand grows: the safer the system gets, the more miles it takes to encounter each new edge case. Driving the needed billions of miles would cost hundreds of billions of dollars or more, and there is nowhere near enough paid-ride demand in remote Arizona to come close. The unsupervised rides save Waymo a bit of money but risk lives. At what cost to the population, to us?


As a comparison, Tesla gathers test data cheaply. Tesla’s customers drove at least 10 billion miles this year, worldwide and in every condition, and they paid Tesla for the privilege of serving as (unpaid, non-professional) supervising safety drivers.
