Why Tesla Autopilot ought to be awful, until it’s perfect

Joseph Perla
5 min read · Dec 3, 2020

Tesla detractors love to complain about how bad Autopilot is and how far it lags behind Waymo. When using Autopilot, drivers must keep their hands on the steering wheel and stay attentive at all times; the system is finicky and risky. The visualizations on the dash jiggle frequently, indicating unreliable sensors. What the detractors don’t realize is that Tesla ought to stay just this awful, until the day the self-driving algorithm is perfect. Why? Because of something I like to call the L3 Barrier, and the Safety Paradox.

The L3 Barrier refers to the levels of driving automation. The SAE defines six levels, from Level 0 (no automation) to Level 5 (full automation), specifying how much a vehicle can operate on its own without human intervention. L3 driving means the autonomous mode works in some but not all situations, so the car must intermittently hand control back to the driver. Many in the industry consider L3 driving unsafe, because the frequent and unpredictable transitions of control create dangerous complications: if the driver is not fully alert when a handover happens, the result can be an accident.

Humans can be quite lazy, so they quickly start to trust and depend on an autonomous system if it appears to drive safely and effectively. A human only needs to watch a car drive itself for a few hours before they stop watching the road 100% of the time. The perceived risk of the autonomous system defines the L3 Barrier. If the car seems like it is about to crash every mile, people stay fully alert, and the system sits below the L3 Barrier. If the vehicle has not needed an intervention in the last 1,000 miles of a road trip, its perceived safety is higher, and the driver starts diverting attention from the road to search through a bag, check their phone, or comfort a baby in the back seat; the system is above the L3 Barrier.
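
To make the threshold concrete, here is a minimal toy sketch of the idea; every constant in it is an illustrative assumption on my part, not a measured value.

```python
import math

# Toy sketch of the L3 Barrier -- every constant here is an illustrative
# assumption, not a measured value.
L3_BARRIER = 0.8  # hypothetical perceived-safety level above which drivers tune out

def perceived_safety(miles_since_intervention: float) -> float:
    """More miles without a needed intervention -> higher perceived safety (0..1)."""
    return 1.0 - math.exp(-miles_since_intervention / 500.0)

for miles in (1, 10, 100, 1_000, 10_000):
    p = perceived_safety(miles)
    state = "above the barrier: attention wanes" if p > L3_BARRIER else "below the barrier: driver stays alert"
    print(f"{miles:>6} miles without an intervention -> perceived safety {p:.2f} ({state})")
```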

Tesla needs its perceived safety to stay below the L3 Barrier. Exceeding the L3 Barrier, at Tesla’s scale, would cause accidents and even deaths. Of course, there’s always the odd YouTube video of someone watching TV or falling asleep behind the wheel. Those are a handful of risk takers purposely pushing the system further than intended, unconscionably risking not just themselves but others. Given the number of miles Tesla has already driven under proper driver supervision, and the scant number of associated Autopilot accidents, it has shown that it has a V1 of the software that sits below the L3 Barrier: safe enough while supervised, and worthwhile enough to buy and drive.

How does Tesla keep improving the algorithm while still staying below the L3 Barrier? It simply needs to alternate regularly between V1 of the software and the latest version, the one carrying the true latest FSD safety level. The V1 algorithm combined with human supervision has been proven safe after billions of miles of testing. The latest version is even safer on its own. With the two alternating, the driver still has to intervene frequently, never fully trusts the car, and stays below the L3 Barrier. It is essential for the supervising driver to keep practicing taking over.
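
As a sketch of what that alternation could look like (my own illustration, not Tesla’s actual release process), the car could simply pick which build drives each trip:

```python
import random

# Sketch of the alternation idea -- my illustration, not Tesla's actual release
# process. Each trip gets either the proven V1 behavior or the newest build,
# so the driver keeps seeing interventions and keeps practicing takeovers.
# V1_SHARE is an assumed mix, not a real parameter.
V1_SHARE = 0.5

def build_for_trip(rng: random.Random) -> str:
    """Pick which software build drives the next trip."""
    return "v1" if rng.random() < V1_SHARE else "latest"

rng = random.Random(0)
print([build_for_trip(rng) for _ in range(10)])  # roughly half the trips land on each build
```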

Is Tesla doing this? Perhaps not. Maybe its software versions are improving and starting to cross the L3 Barrier in perceived quality. If so, that presents a grave risk, and Tesla should take measures to bring perceived safety back down (as described above) to keep its drivers alert and to ensure they keep practicing taking back driving control.

This is the Safety Paradox. Tesla cannot run its safest, newest version of the algorithm for every driving mile, because that would, paradoxically, cause more accidents: the safest, newest software would sit above the L3 Barrier. Above the L3 Barrier, the driver’s attention wanes, mistakes slip past supervision, and the system ends up causing more accidents than human drivers alone. Tesla must keep a good proportion of the miles driven (maybe half) running the V1 algorithm in order to stay below the L3 Barrier.
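
A back-of-the-envelope blend shows why the mix still feels untrustworthy; the intervention rates below are assumptions I picked for illustration, not published figures.

```python
# Back-of-the-envelope blend -- the intervention rates are assumptions,
# not published figures.
v1_miles_per_intervention = 100         # assumed: V1 needs a takeover every ~100 miles
latest_miles_per_intervention = 10_000  # assumed: the newest build, every ~10,000 miles
v1_share = 0.5                          # assumed: half of all miles driven on V1

blended_rate = v1_share / v1_miles_per_intervention + (1 - v1_share) / latest_miles_per_intervention
print(f"one intervention roughly every {1 / blended_rate:.0f} miles")  # ~198 miles
```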

Tesla’s self-driving software runs whether or not the driver activates it, and whether or not the driver paid for the FSD package. That means it is collecting data and can protect you even when you are driving yourself, as normal (without Autopilot). Tesla has already launched many active safety features for normal driving. To watch Tesla perfect its autonomous driving capability, one simply needs to measure serious Tesla accidents during normal driving without Autopilot, but with active safety features enabled. When that number approaches zero, Tesla’s self-driving software has learned to correct for human driving, like a virtual bumper car cocooning and protecting the driver; even a bad human driver won’t be able to get the car into an accident. According to Tesla’s latest safety report, its active safety features make driving only 50% safer than driving without them, nowhere near the 10,000% safer they need to reach. Tesla has a long way to go.
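
To put those two percentages side by side (reading “X% safer” as X% more miles driven per accident, which is my assumption about the metric):

```python
# How far "50% safer" is from "10,000% safer", reading "X% safer" as X% more
# miles driven per accident (my assumption about the metric).
today = 1 + 50 / 100        # 1.5x the miles per accident of unassisted driving
target = 1 + 10_000 / 100   # 101x the miles per accident
print(f"today: {today:.1f}x, target: {target:.0f}x, gap remaining: {target / today:.1f}x")  # ~67x to go
```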

Tesla keeps increasing the price of FSD as it adds more features. Raising the price costs Tesla little, and in turn lets it put more hardware under that price. That price could easily support affordable LiDAR systems. (LiDAR, short for light detection and ranging, is a remote sensing technology that measures distances to surrounding objects by firing pulsed laser light and timing the reflections, producing 3D data about the environment around the sensor.) The newest iPhone has LiDAR integration, and many companies now offer low-cost LiDAR systems. These new LiDAR systems will continue to improve the True Unsupervised Safety Level of the best algorithm. However, the perceived day-to-day quality of the Tesla system will still remain terrible, and below the L3 Barrier, as it ought to in order to save lives.
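
For a sense of how the ranging itself works, the core of a LiDAR measurement is simple time-of-flight arithmetic (the echo time below is just an example):

```python
# Time-of-flight ranging: a LiDAR unit times how long a laser pulse takes to
# bounce off an object and return, then converts the round trip into distance.
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def distance_meters(round_trip_seconds: float) -> float:
    """Halve the round-trip light travel distance to get the range to the target."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2

print(f"{distance_meters(333e-9):.1f} m")  # a ~333 ns echo corresponds to roughly 50 m
```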
