The internet erupted last week following the publication of a full-page ad in the New York Times asserting “Don’t be a Tesla Crash Test Dummy” and impugning “Elon Musk’s ill-advised Full Self-Driving robot car experiment.” Full Self-Driving (FSD) is a paid beta upgrade to Tesla’s advanced driver-assistance system, Autopilot, and is limited to select drivers. Dozens of media outlets covered the ad, its sponsor appeared on MSN and Fox, and Twitter warriors engaged.
The ad was sponsored by The Dawn Project, whose mission is “Making Computers Safe for Humanity. We Demand Software that Never Fails and Can’t Be Hacked.” The ad and the organization are the brainchild of Dan O’Dowd, CEO of Green Hills Software, who is concerned about automation safety and cybersecurity.
Musk tweeted, “Green Hills software is a pile of trash,” to which O’Dowd responded, “When @elonmusk is wrong he always resorts to insults, remember the pedo guy? FSD is the worst trash software ever shipped by a respectable company. Green Hills Software is the operating system for B1-B nuclear bombers, F-35 fighter jets and Boeing 787s.” Tesla did not return a request for comment.
FSD: Unsafe at any speed?
The ad links to a “Fact Check” page that includes videos of automated Tesla cars driving into pedestrians and a Bloomberg story on the recall of some 12,000 self-driving Teslas for a braking problem. It then describes the Dawn Project’s analysis of the FSD software, suggesting that the cars make 1,000 times more critical driving errors than humans, with collisions as frequent as one every 36 minutes of driving. The California DMV reports a critical self-driving error every 8 minutes. Tesladeaths.com counts 231 deaths in Teslas since 2013, 10 of them in crashes where Tesla Autopilot was at fault.
Tesla offers safety figures per mile driven, not per hour of driving. Tesla’s fourth-quarter Vehicle Safety Report records one crash for every 4.31 million miles driven with Autopilot engaged. For its drivers not using Autopilot, the figure drops by roughly two-thirds, to one crash for every 1.59 million miles driven. Tesla compares its performance with the National Highway Traffic Safety Administration (NHTSA) figure of one crash every 484,000 miles, an implicit claim that Tesla safety is better. However, Tesla’s figures are likely weighted toward highway driving in good conditions, whereas NHTSA data cover driving in all conditions. Tesla does not publish statistics for its total number of accidents, the time between accidents, or FSD specifically.
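For readers who want to put those per-mile figures on a common footing, here is a minimal sketch that converts each miles-per-crash number into crashes per driver-year. The miles-per-crash values are the ones cited above; the 13,500 annual-miles figure is an illustrative assumption, not a number from Tesla or NHTSA, and the conversion does nothing to correct for the different driving conditions behind each dataset.

```python
# Illustrative conversion of the crash-rate figures quoted in this article.
# Miles-per-crash values are as cited above; ASSUMED_ANNUAL_MILES is an
# assumption used only to express the rates as crashes per driver-year.

MILES_PER_CRASH = {
    "Tesla, Autopilot engaged": 4_310_000,
    "Tesla, no Autopilot": 1_590_000,
    "NHTSA, all U.S. driving": 484_000,
}

ASSUMED_ANNUAL_MILES = 13_500  # assumed typical annual mileage per driver

for label, miles in MILES_PER_CRASH.items():
    crashes_per_year = ASSUMED_ANNUAL_MILES / miles
    print(f"{label}: ~{crashes_per_year:.4f} crashes per driver-year")
```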
Philip Karle of the Institute of Automotive Technology at the Technical University of Munich explains that the International Organization for Standardization (ISO) offers a set of safety standards for the development of software for self-driving cars, including the principle of Safety Of The Intended Functionality (SOTIF): the absence of unreasonable risk due to hazards resulting from functional insufficiencies of the intended functionality or from reasonably foreseeable misuse by persons. Tesla Autopilot corresponds to Level 2 of the SAE Levels of Driving Automation™.
In a January 2021 earnings call, Musk declared that Tesla would achieve SAE Level 5 (full autonomy) by the end of the year, though Tesla later walked back the statement. Consumer Reports observes that carmakers create confusion by using different names for common self-driving features. For example, while Tesla’s term “Full Self-Driving” suggests the full autonomy of SAE Level 5, the system’s technical capabilities fall short.
Market Manipulation
O’Dowd, himself a billionaire and the owner of three Teslas, suggests that Musk underplays the safety risk to speed Tesla to market and pressure risk-averse competitors. Richard Windsor, PhD, a leading financial analyst of the tech industry, agrees. Windsor downgraded his view of Tesla because of Musk’s insistence on relying on cameras and his belief that computing power will solve the safety problem, a “brute force approach to AI.” The rest of the industry accepts that cameras must be complemented by other sensors and by greater roadway instrumentation. Windsor also dismisses Tesla’s safety data because, unlike its competitors’, it is not collected in California.
Public Policy Implications
The subtext of the Dawn Project ad is that pedestrians have been enrolled in a dangerous self-driving car experiment without their permission. Ironically, Tesla drivers can opt out of online personal data collection, but pedestrians can’t opt out of the world of Tesla or its camera surveillance.
O’Dowd is concerned with more than the driving safety of a single Tesla; his worry is the cyber safety of the entire fleet. SAE levels refer to automation, not cybersecurity. In practical terms, self-driving cars, like other connected devices, can be hacked. O’Dowd founded the Dawn Project to advocate for better cybersecurity in any life-threatening product or service connected to the internet. That Tesla’s Bugcrowd page offers rewards of up to $15,000 for reported software vulnerabilities suggests that cybersecurity is on Tesla’s mind too.
Externalities, or unintended consequences, are a key concept in public policy. We have safety regulation for food, pharmaceuticals, aviation, finance, energy, communications, and other domains. Policies for self-driving cars are still emerging, and cyber risk is not yet fully understood. O’Dowd is concerned about that gap, invoking the notion of military-grade security with zero failures and no possibility of being hacked. He thinks too many people put Tesla in the category of fallible consumer electronics when it should be judged in the un-hackable category. A laptop that crashes is an annoyance; a car that crashes can be a disaster.