In light of Tesla’s foolish decision to rush distraction-causing, low-level automation technology to car consumers, which has resulted in at least one fatality and several accidents, many in the professional generalist commentariat have weighed in on what to do. In a country where more than 35,000 were killed in traffic accidents last year, national conversations about one or two additional highway fatalities, tragic as they are, are completely unwarranted. This is particularly true when the New York Times editorial board is attempting to lead the conversation.
Most automated vehicle developers have taken a cautious testing approach and have been disturbed by Tesla’s recklessness. Tesla’s strategy is fairly simple. Elon Musk’s electric car company was several years behind more advanced automation developers such as Google, Bosch, and Volvo. What Tesla needed more than anything else was real-world driving data to calibrate its automation software and sensor arrays and to let its engineers more rapidly address errors and gaps in the system. Its more cautious competitors were slowly gathering that same kind of data, albeit with better software and sensors.
Tesla looked around and discovered that its early-adopting owner pool would make the perfect guinea pigs. In a gamble lasting less than a year, Tesla managed to collect real-world data on more than 130 million vehicle miles traveled (VMT), nearly a hundred times more than Google’s multi-year public road testing effort. But now at least one person is dead.
Tesla’s mistake is not likely to be repeated, at least on such a large scale. First, most competing developers were privately horrified that Tesla was deploying inferior technology known to pose severe distraction risks to operators. Second, the coming litigation will make any similar high-risk endeavors unpalatable for insurers and corporate counsel. The odds are good that Elon Musk is speaking to an angry Tesla lawyer at this very moment.
As automated vehicle proponents have been saying for years: don’t expect widely deployable technology until it has been demonstrated to be several times safer than existing manually driven vehicles. It is also important to note that even when much safer automated vehicle technology comes to market, people will still die on the roads. Robots will never be perfect, but they can be much better than we humans at certain tasks. The promise of far safer roads, not impossibly and perfectly safe “Vision Zero” roads, should be our focus.
The New York Times editorial board appears to have talked to at least a few people who knew what they were talking about in laying out the potential benefits of automated vehicles. Unfortunately, the New York Times editorial board is still the New York Times editorial board, and generally their only solution to the constant stream of social ills is more top-down control by unnamed politicians and bureaucrats: more laws, more regulations, more enforcement. What these laws provide, what these regulations require, and how these rules are to be enforced are just minor details best left to the experts in Washington.
In perfect Times fashion, after highlighting that leading developer Google had years ago explicitly rejected the low-level automation technology involved in the fatal Tesla crash, the editorial board demands that the National Highway Traffic Safety Administration (NHTSA) “study how automakers can minimize driver distraction.” Fortunately for the Times, NHTSA has been doing so for some time, and advanced human factors studies have been underway for years in private settings—not just within the skunkworks labs of companies like Google and Bosch, but at private research universities such as Stanford and Carnegie Mellon, from which these companies hire many of their engineering wizards. Why not ask them too?
But it gets worse.
In the closing three paragraphs, the Times editorial board makes the following claims:
- NHTSA needs to speed the deployment of vehicle-to-vehicle (V2V) communications, which could have plausibly saved the dead Tesla driver;
- Federal regulators should take to heart the lesson with early deployment of airbags, which killed women and children; and
- NHTSA should be prepared to update its rules rapidly and frequently.
On the first, I have written about why NHTSA’s looming V2V dedicated short-range communications (DSRC) mandate will actually harm automated vehicle development and do little to promote highway safety. The V2V DSRC mandate is itself a major distraction funded by self-interested auto companies that, having succumbed to the sunk-cost fallacy, keep throwing good money after bad rather than give up on an already obsolete technology into which they have sunk more than $1 billion.
On the second, the Times is absolutely right that early-generation airbags posed unacceptable safety risks for children. But they omit a key detail that explains why they appeared on the market as rapidly as they did. In reality, customer demand did not bring about these early airbags; rather, bureaucrats in Washington mandated them against the wishes of market participants.
Since the early days of the mass-produced automobile, car companies repeatedly stressed that the most dangerous part of a car was “the loose nut behind the wheel.” In the 1960s and ’70s, activist Ralph Nader took issue with Big Auto blaming bad drivers for their accidents. Of course, human error was and is a factor in over 90 percent of crashes. His protégé, Joan Claybrook, was later appointed NHTSA administrator by President Carter.
Claybrook actively disputed the efficacy of and resisted driver safety education efforts, arguing in essence that people shouldn’t be responsible for their dangerous cars and that they should expect the auto companies’ airbags to do all the work for them. As hard as it may be to imagine today, Claybrook and Nader advocated airbags as a substitute for, not a complement to, seatbelts. Automakers were understandably furious that something so foolish and deadly had become the safety lodestar for their primary regulator.
But right there with Nader and Claybrook was the New York Times editorial board. In editorial after editorial, the Times editorial board published thousands of words in favor of the Nader-Claybrook forced-airbag policy.
Ultimately, the bureaucrats won and passive safety systems were mandated. Unfortunately, this political airbag obsession came with a cost: early airbags were known to kill children. According to Dr. Leonard Evans, renowned auto safety expert and author of the definitive Traffic Safety and the Driver, airbags increased the fatality risk for belted children by 31 percent. And kill they did, as the Times points out in its latest editorial, with no apology for its role in promoting the government mandate that led to those completely preventable airbag deaths.
Third and finally, in something that really demonstrates the Times’ tenuous grasp of auto safety broadly and automated vehicles specifically, NHTSA itself has already explained why a number of federal motor vehicle safety standards are inappropriate for automated vehicles. The key is not additional and more rapid rulemakings, as the Times suggests, but allowing alternative methods of compliance and crafting performance-based rules rather than prescriptive technical regulations.
Currently, a number of NHTSA’s safety rules impose incredibly detailed design specifications on manufacturers. What is needed is flexibility, not more rules or more frequent rulemakings, which will never be rapid given the realities of the Administrative Procedure Act.
If we wish to see lives saved, an overhaul of the nation’s auto safety regulatory regime is needed—and not in the way the Times will reflexively demand. By demanding new rules aimed at preventing the rarest of events, the Times is functionally calling for more people to die in conventional, manually driven cars while safer automated vehicles are delayed from entering the marketplace and made more costly when they do arrive. This is madness and should be rejected by anyone who values human life over their ideological bubble.
In short, the New York Times editorial board raises a few good points about Tesla’s mistakes, but these are drowned out by their unquestioning devotion to regulatory process for process’s sake and their fundamental inability to admit they and their political allies lack the knowledge to engineer the perfect car—let alone a perfect world—and that their hubris will cause more preventable deaths if we let them try.