Skepticism of Driverless Car Hype Is Warranted But Shouldn’t Sacrifice Innovation and Safety

I came across a post by Ross Marchand of the Taxpayers Protection Alliance from last week, in which he rightfully laments that the “endless loop of driverless car hype continues, running autonomously of objective data and due scrutiny.” 

I, too, am constantly irritated by excessive optimism about the near-term deployment and capability of self-driving cars, most frequently from the technology press. This bad reporting has led to public confusion about automated vehicles and when certain applications may become available to consumers.

But the rest of the piece is an overly pessimistic portrayal of automated vehicle development. It hinges on an overreliance on flawed government data and concludes with a deeply misguided call for far stricter government regulation of automated vehicle testing and deployment.

Marchand begins:

Submitted data and associated reporting irregularities undermine the idea that autonomous technology is ready to take the wheel from human drivers. Continued reliance on voluntary reporting, rather than the state-monitored testing demanded of human drivers, will lead to gargantuan costs to taxpayers via emergency assistance and infrastructure damage—and a high human toll as well.

Before getting to his uncritical acceptance of existing flawed government data collected in California, Marchand makes an appeal to the precautionary principle, a central tenet of the left’s favorite school of risk management.

In short, the precautionary principle requires innovators to prove that their new technology will not harm society, rather than placing the onus on regulators and litigators to demonstrate that an innovation actually causes harm.

This anti-experimentation method of risk regulation is a central guiding light of the modern environmental movement. It has achieved great traction in some parts of the world, leading to the anti-science prohibition of genetically engineered foods in the European Union, for instance.

But contrary to the claims of its supporters, the precautionary principle is an extremely risk-averse, prudent strategy only on its face. In reality, it creates new risks of its own and reinforces a particular type of decision error.

As the late, great political scientist Aaron Wildavsky noted, politicians and regulators are often driven to regulate new technologies out of a fear of regret:

If we invent a new drug, we may regret the side effects knowingly caused for hundreds of people; if we don’t invent it (or market it), we don’t have to regret an inaction that led to the death or morbidity of anonymous thousands.

Take the example of beta blockers, a class of blockbuster drugs for cardiovascular conditions that came to market in the U.S. only after years of regulatory delay. The Food and Drug Administration approved beta blockers roughly seven years after they had become available in Europe, even as it touted the thousands of lives that could be saved by their introduction. It has been estimated that this overly cautious drug approval process led to 45,000 to 70,000 preventable deaths.

The New York Times will write front-page articles linking a handful of specific deaths to a new drug’s side effects. The same coverage will not be given to the anonymous thousands who die due to regulatory delay. And, certainly, the Times will never fawn over regulators who expeditiously approve new lifesaving drugs, particularly if those drugs also produce negative side effects in some patients.

Regulators are well aware of this public relations dynamic and are thus inclined to slow-walk approval to avoid negative press coverage and politician grandstanding. It is in regulators’ self-interest to delay approval of better products, but this runs completely counter to the interests of a healthy, wealthy free society.

Wildavsky emphasized that every decision is made under imperfect knowledge. That’s the definition of risk. And the only way to truly improve overall safety is to let trial-and-error discovery play out in decentralized settings—“searching for safety,” the title and theme of Wildavsky’s book, which concludes:

Search is essential because uncertainty is ever present, because the safe and dangerous are intertwined, and because protecting the parts endangers the whole. Thus, safety remains a condition for which we have to search without knowing ahead of time what we will find. The most dangerous fear is fear of regret, because it restricts this search for safety.

Back to Marchand’s piece. He argues:

To backers and observers of autonomous technology, California’s annual release of “disengagement” data is an important barometer of progress made in driverless vehicle safety and reliability. The Golden State has a reputation for being a regulatory stickler when it comes to autonomous technology. Companies logging driverless miles on public roads are required to submit data logging the instances in which a tester had to manually halt autonomous operations due to either a technological failure (i.e., a software or hardware issue) or unsafe driving on the part of a vehicle. Analysts have largely focused on Waymo (formerly Google) data, since the company now has three years of logged data and has accumulated the most miles on public roads by far. 

Many informed backers and observers of automated driving systems have actively opposed California’s data-collection requirements as “garbage in, garbage out.” Ian Adams of the R Street Institute argued forcefully against any mandatory disengagement reporting. CEI also submitted coalition comments to the California DMV arguing against the same requirements.

Sure, reporting on how many times a test driver disengages the automated system sounds positive on its face. But thinking about what reporting is actually required and the interaction between reported data and testing practices should make anyone of a pro-innovation, pro-market bent highly skeptical of this government demand.

The problem is that the disengagement reports submitted by permitted automated vehicle testing companies in California do not permit apples-to-apples comparisons. We don’t know which specific technologies were tested or on which roadways. One limited insight from these reports is that some testing occurs only within specific geographic areas.

In the case of GM/Cruise, we know they traveled exclusively within the city limits of San Francisco on “complex city streets.” But we don’t know how much road testing was done on large arterial roads versus small residential streets, for instance, or how much testing was done on especially tricky roadway segments and repeated numerous times to work out the kinks—thereby resulting in test data that skew to the most difficult situations.

In any event, none of the California testing currently conducted is statistically representative of normal human driving, which makes meaningful comparisons with previous company test data, other automated vehicle testing companies, or traditional driver-directed vehicles next to impossible.

Another problem that could result from California’s flawed disengagement reporting requirements—although there isn’t evidence this is occurring right now—is that they incentivize testing companies to operate on the least complex, easiest road segments. That would allow them to inflate their test miles while reducing the probability of encountering a situation that requires disengagement. They would look good on paper, but that paper would tell us nothing about the capability and reliability of the technology.
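The statistical point can be sketched with a toy calculation. All numbers below are invented for illustration—they are not real company data—but they show how the same system can post wildly different miles-per-disengagement figures depending purely on route mix:

```python
# Hypothetical illustration: why raw "miles per disengagement" is not
# comparable across testing regimes. Route types and rates are invented.

def miles_per_disengagement(miles_by_route, rate_by_route):
    """Aggregate miles per disengagement over a mix of route types.

    miles_by_route: dict mapping route type -> miles driven
    rate_by_route:  dict mapping route type -> expected disengagements/mile
    """
    total_miles = sum(miles_by_route.values())
    total_disengagements = sum(
        miles_by_route[r] * rate_by_route[r] for r in miles_by_route
    )
    return total_miles / total_disengagements

# Identical technology, identical (hypothetical) per-route failure rates:
rates = {"freeway": 0.001, "urban": 0.05}  # disengagements per mile

company_a = {"freeway": 95_000, "urban": 5_000}   # mostly easy miles
company_b = {"freeway": 5_000, "urban": 95_000}   # mostly hard miles

print(round(miles_per_disengagement(company_a, rates)))  # 290 miles/diseng.
print(round(miles_per_disengagement(company_b, rates)))  # 21 miles/diseng.
```

On paper, the first company looks roughly fourteen times more reliable, yet by construction the underlying technology is identical. Without controlling for route composition, the headline number measures where a company tested, not how well its system drives.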

Unfortunately, Marchand fails to grapple with the problems with California’s flawed “garbage in” disengagement data. He then makes statistically unsound comparisons between year-over-year company data, cross-company data, and national crash statistics, and then calls for the “garbage out” policy response: banning the testing of automated vehicles on public roads in favor of closed track-only testing until the government can develop adequate test procedures.

Marchand’s argument against on-road automated vehicle testing is the precautionary principle in practice—an approach that precludes the necessary search process for actually fostering safety.

Worse, the precautionary principle is especially ill-suited here, given that regulators at the National Highway Traffic Safety Administration currently have no idea what additional test procedures they may incorporate into the future self-certification of self-driving vehicles. Further, modernizing the regulatory process to incorporate Marchand’s preferred safeguards will likely take the better part of a decade once a rulemaking proceeding is initiated.

In the meantime, consumers may well be denied far safer vehicles. Driver error and misbehavior are factors in more than 90 percent of crashes. If the safer self-driving cars of tomorrow are delayed by well-meaning advocates motivated by fear of regret, we may have thousands of additional deaths on America’s roads. But fortunately for advocates of the precautionary principle, if they succeed in delaying the introduction of safer vehicle technologies, those additional deaths won’t make The New York Times’ front page.