Last week, tragedy struck in Tempe, Arizona, where Elaine Herzberg was hit and killed by an Uber automated test vehicle while crossing a darkened roadway. Clearly, something went wrong with Uber’s automated driving system.
Early reports suggest that the vehicle did not apply the brakes prior to the collision. The test vehicle was equipped with a roof-mounted Velodyne LiDAR array that should have detected and classified Herzberg as a pedestrian, allowing the vehicle’s software to respond safely. But it appears the system did not properly detect and classify her as she crossed. Or perhaps it did, but failed to accurately predict her movements and respond accordingly.
However, it is not clear that a human driver could have made this a survivable event. The Uber test vehicle, manned by a safety driver who may have been distracted, was traveling several miles per hour below the posted speed limit, and dashboard video shows Herzberg crossing out of the dark into the vehicle’s path just before the fatal collision.
State and federal investigators are on the ground, and it will take some time before their findings on exactly what went wrong are made public. The roadway on which the crash occurred also appears hostile to pedestrian traffic, implicating street design as a contributing crash factor. But beyond that, we still know very little. Yet this lack of knowledge has not kept proponents of the precautionary principle from calling for all sorts of government regulatory interventions—from a requirement that test vehicles certify to nonexistent standards to a nationwide prohibition on any public road testing in the near future.
Ms. Herzberg’s death was a tragedy, but it should not distract us from the lifesaving possibilities of automated vehicle technology. Some 35,000 to 40,000 Americans can be expected to die at the hands of human drivers every year, including around 6,000 pedestrians. And according to federal estimates, human error or misbehavior is a critical factor in around 19 out of 20 crashes. Advocates of new regulations on automated vehicles—regulations that, given the government’s dearth of technical understanding, would be written largely blind—often fail to acknowledge this fundamental risk-risk tradeoff.
Uber and government investigators should be as transparent as possible in conducting their inquiries and reporting their findings. Whether Uber is held at fault or not, the company should commit to identifying and rectifying any errors in its automated driving system—and gaps in its test-driver protocols—that may have contributed to this crash. If Uber finds that its technology and practices cannot achieve a level of on-road safety at least equivalent to that of traditional human-driven cars, it should focus on working out these problems in closed-track testing and virtual traffic simulations before resuming public road testing.
Dramatic responses to tragic events rarely yield sound public policy. Politicians and regulators need to understand that rashly enacted, poorly informed regulation aimed at mitigating one risk could amplify other risks that already claim many more victims every year.
Instead of overreacting to tragedy, as is their tendency, politicians and regulators should continue their largely hands-off approach to automated vehicle regulation. Road testing is the best way to continue improving the performance of automated driving systems, as closed-track testing and virtual simulations cannot substitute for the complex messiness of real-life traffic. Throughout that process, engineers will have the time and information to write technical standards to better inform any future regulatory changes. But cutting off or greatly curtailing automated vehicle road testing would only forestall the radical safety improvements that can help end much of the death and destruction that occur on America’s roadways due to driver error.