Uber Announces Self-Driving Passenger Pilot, Raises New Regulatory Questions

Uber has just announced that it will soon begin piloting its automated vehicle prototypes in Pittsburgh—with passengers. This is not the first automated vehicle passenger pilot, but it is the first passenger pilot involving highway-capable vehicles. Previous pilots have used low-speed, geographically restricted automated shuttles, such as those conducted under the auspices of Europe’s CityMobil2 project.

Automated vehicle developers from Google to Delphi have been engaged in extensive public road testing. Low-level automation technology has been deployed to consumers in recent years, the most advanced consumer technology deployment being Tesla’s controversial Autopilot.

This is an exciting development, at least from a public relations standpoint. Normalizing the technology to the public remains the chief non-technical barrier. However, public policy challenges also loom large. Last month, CEI published my comprehensive report documenting every state-level “following too closely” rule; these rules restrict the testing and deployment of automated “platooning” technology. With this announcement from Uber, another potential state-level regulatory conflict crossed my mind.

To date, most automated vehicle public road testing involves teams of two: a test driver and a monitoring engineer. The monitoring engineer in the vehicle takes notes on performance issues and monitors the system in real time from a laptop. For now, Bloomberg reports that Uber’s passenger pilot will maintain both a test driver and a co-pilot engineer:

For now, Uber’s test cars travel with safety drivers, as common sense and the law dictate. These professionally trained engineers sit with their fingertips on the wheel, ready to take control if the car encounters an unexpected obstacle. A co-pilot, in the front passenger seat, takes notes on a laptop, and everything that happens is recorded by cameras inside and outside the car so that any glitches can be ironed out. Each car is also equipped with a tablet computer in the back seat, designed to tell riders that they’re in an autonomous car and to explain what’s happening. “The goal is to wean us off of having drivers in the car, so we don’t want the public talking to our safety drivers,” Krikorian says.

But what about future testing and passenger deployment operations in which developers may wish to begin reducing their engineer crew sizes? One potential problem is that most states actively prohibit televisions and television-like devices in view of drivers, with only narrow exceptions for safety and mapping devices. The Pennsylvania statute, 75 Pa. Stat. and Cons. Stat. § 4527, can be found here.

If the goal is to “wean [developers] off of having drivers [and engineers] in the car,” under the current rules and absent an operating waiver from the state police, there appear to be two options for real-time system calibration by engineers:

  1. Maintain the two-person crew size; or
  2. Maintain the test driver, but have the note-taking engineer monitor the vehicle in real-time from a remote facility.

A tweak to the rules could allow a single-person engineer crew to theoretically handle all of the real-time tasks: ready to take the wheel and brake in an emergency, while also monitoring the software in real time. Pennsylvania and other states should consider such an amendment to their driver display rules.

I suspect developers would wish to delay any single-crew testing until their software reaches a higher level of precision. But such a move has the potential to greatly increase testing by deploying their engineer pools more efficiently, and thus to yield greater precision as more data is used to calibrate automated systems.

And there’s the rub: a move made too early could create safety hazards and bad PR, as seen with the fatal Tesla Autopilot crash, but it could also allow for greater testing and more rapid improvements in safety. Despite the knee-jerk precautionary impulse from many, the best approach is for policy makers to take the back seat and let developers decide for themselves when their technology can be safely deployed.