ORLANDO, Florida, USA -- For self-driving vehicles, safety remains in the eye of the beholder.
Dozens of companies are rolling out hundreds of autonomous cars and trucks on public roads across the U.S., yet there are no clear standards on how safe these vehicles should be during testing or commercial deployment.
Ten years after Google launched its self-driving car project, the question of "How safe is safe enough?" remains as essential and nebulous as ever.
"There's no single-bullet solution or single standard," said Nat Beuse, head of safety at Uber's Advanced Technologies Group and a former NHTSA official who oversaw automated-vehicle development at the nation's top auto safety regulator. "There are many approaches out there, and I think that's a healthy thing."
Beuse was among those gathered this month here at the sixth annual Automated Vehicles Symposium, a conference that brings together industry, government and academic leaders examining self-driving technology.
During the conference, Uber unveiled its Safety Case Framework, which provided fresh details on how the company has rethought its approach to safety in the wake of a March 2018 fatal crash involving one of its self-driving vehicles. Broadly, Uber says its vehicles should be proficient, fail-safe, continuously improving, resilient and trustworthy.
A key principle of the framework: The company's self-driving vehicles should be "acceptably safe" to operate on public roads.
But defining what may be acceptable in the realm of automotive safety remains an arduous task that cuts across technical, legal, regulatory, cultural and perhaps even medical domains. Company-by-company efforts such as Uber's continue, and experts say industrywide discussions have matured.
Yet codifying safety is nowhere near complete.
Bursting a mileage 'myth'
"What we are most lacking as an industry is an intellectually honest debate and a path forward on the safe development of autonomous vehicles," says Robbie Miller, a former Uber engineer who is now chief safety officer at Pronto, a technology company developing Level 2 driver-assistance systems for semi trucks.
In a series of blog posts this year, Miller shared an analysis that suggested self-driving vehicles are involved in crashes at rates higher than their conventional counterparts and questioned prevailing assumptions about how AVs should be tested.
Among them: the notion that companies need to conduct millions of miles of public-road testing to validate the safety of self-driving vehicles.
Companies such as Waymo tout their mileage totals as a benchmark of progress, but Miller says miles, at best, don't equate to quality testing and, at worst, subject pedestrians and human drivers to risk, which "undermines a real safety conversation."
He's not alone. At the symposium, Trent Victor, Volvo Cars' senior technical leader for crash avoidance, called progress measured by mileage a "myth" and "an infeasible approach."
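A back-of-envelope calculation helps explain why. The sketch below is not from the symposium, and the human fatality rate and confidence level it uses are assumptions, but the arithmetic shows the scale of the problem: demonstrating a fatality rate on par with human drivers through mileage alone would require hundreds of millions of crash-free miles.

```python
# Back-of-envelope sketch of why validating safety through raw mileage is so hard.
# Assumptions (not from the article): a human-driven fatality rate of roughly
# 1.2 deaths per 100 million vehicle miles and a 95 percent confidence target.
# The "rule of three" says observing zero events over N miles bounds the true
# rate below about 3/N at 95 percent confidence.
import math

HUMAN_FATALITY_RATE = 1.2 / 100_000_000  # assumed fatalities per mile


def miles_for_zero_fatality_claim(target_rate: float, confidence: float = 0.95) -> float:
    """Miles that must be driven with zero fatalities to claim, at the given
    confidence, that the true fatality rate is at or below target_rate."""
    return -math.log(1.0 - confidence) / target_rate


if __name__ == "__main__":
    miles = miles_for_zero_fatality_claim(HUMAN_FATALITY_RATE)
    print(f"Roughly {miles / 1e6:.0f} million fatality-free miles needed")
    # Prints about 250 million miles, orders of magnitude beyond what any
    # single test fleet has logged, which is one reason critics call mileage
    # alone an infeasible safety benchmark.
```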
Apples to apples
Rather than logging mileage and disengagements (instances when a human safety driver retakes control), Victor said, the industry needs to find a way to compare specific scenarios in similar geographic and operational conditions to measure the effectiveness of automated technology.
Using an example culled from real-life crash data, he shared a case in which a pedestrian appeared from behind a truck in the roadway. A human driver reacted in 0.6 seconds; later tests with a driver-assist system showed braking began after 0.2 seconds.
The real-life crash and subsequent system tests occurred in the same conditions: good weather on dry roads.
"This is real-life data, and this is what we have to answer, what the level of safety is," he said. "What we have to define is, 'This is the level of safety with this as the reference.' "
Such apples-to-apples comparisons might help society calibrate its appetite for risk in a self-driving age. And they could make the performance of self-driving vehicles clearer to people outside the industry.
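To make the scale of Victor's example concrete, the rough calculation below converts the reaction-time gap into distance. The 0.6- and 0.2-second figures come from his presentation; the travel speed is an assumption chosen only for illustration.

```python
# Rough illustration of what the reaction-time gap in Victor's example means
# in distance traveled before braking begins. The 0.6 s and 0.2 s figures are
# from the article; the speed is an assumption made only for illustration.

SPEED_KMH = 40.0                 # assumed urban travel speed, not from the article
speed_ms = SPEED_KMH / 3.6       # about 11.1 meters per second

human_reaction_s = 0.6           # human driver in the real-world crash
system_reaction_s = 0.2          # driver-assist system in later tests

for label, reaction in (("human", human_reaction_s), ("system", system_reaction_s)):
    print(f"{label}: travels {speed_ms * reaction:.1f} m before braking starts")

# At 40 km/h the system begins braking about 4.4 m sooner, the kind of
# scenario-level comparison Victor argues should replace raw mileage counts.
```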
Yellow-light laws
"Society is expected to show significantly lower acceptance of accidents caused by AVs compared to humans," said Lutz Eckstein, director of the Institute of Automotive Engineering at RWTH Aachen University in Germany. "Ultimately, there's going to be cases that go to court, and a jury will say, 'Would a human driver have avoided this accident? Yes or no?' "
Safety can be more subtle than crashes.
Aurora Innovation co-founder Chris Urmson, in his keynote address at the symposium, said state laws regarding yellow traffic lights pose a challenge for ensuring that self-driving vehicles operate legally.
Most states have permissive yellow-light laws: drivers may legally enter an intersection as long as the light is yellow, regardless of its color when they exit. But in other states, it's illegal to be in the intersection once the light turns red.
"This is one of the things as we try and build safety into these vehicles," he said. "There's the potential for two hazards here. One, that we go when we should have stopped, and the second is that we brake when we should have gone."
Self-driving technology has raised the possibility that traffic deaths and injuries on U.S. roads will someday be dramatically reduced.
In the early years, measuring declines in crashes and deaths may be one way to benchmark the safety progress of automated vehicles. But that approach is complicated: it hinges on widespread adoption of self-driving technology and might take decades to produce meaningful statistics.
"Those are outputs," Beuse said. "As time has gone on, people realize those are components of safety, but not the overriding ones."
And yet, perhaps those measures are coming full circle. In a world in which safety remains an evolving concept, carnage might provide the most specific benchmark available.
"Safety is inherently human-centric, because it's our bones and flesh," Victor said. "It's human. Fatalities have to do with the tolerance levels of our bodies. Understanding that gives us a target level for safety."