Fatal crashes on public roads have caused a radical shift in public attitudes to autonomous vehicles (AVs). It's reached the point where, if public opinion continues to deteriorate, the entire future of AVs might be in jeopardy -- or their progress at least greatly slowed.
It seems strange that the much greater dangers of a human driver at the wheel of a conventional vehicle are widely accepted by the public. Why? And what can we do to encourage public confidence in AVs?
It's an issue I have discussed extensively with the industry -- and relevant authorities -- and it seems a major stumbling block is the lack of widespread, independent safety regulation. The public knows that conventional vehicles are strictly regulated, and that human drivers have to take a test, yet there are no equivalent standards for AVs. The U.S. and the EU, for example, publish voluntary guidelines only.
It's not that the industry is unconcerned about AV safety -- far from it. Ensuring safety is something developers work very hard on. This year, 11 industry leaders (including Audi, BMW, Daimler and Volkswagen) published an in-depth whitepaper on the issue of AV safety.
But we have to accept that commercial organizations might bend their own safety standards in the race to market -- hence self-certification is not an option. Instead, there is an absolute need for regulated safety standards verified by an external authority; and that is where things get sticky.
Sadly, we cannot rely on real-life test drives. As yet, there is no agreement on what constitutes a "permissible failure rate" for AVs but, for the sake of argument, let's borrow from the aircraft industry and accept a rate of one catastrophic failure per billion operating hours.
To test this in real-life driving, we would need to drive an AV for several billion hours and repeat the tests multiple times. Yet in October 2018, Waymo, the world's most advanced automated driving system (ADS) developer, reported that its fleet had reached the remarkable and unrivaled threshold of 10 million self-driven miles on public roads -- whereas thousands of times this figure would be required for statistically valid results. Worse, every software update could render the accumulated road-testing data obsolete.
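The scale of the problem can be made concrete with the standard zero-failure confidence bound: if no catastrophic failures are observed, the failure-free exposure needed to demonstrate a target rate at a given confidence follows from the Poisson model. A minimal sketch, where the average speed and the 95 percent confidence level are illustrative assumptions rather than figures from this article:

```python
import math

# Illustrative assumptions (not figures asserted by the article):
TARGET_RATE = 1e-9          # permissible catastrophic failures per operating hour
CONFIDENCE = 0.95           # desired statistical confidence
AVG_SPEED_MPH = 25          # assumed average AV operating speed
WAYMO_MILES = 10_000_000    # fleet mileage Waymo reported in October 2018

# With zero observed failures over exposure T, the Poisson model requires
#   exp(-rate * T) <= 1 - confidence
# so the failure-free exposure must satisfy
#   T >= -ln(1 - confidence) / rate
required_hours = -math.log(1 - CONFIDENCE) / TARGET_RATE   # ~3.0e9 hours
required_miles = required_hours * AVG_SPEED_MPH

print(f"Failure-free hours needed:  {required_hours:.2e}")
print(f"Equivalent miles at {AVG_SPEED_MPH} mph: {required_miles:.2e}")
print(f"Multiple of the 2018 Waymo mileage: {required_miles / WAYMO_MILES:,.0f}x")
```

Under these assumptions the requirement works out to roughly three billion failure-free hours -- thousands of times the largest real-world fleet mileage reported to date, which is the point the paragraph above makes.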
Should we instead seek an objective assessment of the software itself? Again, it proves extraordinarily difficult. Without a human in the loop, the AV must make a reasonable decision in a practically unbounded variety of situations. Human drivers possess intuition -- for example, that it is acceptable to violate certain traffic rules to avoid hitting a child. It's not so easy to program an AV with that subtle judgment. And while some ADS developers have hugely sophisticated testing procedures, the technical details are largely kept secret for competitive reasons.
So how do we move forwards? I see three key steps:
First, what does "safe enough" mean? The 90 percent reduction in fatalities often cited is unrealistic, and if AVs fail to achieve this, we risk a backlash that could massively hinder the use of a technology able to save thousands, if not millions, of lives. We need a realistic figure to aim for.
Second, while it sounds counter-intuitive, we believe that gradually increasing the level of driver assistance is the wrong approach for manufacturers developing AVs.
Most of the great automation promises are only realized when the driver is taken right out of the loop and, with partial automation, there are paradoxically some serious safety risks, such as overconfidence in the software, lack of driver attention and reduced driving skills. I believe that "direct to full automation" is a preferable development path.
Third, transparent AV safety regulation is essential. Regulators are far from a solution, but we need to continue to work together. (And I have not even touched on the threat of malicious cyber attacks.)
Only if we do this -- the industry and regulators working together to tackle the many obstacles on the road to fully automated driving -- can we achieve an automated transport system that will save a great many lives.