Dave Guilford
Managing Editor

Can autonomous vehicles be developed without risking lives?

Many suppliers and automakers, such as BMW, are developing autonomous systems.

Back on March 18, 49-year-old Elaine Herzberg was walking her bicycle across a street in Tempe, Arizona, when she was struck and killed by an Uber autonomous vehicle.

Her death is the central tragedy of the incident. But it has also sent shock waves through the global community of autonomous-vehicle developers, researchers and regulators, and even consumers.

Those waves rippled through the Canadian Auto Innovation Summit held by the federal government in Detroit four days later.

Listening to the experts, I found it hard to feel entirely comfortable about the push to test autonomous vehicles on public streets.

The crux of the situation, to me, is that autonomous vehicles are being sold as the enablers of a new era of automotive safety. The oft-cited statistic that human error is behind 94 per cent of crashes is trotted out, contrasted with the theoretically flawless performance of vehicles driven by algorithms, lidar and cameras.

The ultimate goal is most clearly stated by the Vision Zero movement out of Sweden. The aspiration is that autonomous vehicles, linked to other vehicles and the infrastructure via two-way data flows, would not crash. Ever. No accidents, no serious injuries, no highway deaths.

I’m not arguing against that goal, but perhaps we need to acknowledge how monumental it really is.

At the Detroit summit, Ziad Kobti, director of the University of Windsor School of Computer Science, said autonomous-driving systems require reliability far beyond what traditionally defines artificial intelligence.

He cited the Turing Test, developed in 1950 by British computer pioneer Alan Turing.

It sets the bar for artificial intelligence at matching the performance of a human being. But, as Kobti put it, autonomous-vehicle developers must create "a god": a system far better than the human drivers who cause the clear majority of the one-million-plus global traffic deaths every year.

Another point: The Uber car was not fully autonomous. It had a human operator who was supposed to take control in a dangerous situation. Video from inside the car appeared to show that the operator's eyes were not on the road prior to the crash.

Nikolas Stewart, autonomous-vehicle program manager at the University of Waterloo, noted that human “safety drivers” get bored when babysitting an autonomous system.

Paradoxically, Stewart said, as systems become more reliable, human drivers are likely to become even less attentive, making timely intervention less likely.

It’s easy to become enamoured of the accident-free future that autonomous cars promise. And I honestly hope the vision materializes. But in the interim, we had better recognize that we’re playing with human lives.

You can reach Dave Guilford at dguilford@crain.com.
