In the near future I could be writing this column from the driver’s seat of my car while the vehicle steers itself along the highway. This highlights just one of the possibilities that self-driving and increasingly connected cars open up to people like me who are fascinated by any new technology.
Audi’s sales and marketing chief, Luca de Meo, one of the brightest auto executives I have ever met, perfectly explained what’s coming next when he said that moving from cars to connected cars is like the transition from the iPod to the iPhone. “The iPod was for storing music and pictures but when Apple added voice and data communication to turn it into the iPhone a whole new world opened up,” he said.
I’ve never been a fan of either the iPod or the iPhone, but smartphones changed my life, both professionally and personally. So has the iPad. I love being able to read my preferred newspapers in bed on my iPad before getting up for work.
So I cannot wait for fully autonomous cars to become available, so that I can work in my car instead of writing this column on a high-speed train to Ferrari’s headquarters near Modena, 300km from my home in Turin, with the person in the next seat peering at what I am writing.
However, there are tough challenges to overcome before self-driving cars become commonplace on our roads. Autonomous vehicles face a raft of regulatory hurdles, and insurers and automakers worry about liability. Who will be responsible when an autonomous car is involved in a crash: the human who was not driving, or the manufacturer of the technology that was in control?
Another difficulty will be ethical: deciding what a self-driving car should do when an imminent crash cannot be avoided. For example, if a cat runs out in front of the car, should the vehicle swerve and risk a different collision, say with a pole by the roadside, or should it run over the cat to avoid a worse accident? As a devoted cat owner, I would hit the pole, but the quandary highlights how self-driving cars will also need some kind of artificial intelligence to judge their actions in every circumstance. Putting artificial intelligence into a car frightens me, probably because I saw Stanley Kubrick’s "2001: A Space Odyssey" as a kid. In the 1968 film, the ship’s sentient computer, HAL 9000, decides to kill the crew when faced with conflicting mission priorities. The film was fiction, and I loved it.
Now computers are being given ever more control over cars, and a computer would be in charge of any autonomous car I sat and worked in. What checks and fail-safe mechanisms will we need to keep a self-driving car’s computer a helper rather than a destroyer? In a worst-case scenario, an autonomous car could hit both the cat and the pole.