Part of automakers' transition into becoming mobility providers is creating vehicles that can drive themselves. A key partner in this quest is Nvidia, the Silicon Valley-based chipmaker whose graphics processors play a key role in Audi's virtual cockpit. Nvidia has introduced a supercomputer that uses so-called "deep learning" to enable self-driving capabilities. Drive PX 2, which has the computing power of 150 MacBook Pro notebooks, helps vehicles recognize objects around them and decide what actions to take. Volvo will use Drive PX 2 in an autonomous-car pilot program that starts next year in Gothenburg, Sweden. More than 70 companies are working with Nvidia's lunchbox-sized supercomputer. Nvidia Director of Automotive Danny Shapiro discussed the potential applications with Automotive News Europe Managing Editor Douglas A. Bolduc.
Nvidia is known for the work it did on Audi's virtual cockpit. What's next?
We've gone beyond that into driver assistance systems. Our graphics processors are so powerful they can be used for many things beyond graphics. What we're doing now is actually interpreting sensor data. The cameras, lidar and radar generate massive amounts of data as they scan, and our processors can interpret that data and understand a full 360-degree environment. The processors are able to essentially build a 3-D model of everything that's going on around the car in real time, and from that are able to plan a safe path forward for an autonomous vehicle.
This is your supercomputer, right?
Right. This system is the equivalent of 150 MacBook Pros. It will do 24 trillion operations per second.
Nvidia is a big believer in artificial intelligence. Why?
There's really no way to program a system that can handle every possible scenario. It just won't work. Artificial intelligence is essentially the way forward for autonomous cars. It's modeling the human brain inside the car. Then, instead of having to program everything directly, you can train it just like a human learns. Through experience, we can get the system trained to understand its environment and what to do.
Could you give an example?
Audi had been working with a smart camera manufacturer for two years developing particular algorithms for street sign recognition, and had gotten up to about 96 percent accuracy. With our system -- with deep learning and the ability to train -- you don't have to write any specific code. They [Audi] spent four hours training for a test -- basically feeding it mass quantities of street signs and tagging them. In four hours of training they exceeded the level of perception from two years of development of hard-coded algorithms. It's remarkable. You really want to get to 99.99 percent accuracy, so there's still additional work to be done, but a lot of that is training and processing from a deep-learning perspective.
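The "train, don't hand-code" idea Shapiro describes can be sketched in miniature. The toy below is not Nvidia's or Audi's actual pipeline; it stands in tagged sign images with synthetic feature vectors and learns a simple softmax classifier from labeled examples, rather than from hand-written recognition rules. All names and numbers here are illustrative assumptions.

```python
# Toy sketch of learning a classifier from tagged examples instead of
# writing recognition rules by hand. Synthetic data stands in for the
# "mass quantities of street signs" mentioned in the interview.
import numpy as np

rng = np.random.default_rng(0)

# Three hypothetical sign classes, each an 8-dimensional feature cluster.
centers = rng.normal(size=(3, 8))
X = np.vstack([c + 0.3 * rng.normal(size=(200, 8)) for c in centers])
y = np.repeat(np.arange(3), 200)  # the "tags" on each example

W = np.zeros((8, 3))  # weights are learned from data, not coded by hand

def predict(X, W):
    return (X @ W).argmax(axis=1)

# Plain gradient descent on the softmax cross-entropy loss.
for _ in range(300):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0       # gradient of cross-entropy
    W -= 0.01 * (X.T @ p) / len(y)

accuracy = (predict(X, W) == y).mean()
print(f"training accuracy: {accuracy:.2%}")
```

A real system would use a deep convolutional network on camera images, but the principle is the one Shapiro describes: accuracy comes from feeding in labeled examples and letting the weights adjust, not from writing code for each sign.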