Among dozens of companies developing lidar technology for autonomous vehicles, Luminar has emerged as an early favorite of major automakers. Over the past year, the Silicon Valley-based company has announced partnerships with the Toyota Research Institute and Volvo Cars. Now Luminar says it is working with Audi’s autonomous-driving subsidiary.
Luminar and Autonomous Intelligent Driving (AID) plan to work together toward fully self-driving deployments currently slated for 2021. Luminar’s high-powered lidar will play a key role in helping Audi’s self-driving systems detect obstacles on the road ahead at a range of 250 meters (273 yards).
“Perception remains a bottleneck for autonomous mobility and we quickly worked to find the most powerful sensors to make the perception task easier,” said Alexandre Haag, AID’s chief technology officer.
Headquartered in Munich, AID is a wholly owned subsidiary of Audi. Initially set up to focus on urban mobility, the subsidiary is now looking at building self-driving systems for a wider array of applications, including highway driving, where a long-distance lidar like Luminar’s can make a substantial contribution.
Earlier this month, Audi said it would invest nearly $16 billion in future-minded technologies, such as electric mobility and autonomous driving, over the next five years. AID is a central part of those plans. Launched in March 2017, AID is developing a system expected to spread across multiple brands within the Volkswagen Group. The company is testing vehicles in and around Munich.
Luminar may be one of the few companies currently capable of producing lidar units at scale. The company opened a 125,000-square-foot manufacturing facility in Orlando, Florida. Besides the Toyota Research Institute, Volvo and Audi, the company says it is working with 13 other OEMs and has contracts valued at $1.5 billion.
Last month during the Los Angeles Auto Show, Luminar founder Austin Russell said the company had reached a key breakthrough, finding ways to use lidar returns to predict the movements of pedestrians, a technique he calls “pose estimation.”
“Seeing to 250 meters, even for dark objects and seeing extremely high resolutions, you can make out what those objects are and now identify the behavior and intention for those objects,” he said. “So it’s those kinds of things that we have solved end to end. For many of those types of requirements, we’re an order of magnitude ahead of the next best.”