Computer Vision and AI in the Self Driving Car

15 June 2018 by NIO


NIO is building the technology to make consumer adoption of Level 4 self-driving cars a reality. We welcome all engineers and problem solvers who are passionate about building an autonomous future, which is why we are proud sponsors of the 2018 Conference on Computer Vision and Pattern Recognition (CVPR), hosted in Salt Lake City, Utah on June 18–22. We’re looking for people with expertise in Perception, Artificial Intelligence (AI), and Controls to join our teams and help us achieve our vision. Want to learn more?

At the crux of Level 4 technology are the Perception, Artificial Intelligence, and Controls blocks that serve as the eyes, brain, and limbs of the system. To construct and validate these blocks, it is extremely important to develop technology that creates high-fidelity, real-world simulations with all the stochastic variations normally encountered while driving. Our Autonomous Driving and Artificial Intelligence teams work in all these areas with the goal of delivering an autonomous future to our users.

The Perception team is responsible for the eyes of the vehicle: computer vision, LiDAR, RADAR, ultrasonic sensors (USS), sensor fusion, localization (Video 1), and world-model representation. Our experts in sensor technologies and functional safety handle both the research and development of algorithms and the build-out and deployment of the successful ones into production-quality software that lets cars perceive the environment around them.


Video 1: Improved Localization using Visual Features and Maps
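Our production fusion stack is not something we can share here, but the core idea of combining complementary sensors can be illustrated with a textbook building block: the linear Kalman filter. The sketch below is a minimal illustration, not our actual code; the sensor names, noise levels, and motion model are all assumptions chosen for clarity. It fuses a coarse GPS-like position fix with a sharper map-matched visual fix into a single smoothed estimate.

```python
import numpy as np

# Minimal linear Kalman filter fusing two noisy position sources
# (here: a coarse GPS-like fix and a sharper map-matched visual fix)
# into one 2D position/velocity estimate. All noise values are
# illustrative assumptions, not calibrated sensor models.

dt = 0.1  # time step in seconds

# State vector: [x, y, vx, vy] with a constant-velocity motion model.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],   # both sensors observe position only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01          # process noise (tuning parameter)

x = np.zeros(4)               # state estimate
P = np.eye(4)                 # state covariance

def predict():
    """Propagate the state forward one time step."""
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q

def update(z, R):
    """Fold one position measurement z (covariance R) into the estimate."""
    global x, P
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P

# Simulated straight drive at 10 m/s: fuse both sensors every step.
R_gps = np.eye(2) * 4.0       # ~2 m standard deviation
R_visual = np.eye(2) * 0.25   # ~0.5 m standard deviation
rng = np.random.default_rng(0)
for step in range(100):
    truth = np.array([step * dt * 10.0, 0.0])
    predict()
    update(truth + rng.normal(0.0, 2.0, 2), R_gps)
    update(truth + rng.normal(0.0, 0.5, 2), R_visual)

print("fused position estimate:", x[:2])
```

In a real vehicle the state, motion model, and sensor set are far richer, but the predict/update loop is the same skeleton.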

Much like the eyes transmit signals to the brain, our Artificial Intelligence team works closely with the Perception, Controls, and Simulations teams to develop the decision-making software for self-driving cars. Our team of AI and machine learning scientists and software engineers works on problems such as decision making under uncertainty, behavior prediction for traffic participants, reinforcement-learning-based traffic agents, detection of rare events, and domain adaptation for synthetic scenes created using procedural content (Figure 1).

Figure 1: Scene creation using procedural content
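To give a flavor of what procedural content means in practice, here is a toy sketch; every field name, range, and distribution weight below is an illustrative assumption, not our actual scene schema. The key property is that each scene parameter is drawn from a seeded distribution, so sweeping the seed yields endless reproducible variations of a traffic scene.

```python
import random
from dataclasses import dataclass, field

# Toy procedural scene generator: every scene parameter is drawn from a
# distribution, so one seed yields one reproducible synthetic scene.
# The field names, ranges, and weights are illustrative assumptions,
# not an actual scene schema.

@dataclass
class Agent:
    kind: str          # "car", "truck", or "pedestrian"
    lane: int
    position_m: float  # longitudinal position along the road
    speed_mps: float

@dataclass
class Scene:
    num_lanes: int
    weather: str
    agents: list = field(default_factory=list)

def generate_scene(seed: int) -> Scene:
    rng = random.Random(seed)  # seeded for reproducibility
    scene = Scene(
        num_lanes=rng.randint(2, 4),
        weather=rng.choice(["clear", "rain", "fog", "night"]),
    )
    for _ in range(rng.randint(3, 12)):
        scene.agents.append(Agent(
            kind=rng.choices(["car", "truck", "pedestrian"],
                             weights=[0.7, 0.2, 0.1])[0],
            lane=rng.randrange(scene.num_lanes),
            position_m=rng.uniform(0.0, 200.0),
            speed_mps=rng.uniform(0.0, 30.0),
        ))
    return scene

# Sweeping the seed produces endless stochastic variations of the same
# scene family, which is how rare configurations can be mass-produced.
for seed in range(3):
    s = generate_scene(seed)
    print(seed, s.weather, s.num_lanes, "lanes,", len(s.agents), "agents")
```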

Our Controls team develops the algorithms that consume information from the Perception and Artificial Intelligence stacks. Motion planning and controls are critical for creating feasible trajectories and maintaining robust actuation of the vehicle, with the ultimate aim of a safe, reliable, and comfortable driving experience (Video 2).


Video 2: Demonstration of ego car (black) changing lanes
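The lane change in Video 2 can be grounded with a classic planning primitive. A common textbook choice for the lateral profile of a lane change is a quintic, minimum-jerk polynomial: position, velocity, and acceleration are all zero at both ends of the maneuver, which is exactly the smooth, comfortable property described above. The numbers below (lane width, duration) are illustrative assumptions, not tuned values.

```python
import numpy as np

# A common textbook lateral profile for a lane change is a quintic,
# minimum-jerk polynomial: position, velocity, and acceleration are all
# zero at both ends, so the maneuver starts and ends smoothly. Lane
# width and duration below are illustrative assumptions.

def lane_change_offset(t, duration, lane_width):
    """Lateral offset at time t for a minimum-jerk lane change.

    Closed-form quintic satisfying d(0) = d'(0) = d''(0) = 0 and
    d(T) = lane_width, d'(T) = d''(T) = 0.
    """
    s = np.clip(t / duration, 0.0, 1.0)  # normalized time in [0, 1]
    return lane_width * (10 * s**3 - 15 * s**4 + 6 * s**5)

T = 4.0  # seconds to complete the maneuver (assumption)
w = 3.5  # meters, a typical highway lane width
for t in np.linspace(0.0, T, 9):
    print(f"t = {t:4.1f} s  lateral offset = {lane_change_offset(t, T, w):5.2f} m")
```

A production planner layers dynamic feasibility checks, collision checks against predicted traffic, and longitudinal planning on top of primitives like this.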

At the heart of any advanced autonomy system lies hardware capable of adapting to the changing needs of vision and AI workloads. The seamless interface our team is creating allows powerful computer vision and AI algorithms to run on scalable, compact, and power-efficient hardware. Where many companies are just getting started, NIO already achieves equivalent autonomy performance on a single compact, power-efficient compute board.

An autonomous future is coming in two parts: the drive itself and the experience inside the vehicle. What we call the digital cockpit will deliver this new experience through a Human-Machine Interface (HMI) built from cameras, displays, a powerful embedded processor, and software. Just as the first camera phone transformed what a phone could be, the full vehicle experience is poised for similar disruption.

The opportunity to apply computer vision and AI in the vehicle is limitless. We encourage you to stop by Booth #647 in the industry expo at CVPR to meet our team. You can also learn more about our open roles by visiting our careers page.

 
