Analysis of the technical routes of autonomous vehicles: the two major camps take very different approaches

Publisher: 清晨微风 | Last updated: 2019-04-25

Whether built by automobile manufacturers or Internet companies, autonomous vehicles follow the same technical framework: environmental information perception and recognition, followed by intelligent decision-making and control. Autonomous driving integrates automatic control, complex systems, artificial intelligence, and machine vision. The system collects Internet of Vehicles data, geographic information, and environmental perception data from the cloud and from on-board sensors, identifies the characteristics of the area the vehicle is driving through, and then performs task planning and vehicle control.
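The perception-decision-control framework described above can be sketched as a minimal loop. All function names, thresholds, and the toy sensor frame below are illustrative assumptions, not part of any real stack:

```python
# Minimal sketch of the perception -> decision -> control framework.
# The confidence threshold, 30 m safety distance, and gain are invented.

def perceive(raw_sensor_frame):
    """Turn raw sensor readings into a list of trusted obstacle detections."""
    return [obj for obj in raw_sensor_frame if obj["confidence"] > 0.5]

def decide(obstacles, cruise_speed_mps):
    """Pick a target speed: slow down if any obstacle is closer than 30 m."""
    if any(o["distance_m"] < 30.0 for o in obstacles):
        return min(cruise_speed_mps, 5.0)
    return cruise_speed_mps

def control(current_speed_mps, target_speed_mps, kp=0.5):
    """Proportional controller: return an acceleration command (m/s^2)."""
    return kp * (target_speed_mps - current_speed_mps)

frame = [{"id": "car", "distance_m": 25.0, "confidence": 0.9},
         {"id": "noise", "distance_m": 2.0, "confidence": 0.1}]
obstacles = perceive(frame)                 # low-confidence "noise" dropped
target = decide(obstacles, cruise_speed_mps=15.0)
accel = control(current_speed_mps=15.0, target_speed_mps=target)
print(obstacles, target, accel)
```

The three stages stay decoupled, which is why the two camps can differ in how each stage is implemented while sharing the same overall loop.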


Figure 1: Basic technical solution for autonomous vehicles

The development of autonomous driving technology has split into two camps: the ADAS and single-vehicle intelligence camp, represented by automobile manufacturers, and the artificial intelligence and networking camp, represented by Internet companies. The ADAS camp starts from existing driver-assistance safety technology and, by coupling it with perception and decision-making control, works gradually toward intelligent autonomous driving. The AI and networking camp relies directly on intelligent computing and network communication to control the car. Beyond this split, there are also differences between and within the camps in system integration and the functions they target.

Overall plan

The artificial intelligence and networking camp, represented by Internet companies such as Google and Baidu, centers on high-precision positioning and uses artificial intelligence algorithms together with sensors such as lidar, cameras, millimeter-wave radar, ultrasonic sensors, and GPS to achieve fully autonomous driving. Its technical core is the construction of high-precision maps and the matching and fusion of perception features from the various sensors. The ADAS and single-vehicle intelligence camp, represented by automobile manufacturers such as General Motors, Volvo, and Tesla and by their parts suppliers Bosch and Mobileye, relies on cameras, millimeter-wave radar, ultrasonic sensors, and similar equipment, focuses on precise perception of the driving environment, and delivers advanced driver-assistance functions under defined constraints. Its core competitiveness is accumulated ADAS technology and extensive commercial deployment experience.

Figure 2 compares the smart-car development paths of automobile manufacturers and Internet companies. In terms of how the technology evolves, automobile manufacturers rely on continuous improvement of ADAS features, aiming first to relieve drivers' workload and improve the driving experience and then, building on mature vehicle-manufacturing experience, to introduce progressively higher levels of autonomy. Internet companies instead rely on artificial intelligence technologies such as deep learning and image understanding, aiming to replace the human driver with a computer and, building on their strengths in the Internet and cloud services, to deliver fully autonomous cars that directly replace traditional ones.


Figure 2: Development paths of autonomous driving technology for automakers and Internet companies

Internet companies tend to target high-level autonomous driving directly. Their core technology, deep learning, uses high-performance processors to run multi-layer neural networks that learn from data. Trained on labeled road-scene data, such networks can detect traffic participants such as cars, pedestrians, signs and lane markings, and non-motorized vehicles accurately and in real time. This approach requires collecting massive amounts of data to continuously train and refine the driving models, improving the system's learning and autonomous decision-making abilities.
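As a toy illustration of the "train on labeled road-scene data" idea, the sketch below fits a single logistic unit, a one-layer stand-in for the multi-layer networks discussed above, to made-up "pedestrian vs. background" feature vectors. The features, labels, and hyperparameters are all invented for illustration; real systems train deep networks on millions of annotated images:

```python
# Gradient descent on log-loss for a single logistic unit (pure Python).
import math

def sigmoid(z):
    if z < -60: return 0.0   # guard against math.exp overflow
    if z > 60: return 1.0
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: (height/width ratio, vertical edge density) -> 1 = pedestrian.
data = [((2.5, 0.8), 1), ((2.2, 0.7), 1), ((0.5, 0.2), 0), ((0.8, 0.3), 0)]

w = [0.0, 0.0]; b = 0.0; lr = 0.5
for _ in range(2000):                      # stochastic gradient descent
    for (x1, x2), y in data:
        p = sigmoid(w[0]*x1 + w[1]*x2 + b)
        g = p - y                          # gradient of log-loss w.r.t. z
        w[0] -= lr * g * x1
        w[1] -= lr * g * x2
        b   -= lr * g

print(round(sigmoid(w[0]*2.4 + w[1]*0.75 + b)))  # pedestrian-like features
print(round(sigmoid(w[0]*0.6 + w[1]*0.25 + b)))  # background-like features
```

The same train-on-labeled-examples loop, scaled up in depth and data volume, is what lets the networks in the article detect traffic participants in real time.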


Figure 3: Recognition of cars, pedestrians, and traffic signs and markings by a deep learning algorithm (Horizon AI vision chip)

Internet companies develop self-driving cars to explore and innovate in artificial intelligence and to stake out the technological frontier. With rich software development experience, deep talent pools, and efficient development and testing processes, they can pursue full autonomy earlier and more aggressively, free of short-term pressure to monetize. They also hold a competitive advantage in artificial intelligence and human-computer interaction services, so their progress may run far ahead of traditional automakers following the ADAS upgrade route.

Automakers generally regard ADAS as a transitional stage toward autonomous driving: by extending and improving ADAS functions, unmanned driving can be reached step by step. Current ADAS can automatically control the vehicle's lateral and longitudinal motion, but these partial, separate functions, designed to assist a human driver, do not yet make the car intelligent. Automakers hope to arrive at full unmanned driving through successive technical innovations while capturing revenue along the way.
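The lateral and longitudinal control functions mentioned above can each be sketched as an independent proportional control law, which is exactly what makes them "partial and separate": neither knows about the other. The gains, time gap, and sign conventions below are illustrative assumptions, not any production calibration:

```python
# Two separate ADAS functions as independent proportional controllers.

def acc_accel(gap_m, ego_speed_mps, time_gap_s=2.0, kp=0.4):
    """Longitudinal (adaptive cruise): track a gap of time_gap_s * ego speed.
    Returns an acceleration command in m/s^2 (negative = brake)."""
    desired_gap = time_gap_s * ego_speed_mps
    return kp * (gap_m - desired_gap)

def lane_keep_steer(lateral_offset_m, kp=0.1):
    """Lateral (lane keeping): steer back toward the lane centre.
    Returns a steering command in rad, opposing the offset."""
    return -kp * lateral_offset_m

print(acc_accel(gap_m=40.0, ego_speed_mps=25.0))  # gap below desired -> brake
print(lane_keep_steer(0.5))                       # drifting right -> steer left
```

Coordinating such functions into a single planner that reasons about the whole scene is the step that separates ADAS from the "smart car" the paragraph describes.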


Figure 4: Automakers’ approaches to autonomous driving

Automakers' advantages lie mainly in their accumulated technology, first-mover position, and ability to use their customer base to rapidly iterate and optimize intelligent systems. Their main business, however, is manufacturing and selling complete vehicles, so their research is driven by delivering a better driving experience and must remain monetizable. These concerns may confine their autonomous driving research to the ADAS field.

In summary, whichever route is taken, both rely on information perception and processing to accurately recognize the driving environment and to build a high-precision environmental map that guides driving. The basic steps of the two routes are the same; they differ in how each step is implemented. Outwardly, for example, self-driving cars typically carry a lidar unit on the roof and a series of sensors around the body.

Environmental Perception

The core of autonomous driving technology is the "environmental perception - decision planning - control execution" loop. As the first link, environmental perception sits at the interface between the vehicle and external information; the goal is to let the vehicle approximate a human driver's perception so that it understands its own state and the surrounding traffic. The quality of this perception therefore directly affects the vehicle's safety and traffic capacity. As shown in Figure 5, an autonomous vehicle integrates sensors such as cameras, lidar, microwave radar, infrared sensors, and ultrasonic radar to cover short, medium, and long ranges at various angles, then fuses the readings and identifies the relevant elements of the environment. Cameras, radars, and positioning and navigation systems together supply the vehicle with a large amount of data about its surroundings and its own status.
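One simple fusion idea implied by the paragraph above is pairing camera detections (strong at classification) with radar ranges (strong at distance) by nearest-range gating. This is only a hedged sketch: the gate width and sample values are made up, and production stacks use Kalman filtering and probabilistic data association rather than this one-shot matching:

```python
# Illustrative camera/radar fusion by nearest-range gating.

def fuse(radar_ranges_m, camera_objs, gate_m=2.0):
    """Pair each camera object (label, rough range) with the closest
    radar range; trust radar for distance when it falls within the gate."""
    fused = []
    for label, cam_range in camera_objs:
        best = min(radar_ranges_m, key=lambda r: abs(r - cam_range))
        if abs(best - cam_range) <= gate_m:
            fused.append((label, best))        # radar confirms and refines
        else:
            fused.append((label, cam_range))   # camera-only detection
    return fused

result = fuse([19.6, 52.3], [("car", 20.5), ("pedestrian", 8.0)])
print(result)
```

Here the car's range is refined by the matching radar return, while the pedestrian, which no radar return corroborates, keeps its camera-estimated range.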


Figure 5: A self-driving car and its sensors

Currently, there are two main technical routes for autonomous driving environment perception: one is a vision-dominated multi-sensor fusion solution; the other is dominated by low-cost lidar.

Vision-dominated environmental perception fuses multiple cameras with millimeter-wave radar, ultrasonic radar, and low-cost lidar. Under current technical conditions, camera imaging is strongly affected by ambient light, and AI-based target detection and localization is still of limited reliability, but the sensors themselves are cheap.

Since the fatal accident in Florida in May 2016, when a Tesla in Autopilot mode collided with a truck, Tesla has replaced its visual perception and recognition stack, moving from Mobileye's monocular vision technology to the Tesla Vision software system running on the Nvidia Drive PX 2 computing platform and replacing traditional machine-learning video recognition with deep learning algorithms; this mirrors the approach taken by Internet companies. By the end of 2018, Tesla had accumulated nearly 2 billion kilometers of driving under Autopilot. The company estimates the system's current reliability at 98%, but 99.999% is needed to meet the required safety level.
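Taking the quoted figures at face value (the article does not define what unit the percentages are measured per), the distance between 98% and 99.999% reliability can be made concrete:

```python
# How much the failure rate must drop to go from 98% to 99.999% reliability.
current = 0.98
required = 0.99999
ratio = (1 - current) / (1 - required)
print(ratio)   # the failure rate must fall by roughly a factor of 2000
```

A 1.999-percentage-point gap in reliability thus corresponds to about three orders of magnitude in the failure rate, which is why the last step toward the safety level is by far the hardest.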

LiDAR-dominated perception combines LiDAR with millimeter-wave radar, ultrasonic sensors, and cameras. LiDAR's active laser ranging produces a point cloud describing the distribution of surrounding obstacles; it detects and localizes targets reliably, but the data lacks the color and texture of the environment and the sensor is costly.
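A minimal sketch of what "active laser ranging" buys you: extracting an obstacle range directly from a point cloud, assuming each return is an (x, y, z) point in metres in the vehicle frame. The ground-filter threshold and the sample returns are invented for illustration:

```python
# Nearest-obstacle range from a toy laser point cloud.
import math

cloud = [(5.0, 0.2, 0.3),    # return from an object ahead
         (12.0, -3.0, 0.5),  # return from a farther object to the side
         (0.1, 0.0, -1.6)]   # ground return, should be ignored

def nearest_obstacle_m(points, min_height_m=-1.0):
    """Drop returns below min_height_m (assumed ground) and report the
    closest horizontal range to any remaining point."""
    ranges = [math.hypot(x, y) for x, y, z in points if z > min_height_m]
    return min(ranges)

print(round(nearest_obstacle_m(cloud), 2))
```

Note the geometry comes for free, with no learned model in the loop, which is the reliability advantage the paragraph describes; what the points cannot tell you is what the obstacle is, hence the fusion with cameras.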

In the future, LiDAR-led solutions will push toward commercialization in two directions: developing hardware modules that combine cameras with LiDAR to directly produce colored laser point clouds, and cutting LiDAR hardware costs, for example through solid-state LiDAR, to reach true mass production.

