The process of realizing autonomous driving

Publisher: 学海星空 · Last updated: 2022-07-25 · Source: elecfans

The process of achieving autonomous driving can be described simply as a pipeline from perception to decision-making to execution. Perception is the collection of information about the vehicle itself and the outside world through various sensors. Decision-making is the stage in which the car's computing unit analyzes that information with specific algorithms and chooses an action suited to the current situation. Execution then carries out that action through the vehicle's throttle, brakes, and steering.
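The three stages above can be sketched as a minimal control loop. Everything here is a hypothetical illustration of the structure, not a real stack: the sensor frame, the 30 m safe gap, and the throttle values are invented for the example, and production systems run these stages on dedicated middleware.

```python
# A minimal sketch of the perceive -> decide -> execute loop
# (hypothetical interfaces and thresholds, for illustration only).

def perceive(sensor_frame):
    """Turn raw sensor readings into a simple world model."""
    # Here we just pass through a detected obstacle distance (meters).
    return {"obstacle_distance_m": sensor_frame["lidar_range_m"]}

def decide(world_model, safe_gap_m=30.0):
    """Pick a high-level action from the perceived world model."""
    if world_model["obstacle_distance_m"] < safe_gap_m:
        return "brake"
    return "cruise"

def execute(action):
    """Map the decision to an actuator command (throttle in [-1, 1])."""
    return {"brake": -0.8, "cruise": 0.2}[action]

frame = {"lidar_range_m": 12.5}
command = execute(decide(perceive(frame)))
print(command)  # -0.8: obstacle closer than the safe gap, so brake
```

In a real vehicle this loop runs continuously at a fixed rate, and each stage is far richer, but the data flow between the stages is the same.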


Algorithms are extremely important throughout this process. As one of the major application scenarios for artificial intelligence, autonomous driving cannot be realized without the large-scale deployment of algorithms, from feature extraction in the perception stage to neural-network-based decision-making. All of these require algorithmic improvements to raise the accuracy of obstacle detection and the quality of decisions in complex scenarios.


AI algorithms are the most critical element supporting autonomous driving technology, and mainstream autonomous driving companies rely on machine learning and other AI methods to realize it. Massive data is the foundation of these algorithms: using data from the sensors, V2X infrastructure, and high-precision maps mentioned above, together with collected driving behavior, driving experience, driving rules, case data, and information about the surrounding environment, a continuously optimized algorithm can recognize the scene, plan a route, and ultimately control the vehicle.


From a technical perspective, autonomous driving algorithms can be divided into perception, fusion, decision, and execution algorithms. Perception algorithms convert sensor data into a machine-readable description of the scene the vehicle is in, covering object detection, recognition and tracking, 3D environment modeling, object motion estimation, and so on.
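Object motion estimation, one of the perception tasks listed above, can be illustrated with a toy constant-velocity model: given two timestamped positions of a tracked object, estimate its velocity and extrapolate its future position. The positions and timing are invented for the example; real perception stacks run Kalman-style filters over detections produced by neural networks.

```python
# Toy object motion estimation from two consecutive track positions
# (illustrative constant-velocity model, not a production tracker).

def estimate_velocity(p0, p1, dt):
    """Velocity (m/s) assuming constant velocity between two positions."""
    return tuple((b - a) / dt for a, b in zip(p0, p1))

def predict(position, velocity, dt):
    """Extrapolate the track forward by dt seconds."""
    return tuple(p + v * dt for p, v in zip(position, velocity))

# Object observed at (10 m, 2 m), then at (11 m, 2 m) 0.1 s later.
v = estimate_velocity((10.0, 2.0), (11.0, 2.0), 0.1)  # (10.0, 0.0) m/s
print(predict((11.0, 2.0), v, 0.5))                   # (16.0, 2.0)
```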


The core task of the fusion algorithm is to unify data from different sensors, such as images and point clouds, into a common representation. As L2+ autonomous driving raises the accuracy requirements for multi-sensor fusion, fusion is gradually moving forward in the pipeline (front fusion): from back-end components such as the domain controller toward the sensor level, so that fusion is completed inside or near the sensor and data-processing efficiency improves.
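The simplest form of (late) fusion can be sketched as a confidence-weighted combination of per-sensor estimates of the same quantity. The sensors, ranges, and weights below are invented for illustration; production fusion runs Kalman or Bayesian filters over full object lists or raw point clouds.

```python
# Minimal late-fusion sketch: combine per-sensor range estimates for the
# same object by confidence weighting (hypothetical numbers).

def fuse_range(estimates):
    """Confidence-weighted average of (range_m, weight) estimates."""
    total_w = sum(w for _, w in estimates)
    return sum(r * w for r, w in estimates) / total_w

camera = (42.0, 0.3)  # camera range is noisy at distance: low weight
radar = (40.0, 0.7)   # radar measures range directly: high weight
print(fuse_range([camera, radar]))  # ~40.6 m, pulled toward the radar
```

Front fusion, by contrast, would combine the raw image and point-cloud data before any per-sensor object estimate exists, which is what motivates moving the computation toward the sensor.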


The decision algorithm issues the final behavioral instructions based on the output of the perception algorithm, including behavioral decisions such as following, stopping, and overtaking, as well as action-level decisions such as steering, speed control, and path planning.
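The behavioral layer described above can be sketched as a small rule set that chooses among follow, stop, and overtake. The inputs and thresholds are invented for the example; real planners use state machines or learned policies over many more inputs.

```python
# Rule-based sketch of a behavioral decision: follow / stop / overtake
# (hypothetical thresholds, for illustration only).

def behavior(gap_m, lead_speed_mps, ego_speed_mps, adjacent_lane_free):
    if gap_m < 5.0:                 # lead vehicle dangerously close
        return "stop"
    if lead_speed_mps < ego_speed_mps and adjacent_lane_free:
        return "overtake"           # slower lead car and a free lane
    return "follow"

print(behavior(gap_m=40.0, lead_speed_mps=20.0,
               ego_speed_mps=28.0, adjacent_lane_free=True))   # overtake
print(behavior(gap_m=3.0, lead_speed_mps=0.0,
               ego_speed_mps=5.0, adjacent_lane_free=False))   # stop
```

The behavioral decision then feeds the action layer, where path planning and speed control turn "overtake" into an actual trajectory.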


Autonomous driving is divided into levels L0-L5 according to the degree of automation. L1-L2 mainly provide driver-assistance functions, and at L3 the driver must still be ready to take over when requested; from L4 onward, vehicle control can essentially be handed over to the artificial intelligence system.


Different levels implement different functions and therefore require different algorithms. For example, L1 features such as ACC adaptive cruise control, LKA lane keeping assist, AEB automatic emergency braking, and BSM blind-spot monitoring rely respectively on the ACC control algorithm, the LDW lane departure warning and LKA lane keeping assist algorithms, the AEB braking algorithm, and the BSM monitoring algorithm.
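To make one of these concrete, an ACC-style control law can be sketched as proportional control that regulates the gap to the car ahead toward a time-headway target. The gain, headway, and comfort limits below are illustrative assumptions, not values from any real system.

```python
# Toy ACC (adaptive cruise control) law: regulate the gap toward a
# time-headway target with a proportional gain (illustrative values).

def acc_accel(gap_m, ego_speed_mps, headway_s=1.8, kp=0.4,
              a_min=-3.0, a_max=2.0):
    """Commanded acceleration (m/s^2), clipped to comfort limits."""
    desired_gap = headway_s * ego_speed_mps  # larger gap at higher speed
    error = gap_m - desired_gap              # positive: gap too large
    return max(a_min, min(a_max, kp * error))

print(acc_accel(gap_m=20.0, ego_speed_mps=25.0))  # -3.0: gap far too small
print(acc_accel(gap_m=50.0, ego_speed_mps=25.0))  # 2.0: gap ample, speed up
```

Production ACC adds a term for the relative speed of the lead vehicle and blends with the driver-set cruise speed, but the gap-regulation idea is the same.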


At L3 and above, algorithms such as TJP (Traffic Jam Pilot), HWP (Highway Pilot), urban-road automated driving, highway automated driving, and AVP (Automated Valet Parking) are required. L5 requires the full range of automated driving algorithms to realize the corresponding functions.


Different manufacturers differ in their ability to supply algorithms. Traditional Tier 1 suppliers such as Bosch, Continental, and Desay SV, along with some software algorithm vendors, can provide algorithms for individual functional modules, which suit L1-L2 assisted driving well. Algorithm solution providers such as Momenta, MINIEYE, UISEE, and ZongMu Technology can provide complete ADAS or autonomous driving solutions.

