An in-depth analysis of the overall system safety design of Baidu's L4 autonomous driving

Publisher: WanderlustSoul · Last updated: 2023-01-04 · Source: 谈思实验室

"Safety first" is the core concept and value of autonomous driving. The overall system safety design of an autonomous vehicle is a complex systems-engineering effort, involving the core algorithm and strategy design of the in-vehicle autonomous driving system, redundant hardware and software safety design, remote cloud driving technology, full-process test and verification technology, and more, and it follows the requirements and design considerations of functional safety (ISO 26262) and safety of the intended functionality (SOTIF, ISO/PAS 21448). Let's review Baidu's L4 autonomous driving safety practice, which is organized as a three-layer safety system: main-system safety, a redundant safety system, and a remote cloud driving system.


Figure 1 Baidu L4 overall system safety design approach


Autonomous driving main system safety


The main system ensures the safety of driving strategies and behaviors through the core algorithm layer of the in-vehicle autonomous driving system; this can also be called "strategy safety". It uses advanced, reliable perception and localization, prediction, decision-making, planning, and control algorithms to handle the wide variety of scenarios encountered on the road, and in particular it must guarantee safe driving strategies and behaviors in difficult scenarios.


The safety of the autonomous driving main system is a combined software and hardware safety design. Software algorithms are the core of the entire autonomous driving system. A typical L4 autonomous driving algorithm architecture mainly includes the on-board operating system, environmental perception, high-precision maps and localization, prediction, decision-making and planning, and control and execution modules.


Operating system


The base operating system is the foundational software that runs on the autonomous vehicle to manage, schedule, and control its software and hardware resources. Its main task is to provide the autonomous driving system with real-time task scheduling, resource isolation for real-time computing tasks, real-time message communication, system-level access control, and other capabilities; to manage system resources effectively and improve their utilization; to shield the algorithm modules from the physical characteristics and operational details of the underlying hardware and software; and to host core autonomous driving components such as perception, localization, planning, decision-making, and control. The operating system offers high stability, real-time performance, and low latency, responding on the order of 250 ms faster than a human driver.
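As a rough illustration of the real-time task scheduling such an OS layer provides, here is a minimal priority-dispatch sketch. The task names, priorities, and single-threaded dispatch loop are hypothetical simplifications, not Baidu's actual scheduler.

```python
import heapq

class RealTimeScheduler:
    """Toy priority scheduler: lower priority number dispatches first."""

    def __init__(self):
        self._queue = []   # min-heap of (priority, seq, name, task)
        self._seq = 0      # tie-breaker preserving submission order

    def submit(self, priority, name, task):
        heapq.heappush(self._queue, (priority, self._seq, name, task))
        self._seq += 1

    def run_once(self):
        """Dispatch the single most urgent pending task."""
        if not self._queue:
            return None
        _, _, name, task = heapq.heappop(self._queue)
        task()
        return name

sched = RealTimeScheduler()
log = []
sched.submit(2, "planning", lambda: log.append("plan"))
sched.submit(0, "perception", lambda: log.append("perceive"))
sched.submit(1, "localization", lambda: log.append("localize"))
order = [sched.run_once() for _ in range(3)]
print(order)  # the most urgent task (perception) runs first
```

A production vehicle OS enforces this with preemptive real-time scheduling and hardware resource isolation rather than a cooperative loop, but the ordering guarantee is the same idea.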


Environmental perception system


Environmental perception is a prerequisite for autonomous driving. The perception system combines the strengths of multiple sensors such as lidar, millimeter-wave radar, and cameras to achieve a 360-degree view around the vehicle body, stably detecting and tracking the behavior, speed, heading, and other attributes of traffic participants in complex, changing traffic environments, and providing scene-understanding information to the decision-making and planning module.


The perception algorithm adopts a multi-sensor fusion framework and can detect obstacles up to 280 meters away. Built on deep neural networks and massive autonomous driving data, it accurately identifies obstacle types and stably tracks obstacle behavior, providing dependable perception for the downstream decision-making modules. Because the heterogeneous sensing channels back each other up, the fusion-based perception system forms redundancy, giving the autonomous driving system high fault tolerance and thereby improving safety.
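One simple way heterogeneous channels can corroborate each other is late fusion with cross-sensor voting. The sketch below is a hypothetical illustration (the association radius, 2D positions, and two-vote rule are assumptions, not the production fusion framework): an obstacle is confirmed only if at least two sensors report it nearby.

```python
import math

def fuse_detections(channels, radius=1.5):
    """channels: dict sensor_name -> list of (x, y) detections.
    Returns fused (x, y) positions confirmed by at least 2 sensors."""
    fused = []
    names = list(channels)
    for i, a in enumerate(names):
        for pa in channels[a]:
            votes = 1
            for b in names[i + 1:]:
                if any(math.dist(pa, pb) <= radius for pb in channels[b]):
                    votes += 1
            # keep multi-sensor confirmations, de-duplicating nearby hits
            if votes >= 2 and not any(math.dist(pa, f) <= radius for f in fused):
                fused.append(pa)
    return fused

obs = fuse_detections({
    "lidar":  [(10.0, 0.2), (40.0, 3.0)],   # second hit is unconfirmed noise
    "radar":  [(10.3, 0.0)],
    "camera": [(10.1, 0.1), (80.0, -2.0)],
})
print(obs)  # only the obstacle near (10, 0) survives the vote
```

Real systems fuse at the feature or track level with calibrated uncertainties, but the fault-tolerance argument is the same: no single failed channel can inject or erase an obstacle on its own.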


In addition, the perception algorithm supports scene expansion through capabilities such as water-mist noise recognition, low-obstacle detection, and detection of irregular traffic lights and signs. For traffic-light recognition, the light color and countdown detected by the vehicle's own sensors can be cross-verified against the prior information provided by high-precision maps, and the ability to recognize temporary traffic lights is strengthened to ensure reliability and safety.
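The cross-verification logic can be sketched as a small decision rule. The field names, confidence threshold, and fallback states below are hypothetical; they only illustrate how a map prior and a live detection might be reconciled.

```python
def verify_light(detection, map_prior, min_conf=0.6):
    """detection: {'color': str, 'confidence': float} or None.
    map_prior: {'has_light': bool} from the HD map.
    Returns the color to act on, falling back to caution on disagreement."""
    if not map_prior.get("has_light"):
        # Map shows no light here: could be a temporary light, so trust
        # only a confident detection; otherwise treat as no light.
        if detection and detection["confidence"] >= min_conf:
            return detection["color"]
        return "none"
    if detection is None or detection["confidence"] < min_conf:
        # Map expects a light we cannot confidently see: be cautious.
        return "unknown"
    return detection["color"]

a = verify_light({"color": "green", "confidence": 0.9}, {"has_light": True})
b = verify_light({"color": "green", "confidence": 0.3}, {"has_light": True})
c = verify_light({"color": "red", "confidence": 0.8}, {"has_light": False})
print(a, b, c)  # confident match passes; weak detection degrades to caution
```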


High-precision maps and positioning

High-precision maps and high-precision positioning provide autonomous vehicles with road information in advance, precise vehicle position, and rich road-element data, emphasizing spatial three-dimensional modeling and accuracy and representing every feature and condition on the road with high fidelity. Mapping and localization adopt a multi-sensor fusion of lidar, vision, RTK, and IMU; with this fusion, positioning accuracy reaches 5-10 cm, meeting the needs of L4 autonomous driving.
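The statistical core behind fusing RTK, lidar, vision, and IMU estimates is variance-weighted combination; a fused estimate is tighter than any single source. This one-dimensional Gaussian-fusion sketch shows the principle only (real systems run a full Kalman or factor-graph filter; the example variances are assumptions).

```python
def fuse(mu_a, var_a, mu_b, var_b):
    """Fuse two independent Gaussian position estimates.
    The result has smaller variance than either input."""
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    mu = var * (mu_a / var_a + mu_b / var_b)
    return mu, var

# RTK: x = 100.00 m (var 0.01); lidar map matching: x = 100.08 m (var 0.04)
mu, var = fuse(100.00, 0.01, 100.08, 0.04)
print(round(mu, 3), round(var, 3))  # fused variance drops below both inputs
```

The fused answer leans toward the more certain sensor (here RTK), which is why adding even a noisy extra channel never worsens the estimate under this model.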


Predictive decision-making and planning control


The predictive decision-making and planning control module is, in effect, the brain of the self-driving car. Prediction, decision-making, and planning are the core software algorithm modules and directly determine how well the vehicle drives autonomously. Based on traffic-safety regulations and consensus driving rules, this module plans safe, efficient, and comfortable paths and trajectories for the vehicle. To improve the algorithm's generalization, data mining and deep learning are applied to realize intelligent planning of driving behavior.


Given the departure point and destination, the system generates an optimal global route. In real time, the vehicle receives the environment and obstacle information provided by the perception module, combines it with the high-precision map, and tracks and predicts the behavioral intentions and future trajectories of surrounding vehicles, pedestrians, cyclists, and other obstacles. Weighing safety, comfort, and efficiency, it generates driving-behavior decisions (following, lane changing, stopping, etc.), plans the vehicle's motion (speed, trajectory, etc.) in accordance with traffic rules and courteous driving etiquette, and finally outputs the result to the control module, which executes acceleration, deceleration, and steering. The vehicle-control layer is the lowest layer and communicates directly with the chassis, transmitting the target position and speed through electrical signals to operate the throttle, brakes, and steering wheel.
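The behavior-decision step described above can be caricatured as a rule over perceived gaps. Everything in this sketch is an illustrative assumption (the 2-second headway, the three behaviors, the inputs); a real planner optimizes over predicted trajectories rather than thresholds.

```python
def decide(lead_gap_m, lead_speed_mps, ego_speed_mps, adjacent_lane_clear):
    """Return 'FOLLOW', 'CHANGE_LANE', or 'STOP' for one planning cycle."""
    safe_gap = max(2.0 * ego_speed_mps, 10.0)   # ~2 s time headway, 10 m floor
    if lead_gap_m >= safe_gap:
        return "FOLLOW"
    if adjacent_lane_clear:
        return "CHANGE_LANE"
    if lead_speed_mps < 0.5:
        return "STOP"          # stationary leader and no escape lane
    return "FOLLOW"            # slow down and keep following

d1 = decide(60.0, 15.0, 15.0, False)  # ample gap
d2 = decide(12.0, 14.0, 15.0, True)   # gap too short, adjacent lane free
d3 = decide(8.0, 0.0, 10.0, False)    # stalled leader, boxed in
print(d1, d2, d3)
```

The chosen behavior is then handed to trajectory planning, which turns it into the speed and path profile the control layer tracks.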


The goal of autonomous driving is to cope with complex urban traffic scenarios and keep the vehicle in a safe driving state under any road conditions. At the software algorithm layer, deep learning models trained on massive test data ensure safe, efficient, and smooth driving in regular scenarios; at the safety algorithm layer, a series of safe driving strategies is designed for typical dangerous scenarios to ensure the vehicle behaves safely in every scenario. For example, in extreme situations such as bad weather or blocked sight lines, defensive driving strategies are triggered, reducing risk by slowing down and observing more.
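A defensive "slow down and observe" trigger might look like a speed cap scaled by visibility and weather. The factors and thresholds below are entirely hypothetical; they only illustrate the shape of such a strategy.

```python
def defensive_speed_limit(base_limit_mps, visibility_m, rain_intensity):
    """Scale the target speed down when sight lines or weather degrade.
    rain_intensity in [0, 1]; visibility in meters."""
    vis_factor = min(1.0, visibility_m / 150.0)         # full speed needs >= 150 m
    rain_factor = 1.0 - 0.5 * min(1.0, rain_intensity)  # heavy rain halves speed
    return base_limit_mps * vis_factor * rain_factor

v1 = defensive_speed_limit(16.7, 200.0, 0.0)  # clear day: unchanged
v2 = defensive_speed_limit(16.7, 75.0, 0.0)   # half visibility: half speed
v3 = defensive_speed_limit(16.7, 75.0, 1.0)   # plus heavy rain: quarter speed
print(round(v1, 2), round(v2, 2), round(v3, 2))
```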


Autonomous vehicles abide strictly by traffic rules and road right-of-way. When meeting other traffic participants at intersections, or when other vehicles contest the right of way even though the autonomous vehicle has priority, the vehicle will consider slowing down and yielding on the safety-first principle to avoid risk. In high-risk situations such as a "ghost probe" (a pedestrian suddenly darting out from behind an obstruction), it adheres to safety first and applies an emergency-braking strategy to avoid injury as far as possible. With the accumulation of road-test data and a large volume of extreme-scenario data, the core algorithm continues to evolve through data-driven deep learning models, becoming an "experienced driver" that anticipates early and drives safely and cautiously.
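The feasibility check behind an emergency-braking decision follows from basic kinematics: stopping from speed v within distance d requires deceleration a >= v^2 / (2d). The 8 m/s^2 capability figure and the two-way decision below are illustrative assumptions, not Baidu's actual safety logic.

```python
MAX_DECEL = 8.0  # m/s^2, assumed braking capability on dry pavement

def required_decel(speed_mps, gap_m):
    """Deceleration needed to stop within gap_m, from v^2 = 2*a*d."""
    return speed_mps ** 2 / (2.0 * gap_m)

def aeb_decision(speed_mps, gap_m):
    need = required_decel(speed_mps, gap_m)
    if need <= MAX_DECEL:
        return "EMERGENCY_BRAKE"   # a full stop is physically achievable
    return "BRAKE_AND_STEER"       # braking alone cannot stop in time

r = required_decel(10.0, 12.5)     # 36 km/h with 12.5 m of clear road
print(r, aeb_decision(10.0, 12.5))
print(aeb_decision(20.0, 10.0))    # stopping would need 20 m/s^2: mitigate
```

This is also why the defensive strategies above reduce speed pre-emptively: halving v quarters the stopping distance, keeping more intrusions inside the brakeable envelope.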


Vehicle-road collaboration


Vehicle-road cooperative autonomous driving builds on single-vehicle intelligent autonomous driving and organically connects the "people-vehicle-road-cloud" traffic elements through the Internet of Vehicles, realizing dynamic, real-time information exchange and sharing between vehicles, between vehicles and roads, and between vehicles and people to ensure traffic safety. Through information interaction and collaboration, cooperative sensing, and cooperative decision-making and control, vehicle-road cooperation can greatly extend a single vehicle's perception range, improve its perception capability, and introduce new intelligent elements represented by high-dimensional data to achieve group intelligence. This helps resolve the technical bottlenecks of single-vehicle intelligent autonomous driving, improves driving capability, and thereby ensures safety and expands the Operational Design Domain (ODD) of autonomous driving.


For example, vehicle-road cooperation can mitigate single-vehicle intelligence's vulnerability to environmental conditions such as occlusion and bad weather by enabling cooperative perception of dynamic and static blind spots. Single-vehicle perception is limited by sensor viewing angles: when blocked by static obstacles or dynamic obstacles (such as large vehicles), the AV cannot accurately observe vehicles or pedestrians in the blind spot. By deploying multiple roadside sensors, vehicle-road cooperation achieves multi-directional, long-range continuous detection and recognition; fusing this with the AV's own perception lets the autonomous vehicle accurately perceive vehicles and pedestrians in blind spots and make predictive judgments and control decisions in advance, reducing the risk of accidents.
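The blind-spot fusion step can be sketched as a de-duplicated union of ego and roadside detections. This assumes both sources already report positions in a shared coordinate frame; the association radius and data shapes are hypothetical.

```python
import math

def merge_perception(ego_objects, roadside_objects, radius=2.0):
    """Union ego and roadside (x, y) detections, dropping duplicates
    that both sources see within `radius` meters of each other."""
    merged = list(ego_objects)
    for obj in roadside_objects:
        if not any(math.dist(obj, seen) <= radius for seen in merged):
            merged.append(obj)   # roadside reveals an object the AV cannot see
    return merged

ego = [(5.0, 0.0)]                       # one car visible ahead of the AV
roadside = [(5.2, 0.1), (18.0, -3.5)]    # same car, plus a pedestrian
                                         # occluded behind a truck
world = merge_perception(ego, roadside)
print(world)  # the occluded pedestrian enters the AV's world model
```

In practice the hard parts are time synchronization, coordinate alignment, and track-level association with uncertainty, but the payoff is exactly this: obstacles outside the AV's line of sight become available to prediction and planning.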
