Analysis of key components and technical trends of autonomous driving simulation platform

Publisher: Changsheng520 | Last updated: 2023-01-13 | Source: elecfans

1. Key components of the autonomous driving simulation platform

The autonomous driving simulation platform needs to support vehicle dynamics simulation, environmental perception sensor simulation, and traffic scene simulation, among other capabilities.


Vehicle dynamics simulation: based on a multi-body dynamics model, real components such as the vehicle body, steering, suspension, tires, brakes, and I/O hardware interfaces are modeled parametrically to simulate the vehicle's posture and kinematics during motion. Although this is the more traditional part of the platform, it is an indispensable foundation for building an autonomous driving simulation test system.


Environmental perception sensor simulation: mainly covers the modeling and simulation of sensors such as cameras, lidar, millimeter-wave radar, and GPS/IMU. It is a key technology and an important link in building an autonomous driving simulation system.

Traffic scene simulation: includes two parts, static scene restoration and dynamic scene simulation. Static scene restoration is achieved mainly through high-precision maps and 3D modeling; dynamic scene simulation can be built either by extracting real road data with algorithms and combining it with existing high-precision maps, or by automatically generating complex traffic environments from randomly generated traffic flows that follow statistical distributions, together with manually configured parameters. Traffic scene simulation is an important guarantee for building an autonomous driving simulation system.

Since vehicle dynamics simulation is relatively traditional and mature, it is not covered in detail here; the following sections focus on sensor simulation and traffic scene simulation.

1.1 Sensor Simulation

1.1.1 Three Levels of Sensor Simulation

Environmental perception sensor simulation is a crucial part of simulation testing for autonomous driving systems. It mainly includes camera simulation, lidar simulation, millimeter-wave radar simulation, and positioning simulation (GPS/IMU). By difficulty, sensor simulation can be divided into three levels: physical signal simulation, raw signal simulation, and target-level signal simulation.

1) Physical signal simulation: directly simulates the signal received by the sensor. For a camera the physical signal is light; for millimeter-wave radar and ultrasonic radar it is the electromagnetic wave and the sound wave, respectively.

2) Raw signal simulation: bypasses the sensor's detection unit and directly feeds simulated data into the input of the digital processing stage. For cameras this is done by video injection; for millimeter-wave radar, by injecting the signal directly into the FPGA/DSP signal processing module or a PC-based signal processing program; for lidar, by injecting point cloud data.

3) Target-level signal simulation: directly simulates the ideal object list a sensor would output and feeds it to the decision layer. This signal is generally delivered as a CAN bus message or in another communication protocol format; for cameras, lidars, and millimeter-wave radars it is typically injected over the CAN bus.
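As a rough illustration of these three levels, the Python sketch below contrasts them and shows what a single target-level object record might contain before it is packed into a CAN frame or another bus message; all names and fields are hypothetical, not taken from any specific sensor or tool.

from dataclasses import dataclass
from enum import Enum


class InjectionLevel(Enum):
    """The three levels at which simulated sensor data can be injected."""
    PHYSICAL = "physical signal (light, electromagnetic wave, sound wave)"
    RAW = "raw signal (video stream, point cloud, IF signal)"
    TARGET = "target-level object list (CAN or other bus message)"


@dataclass
class TargetObject:
    """One ideal detection as a sensor might report it at the target level."""
    object_id: int
    longitudinal_distance_m: float  # distance ahead of the ego vehicle
    lateral_offset_m: float         # offset from the ego lane center
    relative_speed_mps: float       # negative when the object is approaching
    object_class: str               # e.g. "car", "pedestrian", "cyclist"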


So how is physical signal simulation achieved for mainstream sensors? For cameras, through a video dark box; for millimeter-wave radar, through a millimeter-wave radar simulator; for ultrasonic radar, through an ultrasonic radar simulation box. For lidar, however, there is currently no effective solution for physical signal simulation.


1.1.2 Basic Ideas of Sensor Simulation

1) LiDAR simulation: following the scanning pattern of a real LiDAR, every emitted and received ray is simulated, and each emitted ray is intersected with all objects in the scene.

The reflection intensity of a LiDAR return depends on the distance to the obstacle, the laser's angle of incidence, and the physical material of the obstacle itself. In addition, a LiDAR has a large detection range, the emitted beams are very dense, and the environment contains multiple reflections and occlusions, so computing the returned beams is relatively complicated, and it is difficult to simulate the LiDAR echo realistically. Most existing LiDAR models compute the echo signal directly from the laser reflectivity of each material, and such calculations inevitably deviate to some extent from the actual echo.
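A minimal sketch of this ray-casting idea is shown below in Python. It assumes a generic cast_ray(origin, direction) callback supplied by the scene engine (a placeholder, not any specific API) and uses a simplified intensity model: per-material reflectivity times an incidence-angle factor, attenuated by 1/R^2.

# Hypothetical per-material laser reflectivity table (values are illustrative).
MATERIAL_REFLECTIVITY = {"asphalt": 0.17, "vehicle_paint": 0.55, "retroreflector": 0.95}


def simulate_return(origin, direction, cast_ray, max_range_m=200.0):
    """Trace one LiDAR beam and return (range, intensity), or None if no echo.

    `cast_ray(origin, direction)` is assumed to be provided by the scene or
    rendering engine and to return (hit_distance, surface_normal, material),
    or None if nothing is hit.
    """
    hit = cast_ray(origin, direction)
    if hit is None:
        return None
    distance, normal, material = hit
    if distance > max_range_m:
        return None

    # Incidence-angle factor: grazing hits return less energy.
    cos_incidence = abs(sum(d * n for d, n in zip(direction, normal)))

    # 1/R^2 range attenuation times material reflectivity (simplified model;
    # real echoes also depend on beam divergence, multiple returns, weather, etc.).
    reflectivity = MATERIAL_REFLECTIVITY.get(material, 0.3)
    intensity = reflectivity * cos_incidence / (distance ** 2)
    return distance, intensity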

In addition, LiDAR simulation needs to support parameter configuration for installation position and angle, operating frequency, maximum detection range, number of lines, horizontal resolution, and vertical and horizontal field of view.
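These parameters map naturally onto a configuration structure; the sketch below shows one possible layout, with assumed field names and default values rather than the schema of any particular simulator.

from dataclasses import dataclass


@dataclass
class LidarConfig:
    """Illustrative LiDAR simulation parameters (names and defaults are assumptions)."""
    mount_position_xyz_m: tuple = (1.5, 0.0, 1.8)    # relative to the vehicle origin
    mount_rotation_rpy_deg: tuple = (0.0, 0.0, 0.0)  # roll, pitch, yaw
    rotation_frequency_hz: float = 10.0
    max_range_m: float = 200.0
    channels: int = 64                 # number of lines
    horizontal_resolution_deg: float = 0.2
    vertical_fov_deg: float = 26.9
    horizontal_fov_deg: float = 360.0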

2) Camera simulation: generate realistic images based on the geometric space of objects in the environment, then use computer graphics to add color and optical properties to the 3D models according to the real materials and textures of the objects, synthesizing the simulated image.

Color and optical properties are generally handled by a physically based rendering engine. For example, Tencent TAD Sim, CARLA (from the Autonomous University of Barcelona), and Microsoft AirSim use Unreal Engine (UE), while the Baidu Apollo simulation platform and LG's LGSVL Simulator use the Unity engine.

Camera simulation needs to support the structural and optical characteristics of the camera lens, such as focal length, distortion, brightness adjustment, and color space; it should allow adjustment of intrinsic/extrinsic and distortion parameters, such as camera installation position, resolution, operating frequency, field of view, and distortion coefficients; and it must be able to simulate complex weather such as heavy snow, heavy rain, and dense fog, as well as lighting conditions at different times of day and in different weather.
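As a concrete example of the intrinsic and distortion parameters involved, the sketch below projects a 3D point through a pinhole model with two radial distortion coefficients; the parameter values are purely illustrative.

def project_point(point_cam, fx, fy, cx, cy, k1=0.0, k2=0.0):
    """Project a 3D point (camera frame, meters) to pixel coordinates
    using a pinhole model with two radial distortion coefficients."""
    x, y, z = point_cam
    if z <= 0:
        return None                     # behind the image plane
    xn, yn = x / z, y / z               # normalized image coordinates
    r2 = xn * xn + yn * yn
    d = 1.0 + k1 * r2 + k2 * r2 * r2    # radial distortion factor
    u = fx * xn * d + cx
    v = fy * yn * d + cy
    return u, v


# Example: a point 10 m ahead and 1 m to the left, with illustrative intrinsics.
print(project_point((-1.0, 0.0, 10.0), fx=1000, fy=1000, cx=960, cy=540, k1=-0.1))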

3) Millimeter-wave radar simulation: according to the field of view and resolution of the radar configured on the test vehicle, a series of virtual frequency-modulated continuous waves (FMCW) are emitted in different directions and the signals reflected by targets are received. Because of multipath reflection, interference, the properties of reflecting surfaces, discrete scattering units, and attenuation, the reflected signal is difficult to simulate.
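For context, a linear FMCW radar recovers target range from the beat frequency between the transmitted and received chirps, and radial speed from the Doppler shift; the sketch below applies the standard relations R = c * f_b / (2 * S) and v = c * f_d / (2 * f_c) with illustrative chirp parameters.

C = 299_792_458.0  # speed of light, m/s


def range_from_beat(beat_frequency_hz, chirp_slope_hz_per_s):
    """Target range from the FMCW beat frequency: R = c * f_b / (2 * S)."""
    return C * beat_frequency_hz / (2.0 * chirp_slope_hz_per_s)


def radial_speed_from_doppler(doppler_shift_hz, carrier_frequency_hz):
    """Radial velocity from the Doppler shift: v = c * f_d / (2 * f_c)."""
    return C * doppler_shift_hz / (2.0 * carrier_frequency_hz)


# Illustrative 77 GHz radar chirp: 4 GHz sweep in 40 microseconds.
slope = 4e9 / 40e-6
print(range_from_beat(2.0e6, slope))           # about 3 m
print(radial_speed_from_doppler(5.1e3, 77e9))  # about 10 m/s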

Millimeter-wave radar simulation needs to support adjustment of parameters such as installation position and angle, detection range, detection angle, and angle and range resolution. In addition, for radars that offer both long-range and medium-range detection modes, the simulation must support parameter settings for both.
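A configuration sketch for such a dual-mode radar might look like the following; the field names and numbers are assumptions for illustration only.

from dataclasses import dataclass


@dataclass(frozen=True)
class RadarMode:
    """One detection mode of the radar."""
    max_range_m: float
    azimuth_fov_deg: float
    range_resolution_m: float
    angle_resolution_deg: float


@dataclass
class RadarConfig:
    """Illustrative millimeter-wave radar simulation parameters."""
    mount_position_xyz_m: tuple = (3.7, 0.0, 0.5)  # front bumper, vehicle frame
    mount_yaw_deg: float = 0.0
    # Dual-mode front radar: separate long-range and mid-range settings.
    long_range: RadarMode = RadarMode(250.0, 18.0, 0.4, 1.0)
    mid_range: RadarMode = RadarMode(70.0, 90.0, 0.2, 4.0)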


1.2 Traffic Flow Simulation

Traffic flow simulation first collects real traffic scenes with environmental perception sensors, processes the data, and imports it into the simulation platform, where the scenes are either replayed directly or generalized into additional scenes using data-driven methods. Waymo's traffic flow simulation follows this approach: by reasonably varying certain features of a real scene, new traffic flow scenes can be generated.
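One simple way to generalize a recorded scene is to perturb selected features of the logged trajectories, for example jittering the speed profile and shifting spawn times. The sketch below illustrates that idea under an assumed data format; it is not a description of Waymo's actual pipeline.

import random


def generalize_trajectory(trajectory, speed_scale_range=(0.9, 1.1), time_shift_s=1.0):
    """Create a variant of a recorded trajectory by jittering its speed profile.

    `trajectory` is a list of (t, x, y, v) samples from a real drive log.
    Positions are rescaled about the first sample so they stay consistent
    with the scaled speeds.
    """
    scale = random.uniform(*speed_scale_range)
    shift = random.uniform(-time_shift_s, time_shift_s)
    t0, x0, y0, _ = trajectory[0]
    return [(t + shift, x0 + (x - x0) * scale, y0 + (y - y0) * scale, v * scale)
            for (t, x, y, v) in trajectory]


# Example: ten generalized variants of one logged vehicle track.
log = [(0.0, 0.0, 0.0, 12.0), (1.0, 12.0, 0.0, 12.5), (2.0, 24.5, 0.0, 13.0)]
variants = [generalize_trajectory(log) for _ in range(10)]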


Building a high-confidence traffic flow environment is a prerequisite for carrying out autonomous driving simulation tests. Traffic flow simulation models can be divided into macroscopic, mesoscopic, and microscopic models according to the scale at which traffic is modeled.


Macroscopic models are effective tools for simulating large-scale traffic. Their object of study is a collection of many vehicles treated as a continuous flow, and aggregate behavior is described by collective quantities such as traffic density, traffic flow, and the average speed of the flow. Their limitation is that they apply mainly to highway networks and are not well suited to simulating urban traffic, which involves rich interactions between individual vehicles.
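A classic example of such a collective description is the Greenshields fundamental diagram, which relates average speed linearly to density and gives flow as their product; the sketch below uses illustrative free-flow speed and jam density values.

def greenshields_speed(density_veh_per_km, free_flow_speed_kmh=100.0,
                       jam_density_veh_per_km=120.0):
    """Average speed as a linear function of density (Greenshields model)."""
    density = min(max(density_veh_per_km, 0.0), jam_density_veh_per_km)
    return free_flow_speed_kmh * (1.0 - density / jam_density_veh_per_km)


def flow(density_veh_per_km, **kwargs):
    """Traffic flow q = k * v, in vehicles per hour."""
    return density_veh_per_km * greenshields_speed(density_veh_per_km, **kwargs)


# Flow peaks at half the jam density: 60 veh/km * 50 km/h = 3000 veh/h.
print(flow(60.0))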

[Figure: Discrete roads]

The mesoscopic model sits between the macroscopic and microscopic models; it combines advantages of both and can simulate traffic at different levels of detail.


The microscopic model takes the individual vehicle as its main object of study, simulating the dynamic behavior of each vehicle under the influence of surrounding vehicles and pedestrians. Its purpose is to describe specific vehicle behavior, which makes it suitable for urban traffic simulation, including multi-lane roads and intersections.

[Figure: A situation where a lane change is necessary (picture from the Internet)]

Traffic flow simulation in autonomous driving simulation testing is mainly microscopic, focusing on the behavioral interactions between individual vehicles and driver units. In traditional traffic engineering, microscopic traffic flow simulation relied mainly on analytical models built to describe human driving behavior. As driving shifts from humans to machines, machine learning methods are beginning to play a role: in a simulation system for autonomous driving, microscopic traffic flow simulation mainly serves either to learn human driving behavior by fitting real driving data or to obtain an optimal driving policy through reinforcement learning.
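A widely used analytical car-following model of this kind is the Intelligent Driver Model (IDM), which computes a follower's acceleration from its speed, the gap to the leader, and the approach rate; the sketch below uses commonly cited but illustrative parameter values.

import math


def idm_acceleration(v, gap, dv,
                     v0=30.0,   # desired speed, m/s
                     T=1.5,     # desired time headway, s
                     a=1.5,     # maximum acceleration, m/s^2
                     b=2.0,     # comfortable deceleration, m/s^2
                     s0=2.0,    # minimum gap, m
                     delta=4):
    """Intelligent Driver Model: acceleration of the following vehicle.

    v   -- follower speed (m/s)
    gap -- bumper-to-bumper distance to the leader (m)
    dv  -- approach rate, v_follower - v_leader (m/s)
    """
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a * b)))
    return a * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)


# Follower at 25 m/s, 40 m behind a leader that is 2 m/s slower: the model brakes.
print(idm_acceleration(v=25.0, gap=40.0, dv=2.0))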


2. Types and core capabilities of autonomous driving simulation test platforms

2.1 Different Types

Depending on the object being tested, the autonomous driving simulation platform can be divided into: model-in-the-loop (MIL), software-in-the-loop (SIL), hardware-in-the-loop (HIL), driver-in-the-loop (DIL) and vehicle-in-the-loop (VIL).
