Validation of smart car radar and camera models


The European PEGASUS project pointed out that virtual simulation testing is required for Level 3 and above automated driving systems. Virtual sensing is a key part of virtual testing, and the fidelity of the virtual sensor model determines how realistic the simulation can be. Modeling and testing environmental perception sensors in a virtual environment not only reproduces a wide range of road conditions and dangerous driving scenarios within a short time, but also removes time constraints and allows uninterrupted testing, which greatly shortens the product testing cycle at relatively low cost.

 

Sensor simulation models for smart cars generally fall into two categories: physical models and functional models. A physical model reproduces the sensor's specific physical structure and working principles and therefore reflects its physical characteristics, while a functional model reproduces the sensor's input-output behavior at a higher level of abstraction.

 

In addition, testing and verifying smart car controllers requires a complex traffic environment, and the simulated traffic vehicles themselves also need smart car models equipped with sensors. This greatly increases the amount of computation and tightens the requirements on computing efficiency: the calculation for each sensor needs to be completed within microseconds. Most existing sensor models of the two types above are therefore unsuitable for complex traffic scene simulation.

 

In response to this state of research, this paper establishes a high-efficiency sensor function model suitable for concurrent real-time simulation of multiple intelligent vehicles in a virtual environment. The model can reproduce certain physical phenomena, and its functionality, performance, and computational efficiency under concurrent simulation conditions are verified.

 

1. Establishment of sensor function model

1.1 Model framework

To establish the sensor function model in the virtual environment, this paper builds the sensor framework shown in Figure 1. The model inputs consist of four parts: vehicle state, simulation scene, sensor parameters, and environmental parameters. The functional modules of the model are object extraction, occlusion simulation, output simulation, and error simulation. Object extraction quickly extracts the perceived objects from the simulation scene according to the sensor's position, perception range, and perceived object type.

[Figure 1: Framework of the sensor function model]

Occlusion simulation determines the finally visible objects by geometric computation, based on each object's geometric outline and its position relative to the sensor. Output simulation computes the ideal output data for each visible object according to the characteristics of the sensor and its output format, yielding ideal output objects. Error simulation characterizes the sensor's error behavior by adding Gaussian white noise to the ideal output data, forming the final perception output.
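These four modules form a fixed pipeline that is executed once per simulation step. The following minimal Python sketch (all class and function names are illustrative assumptions, not taken from the paper) shows how the inputs and modules fit together; the pass-through stubs are filled in conceptually by the later subsections:

```python
from dataclasses import dataclass

@dataclass
class SensorParams:
    """Sensor parameters: mounting pose in the body frame, range, field of view, noise level."""
    mount_x: float = 2.0      # m, ahead of the vehicle's center of mass (assumed value)
    mount_y: float = 0.0      # m
    max_range: float = 80.0   # m
    fov: float = 1.57         # rad
    noise_std: float = 0.1    # generic noise level

class SensorFunctionModel:
    """Illustrative pipeline: object extraction -> occlusion -> output -> error simulation."""

    def __init__(self, params: SensorParams):
        self.params = params

    def step(self, ego_state, scene, environment):
        candidates = self.extract_objects(ego_state, scene)       # 1.2 object extraction
        visible = self.simulate_occlusion(ego_state, candidates)  # 1.3 occlusion simulation
        ideal = self.simulate_output(ego_state, visible)          # 1.4 output simulation
        return self.add_noise(ideal)                              # 1.5 error simulation

    # Pass-through stubs; the later subsections sketch what each step actually does.
    def extract_objects(self, ego_state, scene):
        return list(scene)

    def simulate_occlusion(self, ego_state, objects):
        return objects

    def simulate_output(self, ego_state, objects):
        return objects

    def add_noise(self, objects):
        return objects
```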

 

1.2 Object Extraction

To meet the needs of many vehicle sensors in large-scale simulation scenarios, fast object extraction is fundamental. Its working principle is shown in Figure 2. The simulation space is divided into contiguous square grid cells on the two-dimensional plane of the inertial frame, and at every moment each object geometrically belongs to one or more of these cells. For every object that needs to provide information to other modules, an intercepted object is created, and its handle is stored, according to its type, in the object linked list of the corresponding object layer of each grid cell it belongs to.

 

For each module that needs to obtain information about other objects, an interception model is created. Each interception model first performs a coarse interception with a simple geometric outline based on the sensor range, such as the circular range shown in Figure 2. Through the grid cells covered by this interception range, the interception model can access the intercepted objects stored in the different object layers of those cells, thereby rapidly retrieving candidates. Finally, according to the sensor's geometric detection range, the sensor's target objects are further extracted from the intercepted objects by geometric clipping.
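As a rough illustration of this two-stage extraction, the sketch below implements a uniform grid index in Python. The cell size, layer names, and API are assumptions made for illustration, and the final geometric clipping against the exact detection range is only indicated by a comment:

```python
import math
from collections import defaultdict

CELL = 10.0  # grid cell size in meters (assumed value)

class GridIndex:
    """Uniform grid: object handles are stored, per type layer, in every cell they touch."""

    def __init__(self, cell=CELL):
        self.cell = cell
        # cells[(ix, iy)][layer] -> list of object handles
        self.cells = defaultdict(lambda: defaultdict(list))

    def insert(self, obj_id, layer, xmin, ymin, xmax, ymax):
        """Register an object's axis-aligned footprint into every cell it overlaps."""
        for ix in range(math.floor(xmin / self.cell), math.floor(xmax / self.cell) + 1):
            for iy in range(math.floor(ymin / self.cell), math.floor(ymax / self.cell) + 1):
                self.cells[(ix, iy)][layer].append(obj_id)

    def query_circle(self, x, y, radius, layers):
        """Coarse interception: all handles stored in cells covered by the circular sensor range."""
        found = set()
        ix0, ix1 = math.floor((x - radius) / self.cell), math.floor((x + radius) / self.cell)
        iy0, iy1 = math.floor((y - radius) / self.cell), math.floor((y + radius) / self.cell)
        for ix in range(ix0, ix1 + 1):
            for iy in range(iy0, iy1 + 1):
                cell = self.cells.get((ix, iy))
                if cell is None:
                    continue
                for layer in layers:
                    found.update(cell.get(layer, []))
        # Candidates would then be clipped against the sensor's exact detection geometry.
        return found

# Example: index one vehicle footprint and query around a sensor at the origin.
grid = GridIndex()
grid.insert(obj_id=1, layer="vehicle", xmin=20.0, ymin=-1.0, xmax=24.5, ymax=1.0)
print(grid.query_circle(x=0.0, y=0.0, radius=80.0, layers=["vehicle"]))
```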

[Figure 2: Principle of fast object extraction]

1.3 Occlusion simulation

The shadow volume algorithm is a classic computer graphics algorithm for computing the shadows cast by a point light source. Based on it, this paper proposes a fast algorithm for judging the occlusion relationship between objects in a two-dimensional plane; its principle is shown in Figure 3. The host vehicle body coordinate system V and the sensor coordinate system are defined such that the X axis coincides with the sensing direction and the Y axis points to the left; the origin of the body coordinate system is at the vehicle's center of mass, and the origin of the sensor coordinate system is at the sensor installation point.

[Figure 3: Principle of occlusion determination]

The geometric contour model of an object to be processed is defined as a planar oriented bounding box, such as the rectangle A1B1C1D1 representing object 1 in Figure 3(a); the influence of the object's different surface materials on the perception result is ignored.

 

The visible triangle of an object to be processed is defined as the triangle whose vertices are the sensor coordinate origin S and the two of the object's four vertices that subtend the largest angle at S in the sensor coordinate system. As shown in Figure 3(a), the visible triangle of object 2 is ΔA2SB2. The angle between the side of the visible triangle closest to the X axis and the X axis is the minimum visible angle, and the angle between the side farthest from the X axis and the X axis is the maximum visible angle.
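Under the assumption that the object lies entirely in front of the sensor (so its bearing angles do not wrap around ±π), the visible triangle and the two visible angles can be computed directly from the bearings of the bounding-box corners. The Python sketch below is an illustrative reconstruction, not code from the paper:

```python
import math

def visible_angles(corners_sensor_frame):
    """Visible triangle of an object in the sensor frame (X = sensing direction, Y = left).

    corners_sensor_frame: the four bounding-box corners [(x, y), ...] in the sensor frame.
    Returns (phi_min, phi_max, corner_min, corner_max): the minimum and maximum visible
    angles and the corresponding corners, which together with the origin S form the
    visible triangle. Assumes the object is in front of the sensor (no wrap at +/- pi).
    """
    bearings = [(math.atan2(y, x), (x, y)) for x, y in corners_sensor_frame]
    phi_min, corner_min = min(bearings, key=lambda b: b[0])
    phi_max, corner_max = max(bearings, key=lambda b: b[0])
    return phi_min, phi_max, corner_min, corner_max

# Example: a 4.5 m x 2 m vehicle about 20 m ahead and slightly to the left.
corners = [(20.0, 1.0), (24.5, 1.0), (24.5, 3.0), (20.0, 3.0)]
print(visible_angles(corners))
```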


Determining whether an object to be processed is occluded: as shown in Figure 3(a), the visible triangle of object 3 intersects the outline of object 1, so object 3 is occluded, while object 2 is not. For an occluded object, whether part of its outline is still visible to the sensor can then be determined from the maximum and minimum visible angles of the two objects. The algorithm traverses each object in the list of objects to be processed in turn and checks whether the geometric outline of the currently traversed object intersects the visible triangle of the input object.

 

If they intersect, the currently traversed object occludes the input object. If the sensor can perceive objects whose outlines are only partially visible, the visible angles need to be compared further.

 

As shown in Figure 3(b), the contour of object 1 intersects the visible triangle of object 2. However, because φ2max > φ1max > φ2min > φ1min, part of the contour of object 2 (the bold part in Figure 3(b)) is still visible to the sensor, and the lidar and millimeter-wave radar can identify this part of the contour as an object.
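The angle comparison for a single nearer occluder can be sketched as an interval operation: given the visible-angle intervals of the target and of an occluder that has already been found to intersect the target's visible triangle (and to lie closer to the sensor), the part of the target that remains visible is the portion of its interval not covered by the occluder. The helper below is an illustrative Python sketch of that one step:

```python
def partially_visible_interval(target, occluder):
    """Angle interval of the target that stays visible behind a single nearer occluder.

    target, occluder: (phi_min, phi_max) visible-angle intervals in the sensor frame.
    Returns a list of (phi_min, phi_max) sub-intervals of the target not covered by
    the occluder (empty list = fully occluded). The full algorithm would repeat this
    over all nearer objects found by the visible-triangle intersection test.
    """
    t_min, t_max = target
    o_min, o_max = occluder
    if o_max <= t_min or o_min >= t_max:
        return [(t_min, t_max)]          # no angular overlap: fully visible
    visible = []
    if o_min > t_min:
        visible.append((t_min, o_min))   # visible part on the low-angle side
    if o_max < t_max:
        visible.append((o_max, t_max))   # visible part on the high-angle side
    return visible

# Figure 3(b) case: phi2_max > phi1_max > phi2_min > phi1_min
print(partially_visible_interval(target=(0.05, 0.30), occluder=(0.00, 0.20)))
# -> [(0.20, 0.30)]: the high-angle part of object 2 remains visible
```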

 

Determining the actual contour of an object perceived by the lidar: in the lidar modeling calculation, the line segments representing each laser beam can be pre-computed in the lidar coordinate system from the scanning angle range and horizontal angular resolution, and stored in order of increasing scanning angle, as shown in Figure 4.

[Figure 4: Pre-computed laser beam line segments stored by scanning angle]

When the maximum and minimum visible angles of the object, φmax and φmin, are known, the indices of the laser beam segments with the largest and smallest angles that still intersect the visible contour of the object can be obtained directly from the minimum scanning angle and the angular resolution. Intersecting these two laser beams with the visible contour then quickly yields the contour range of the object that is actually scanned by the lidar.
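Assuming the beams are stored with a uniform angular resolution starting at the minimum scanning angle (beam k at angle φ_start + k·Δφ), the index range can be reconstructed as in the sketch below; the exact form of the relationship in the paper may differ:

```python
import math

def beam_index_range(phi_min, phi_max, scan_start, scan_end, resolution):
    """Index range of the pre-computed beams that can hit the visible contour.

    Beams are assumed to be stored in order of increasing scanning angle, beam k
    having angle scan_start + k * resolution. Returns (k_lo, k_hi), or None if no
    beam falls inside [phi_min, phi_max].
    """
    n_last = round((scan_end - scan_start) / resolution)           # index of the last beam
    k_lo = max(0, math.ceil((phi_min - scan_start) / resolution))  # first beam >= phi_min
    k_hi = min(n_last, math.floor((phi_max - scan_start) / resolution))  # last beam <= phi_max
    return (k_lo, k_hi) if k_lo <= k_hi else None

# Example: 0.2 deg resolution over +/- 60 deg, object visible between 3.1 deg and 7.8 deg.
deg = math.pi / 180.0
print(beam_index_range(3.1 * deg, 7.8 * deg, -60.0 * deg, 60.0 * deg, 0.2 * deg))
```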


1.4 Output simulation

Output simulation quickly converts the object information extracted in the geodetic coordinate system into the sensor coordinate system and the vehicle coordinate system, and generates output in the format actually produced by the sensor.

 

For the lidar model, the object's centroid position is calculated from the perceived contour. Then, using kinematic relationships and coordinate transformations based on the object's heading angle, speed, and other information, the host vehicle's position and motion state, and the lidar's mounting position relative to the host vehicle, the centroid position, speed, and yaw angle of the object in the vehicle coordinate system are obtained.
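A minimal sketch of that conversion is given below, assuming the centroid is approximated by the mean of the visible contour points and that only the planar position and yaw are transformed; the function names are illustrative:

```python
import math

def contour_centroid(points):
    """Approximate the perceived object's centroid as the mean of its visible contour points."""
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def world_to_vehicle(px, py, yaw_obj, ego_x, ego_y, ego_yaw):
    """Transform a position and heading from the geodetic frame into the host-vehicle
    body frame (origin at the center of mass, X forward, Y to the left)."""
    dx, dy = px - ego_x, py - ego_y
    c, s = math.cos(ego_yaw), math.sin(ego_yaw)
    return c * dx + s * dy, -s * dx + c * dy, yaw_obj - ego_yaw

# Example: centroid of a perceived contour, then expressed relative to the host vehicle.
cx, cy = contour_centroid([(104.8, 11.6), (105.3, 12.4), (104.6, 12.2)])
print(world_to_vehicle(cx, cy, 0.10, ego_x=100.0, ego_y=10.0, ego_yaw=0.05))
```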

 

For the millimeter-wave radar model, the closest point of the object relative to the radar is calculated as the reflection point based on the visible contour, and the distance, angle, and radial relative velocity are converted from the geodetic coordinate system to the radar coordinate system using kinematic relationships and coordinate transformation.
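A sketch of that computation, assuming the visible contour points and the object's relative velocity are already expressed in the radar coordinate frame (the names are illustrative):

```python
import math

def radar_reflection_point(contour_radar_frame, rel_vx, rel_vy):
    """Pick the closest visible contour point as the reflection point and return
    (range, bearing, radial velocity) in the radar frame.

    contour_radar_frame: visible contour points [(x, y), ...] in the radar frame.
    rel_vx, rel_vy: object velocity relative to the radar, in the radar frame.
    """
    x, y = min(contour_radar_frame, key=lambda p: math.hypot(p[0], p[1]))
    rng = math.hypot(x, y)
    bearing = math.atan2(y, x)
    v_radial = (rel_vx * x + rel_vy * y) / rng   # projection onto the line of sight
    return rng, bearing, v_radial

# Example: two visible contour points, object closing in at 5 m/s along the radar X axis.
print(radar_reflection_point([(19.8, 0.5), (20.2, 1.5)], rel_vx=-5.0, rel_vy=0.0))
```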

 

For the camera model, the position, width and other information of typical traffic participants such as motor vehicles and pedestrians are transformed into the vehicle coordinate system for output. For lane markings and road edges, several discrete points with equal intervals within the sensing range are directly selected, and the fitting parameters are calculated and output according to the pre-set polynomial fitting formula.
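For example, assuming the commonly used cubic lane model y = c0 + c1·x + c2·x² + c3·x³ as the pre-set fitting formula (the paper does not state the exact polynomial), the fit can be performed by least squares; the sketch below uses NumPy's polyfit for illustration:

```python
import numpy as np

def fit_lane_line(points_vehicle_frame, degree=3):
    """Fit lane-marking sample points (in the vehicle frame) with a polynomial
    y = c0 + c1*x + ... + cn*x^n and return the coefficients c0..cn.

    points_vehicle_frame: equally spaced samples [(x, y), ...] along the marking.
    """
    xs = np.array([p[0] for p in points_vehicle_frame])
    ys = np.array([p[1] for p in points_vehicle_frame])
    # np.polyfit returns the highest-order coefficient first; reverse to c0..cn.
    return np.polyfit(xs, ys, degree)[::-1]

# Example: a gently curving left lane marking sampled every 5 m out to 50 m.
xs = np.arange(0.0, 50.0, 5.0)
ys = 1.8 + 0.002 * xs ** 2
print(fit_lane_line(list(zip(xs, ys))))
```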

 

1.5 Noise simulation

At present, most sensor function models simulate errors by adding noise to the ideal output results. Although the noise in many sensors' perception results is colored noise within a certain frequency range, and the perceived state variables of the same object may be correlated, most of the literature simplifies this to independent Gaussian white noise, which meets simulation needs in most cases. This paper therefore uses Gaussian white noise to simulate sensor noise, with the noise level set according to the real sensor.
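A minimal sketch of this error simulation, with per-channel standard deviations taken from the real sensor's specification (the channel names below are illustrative):

```python
import random

def add_gaussian_noise(ideal_output, noise_std):
    """Add independent Gaussian white noise to each output channel of an ideal object.

    ideal_output: dict of channel name -> ideal value, e.g. {"range": 20.3, "bearing": 0.05}.
    noise_std: dict of channel name -> standard deviation set from the real sensor's spec.
    """
    return {k: v + random.gauss(0.0, noise_std.get(k, 0.0)) for k, v in ideal_output.items()}

# Example: add noise to an ideal millimeter-wave radar output.
print(add_gaussian_noise({"range": 20.3, "bearing": 0.05, "v_radial": -4.8},
                         {"range": 0.10, "bearing": 0.002, "v_radial": 0.05}))
```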
