The European PEGASUS project pointed out that virtual simulation testing is required for Level 3 and above automated driving systems. Virtual sensing is a key part of virtual testing, and the fidelity of the virtual sensor model determines how realistic the simulation can be. Simulating environment-perception sensors in a virtual environment not only reproduces varied road conditions and dangerous driving scenarios in a short time, but also removes time constraints by allowing uninterrupted testing, greatly shortening the product test cycle at relatively low cost.
Sensor simulation models for intelligent vehicles generally fall into two categories: physical models and functional models. A physical model simulates the sensor's specific physical structure and operating principles, reflecting the sensor's physical characteristics; a functional model instead reproduces the sensor's input–output behavior.
In addition, testing and verifying intelligent-vehicle controllers requires a complex traffic environment, and the simulated traffic vehicles themselves need intelligent-vehicle models equipped with sensors. This greatly increases the computational load and tightens the efficiency requirements: each sensor's computation must complete within microseconds. Most sensor models of the two types above are therefore unsuitable for complex traffic-scene simulation.
In response to this state of research, this paper establishes a high-efficiency sensor functional model suitable for concurrent real-time simulation of multiple intelligent vehicles in a virtual environment. The model reproduces certain physical phenomena, and its function, performance, and computational efficiency under concurrent simulation are verified.
1. Establishment of sensor function model
1.1 Model framework
To establish the sensor functional model in the virtual environment, this paper builds the framework shown in Figure 1. The model's input consists of four parts: vehicle state, simulation scene, sensor parameters, and environment parameters. Its functional modules are object extraction, occlusion simulation, output simulation, and error simulation. Object extraction quickly extracts perceivable objects from the simulation scene according to the sensor's position, perception range, and perceived object types.
Occlusion simulation determines the finally visible objects from the geometric outlines of the objects and their positions relative to the sensor, using planar geometry. Output simulation computes the ideal output data for each visible object according to the sensor's own characteristics and output format, yielding ideal output objects. Error simulation characterizes the sensor's error behavior by adding Gaussian white noise to the ideal output data, forming the final perception output.
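The four modules can be viewed as a simple processing pipeline. The sketch below is illustrative only (the class and stage names are not from the paper); each stage is a pluggable callable so different sensor types can share the framework:

```python
from dataclasses import dataclass

@dataclass
class SensorModel:
    """Minimal pipeline sketch: extraction -> occlusion -> output -> error.

    Each stage is a callable taking and returning a list of objects;
    the concrete stage implementations are sensor-specific.
    """
    extract: callable        # object extraction from the scene
    occlude: callable        # occlusion simulation
    format_output: callable  # ideal output computation
    add_error: callable      # error (noise) simulation

    def step(self, scene):
        objs = self.extract(scene)
        objs = self.occlude(objs)
        objs = self.format_output(objs)
        return self.add_error(objs)

# Toy usage: "objects" are plain numbers, extraction keeps those in range.
model = SensorModel(
    extract=lambda scene: [o for o in scene if o < 10],
    occlude=lambda objs: objs,
    format_output=lambda objs: [float(o) for o in objs],
    add_error=lambda objs: objs,
)
```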
1.2 Object Extraction
To serve many vehicle sensors in large-scale simulation scenarios, fast object extraction is fundamental. Its working principle is shown in Figure 2. The simulation space is divided into contiguous square grid cells on the two-dimensional plane of the inertial frame; at every moment, each object geometrically belongs to one or more cells. For each object that must provide information to others, an intercepted object is created; according to its type, the intercepted object's handle is stored in the object linked lists of the corresponding object layers of the grid cell(s) it occupies.
For each module that needs information about other objects, an interception model is created. Each interception model first performs a coarse interception with a simple geometric outline matching the sensor range, such as the circular range shown in Figure 2. Through the grid cells covered by this range, the interception model accesses the intercepted objects stored in the cells' object layers, achieving rapid interception. Finally, according to the sensor's geometric detection range, sensor target objects are further extracted from the intercepted objects by geometric clipping.
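A minimal sketch of the grid-interception idea, under assumed simplifications (point objects occupying a single cell, a single object layer, illustrative names):

```python
import math
from collections import defaultdict

class GridIndex:
    """Uniform square grid over the 2-D inertial plane.

    Objects register into the cell they occupy; a sensor query first
    collects candidates from all cells overlapping the range circle's
    bounding box, after which a finer geometric clip can be applied.
    """
    def __init__(self, cell=10.0):
        self.cell = cell
        self.cells = defaultdict(list)  # (ix, iy) -> list of object handles

    def _key(self, x, y):
        return (int(math.floor(x / self.cell)), int(math.floor(y / self.cell)))

    def insert(self, obj, x, y):
        self.cells[self._key(x, y)].append(obj)

    def query_circle(self, cx, cy, r):
        """Coarse interception: objects in cells covered by the circle's bounding box."""
        x0, y0 = self._key(cx - r, cy - r)
        x1, y1 = self._key(cx + r, cy + r)
        hits = []
        for ix in range(x0, x1 + 1):
            for iy in range(y0, y1 + 1):
                hits.extend(self.cells.get((ix, iy), []))
        return hits
```

The coarse query deliberately over-approximates (bounding box rather than exact circle); the final geometric clipping stage removes the remaining false candidates.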
1.3 Occlusion simulation
The shadow volume algorithm is a classic computer-graphics algorithm for computing the shadows cast by point light sources. Based on it, this paper proposes a fast algorithm for judging the occlusion relationships between objects in a two-dimensional plane; its principle is shown in Figure 3. Define the host vehicle body coordinate system and the sensor coordinate system: the sensor frame's X axis coincides with the sensing direction and its Y axis points to the left; the origin of the body frame lies at the vehicle's center of mass, and the origin of the sensor frame at the sensor mounting point.
The geometric contour of each object to be processed is modeled as a planar oriented bounding box, such as the rectangle A1B1C1D1 representing object 1 in Figure 3(a); the influence of differing surface materials on the perception result is ignored.
The visible triangle of an object to be processed is the triangle formed by the sensor coordinate origin S and the two of the object's four vertices that subtend the largest angle at S. As shown in Figure 3(a), the visible triangle of object 2 is ΔA2SB2. The angle between the X axis and the triangle side closest to it is the minimum visible angle, and the angle between the X axis and the side farthest from it is the maximum visible angle.
Occlusion judgment proceeds as follows. The algorithm traverses each object in the to-be-processed list in turn and tests whether the traversed object's geometric outline intersects the input object's visible triangle; an intersection means the traversed object occludes the input object. As shown in Figure 3(a), object 1's outline intersects object 3's visible triangle, so object 3 is occluded, while object 2 is not. For an occluded object, the maximum and minimum visible angles of the two objects further determine whether part of its outline remains visible to the sensor; if the sensor can perceive objects with only a partial outline visible, this angle comparison is required.
As shown in Figure 3(b), the contour of object 1 intersects the visible triangle of object 2. However, because φ2max > φ1max > φ2min > φ1min, part of object 2's contour (the bold part in Figure 3(b)) remains visible to the sensor, and lidar and millimeter-wave radar can identify this partial contour as an object.
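The visible-angle comparison can be sketched as follows. This assumes the occluder lies nearer to the sensor than the target, convex outlines, and no angle wrap-around at ±π; all names are illustrative:

```python
import math

def visible_angles(corners):
    """Min/max bearing (rad) of an outline's vertices as seen from the
    sensor at the origin (no wrap-around handling at +/- pi)."""
    angles = [math.atan2(y, x) for x, y in corners]
    return min(angles), max(angles)

def visible_part(target, occluder):
    """Angular sub-intervals of `target` not covered by a nearer `occluder`.

    Returns a list of (lo, hi) bearing intervals still visible to the
    sensor; an empty list means the target is fully occluded.
    """
    t_lo, t_hi = visible_angles(target)
    o_lo, o_hi = visible_angles(occluder)
    out = []
    if t_lo < o_lo:                          # target sticks out on the low side
        out.append((t_lo, min(t_hi, o_lo)))
    if t_hi > o_hi:                          # target sticks out on the high side
        out.append((max(t_lo, o_hi), t_hi))
    return out

# Figure 3(b)-style case: the far target's upper edge extends past the occluder.
target = [(10.0, 0.0), (10.0, 6.0)]
occluder = [(5.0, -1.0), (5.0, 2.0)]
```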
Determination of the actual contour perceived by the lidar: in the lidar modeling calculation, the line segments representing the laser beams can be pre-computed in the lidar coordinate system from the lidar's scanning angle range and horizontal angular resolution, and stored in order of increasing scan angle, as shown in Figure 4.
Given the object's maximum and minimum visible angles φmax and φmin, the indices of the beams with the largest and smallest angles that still intersect the object's visible contour follow directly from the beam storage order and angular resolution. Intersecting these two beams with the visible contour then quickly yields the contour range actually scanned by the lidar.
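Assuming the beams are stored in ascending angle starting at `phi_start` with a fixed angular step, the index lookup might look like this (the ceil/floor formula is a plausible reconstruction, not quoted from the paper):

```python
import math

def beam_index_range(phi_min, phi_max, phi_start, resolution):
    """Indices of pre-computed beams (stored in ascending scan angle)
    whose angles fall inside the object's visible interval.

    phi_start is the angle of beam 0 and `resolution` the horizontal
    angular step, all in radians.
    """
    i_min = math.ceil((phi_min - phi_start) / resolution)   # first beam at or above phi_min
    i_max = math.floor((phi_max - phi_start) / resolution)  # last beam at or below phi_max
    return i_min, i_max
```

Only the beams in `[i_min, i_max]` need to be intersected with the object's visible contour, which avoids testing the full scan fan against every object.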
1.4 Output simulation
Output simulation quickly converts the object information extracted in the geodetic coordinate system into the sensor and vehicle coordinate systems, and generates output matching the real sensor's output format.
For the lidar model, the centroid position is computed from the perceived object's contour. Then, from the object's heading, speed, and other state, the host vehicle's position and motion state, and the lidar's mounting position on the host vehicle, kinematic relationships and coordinate transformations yield the object's centroid position, velocity, and yaw angle in the vehicle coordinate system.
For the millimeter-wave radar model, the point of the visible contour closest to the radar is taken as the reflection point, and its distance, angle, and radial relative velocity are converted from the geodetic coordinate system into the radar coordinate system through kinematic relationships and coordinate transformation.
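A simplified sketch of the radar-output computation for a single point reflection target and a stationary host (a moving host would additionally subtract its own velocity); the frame convention is X forward, Y left, and all function names are illustrative:

```python
import math

def world_to_sensor(px, py, sx, sy, yaw):
    """Transform a world-frame point into a sensor frame located at
    (sx, sy) with heading `yaw` (rad), X forward, Y left."""
    dx, dy = px - sx, py - sy
    c, s = math.cos(yaw), math.sin(yaw)
    return c * dx + s * dy, -s * dx + c * dy

def radar_measurement(px, py, vx, vy, sx, sy, yaw):
    """Range, bearing, and radial relative velocity of a point target
    for a stationary radar (host motion ignored in this sketch)."""
    x, y = world_to_sensor(px, py, sx, sy, yaw)
    rng = math.hypot(x, y)
    bearing = math.atan2(y, x)
    # Rotate the target velocity into the sensor frame and project it
    # onto the line of sight to get the radial component.
    vxs = math.cos(yaw) * vx + math.sin(yaw) * vy
    vys = -math.sin(yaw) * vx + math.cos(yaw) * vy
    v_rad = (vxs * x + vys * y) / rng
    return rng, bearing, v_rad
```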
For the camera model, the position, width, and other attributes of typical traffic participants such as motor vehicles and pedestrians are transformed into the vehicle coordinate system for output. For lane markings and road edges, several equally spaced discrete points within the sensing range are sampled, and fitting parameters are computed and output according to a preset polynomial fitting formula.
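For the lane-marking output, a straight-line least-squares fit illustrates the idea; production camera models typically fit a cubic, but the normal-equation principle is the same (function name and point format are illustrative):

```python
def fit_lane_line(points):
    """Least-squares straight-line fit y = c0 + c1*x to sampled
    lane-marking points (x forward, y lateral in the vehicle frame).

    Solves the 2x2 normal equations in closed form; a cubic fit would
    solve the analogous 4x4 system.
    """
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    denom = n * sxx - sx * sx
    c1 = (n * sxy - sx * sy) / denom   # slope
    c0 = (sy - c1 * sx) / n            # lateral offset at x = 0
    return c0, c1
```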
1.5 Noise simulation
At present, most sensor functional models simulate error by adding noise to the ideal output. Although the noise in many real sensor outputs is colored noise within a certain frequency band, and the perceived state variables of the same object may be correlated, most of the literature simplifies it to independent Gaussian white noise, which meets simulation needs in most cases. This paper therefore uses Gaussian white noise to simulate sensor noise, with the noise level set according to the actual sensor.
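A minimal sketch of this error-simulation step, adding independent zero-mean Gaussian noise per output field (the dict interface and field names are illustrative; the standard deviations would come from the real sensor's datasheet or test data):

```python
import random

def add_sensor_noise(measurement, sigmas, rng=None):
    """Add independent zero-mean Gaussian white noise to each field of
    an ideal measurement.

    `measurement` maps field name -> ideal value; `sigmas` maps field
    name -> standard deviation (fields without an entry stay exact).
    """
    rng = rng or random.Random()
    return {k: v + rng.gauss(0.0, sigmas.get(k, 0.0))
            for k, v in measurement.items()}

# Example: perturb range with 0.5 m noise, leave bearing exact.
ideal = {"range": 10.0, "bearing": 0.1}
noisy = add_sensor_noise(ideal, {"range": 0.5}, rng=random.Random(42))
```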