A long-considered trade-off
How do you quickly judge the imaging performance of a blind-spot lidar?
The industry usually looks at a few indicators: field of view, detection range, and angular resolution.
In layman's terms, an excellent blind-spot lidar must see both widely and clearly.
In the previous issue, we discussed how a blind-spot lidar can see widely enough (Five Questions on Blind-Spot Lidar (1) | How large a field of view does a blind-spot lidar need?). In this issue, we turn to the other key question: how it can "see clearly enough", that is, how to define its angular resolution.
Blind-spot lidar angular resolution: a long-considered trade-off
What is lidar angular resolution
If a lidar is compared to the human eye, the field of view corresponds to the breadth of human vision, while resolution is one of the most important factors determining the eye's ability to recognize image detail.
In lidar, resolution corresponds to angular resolution: the angular interval between two adjacent laser scanning points, usually expressed in degrees (°). Because lidars today use many different scanning methods, and each method distributes its scan points differently, the points are never perfectly uniform; the angular resolution discussed here is therefore an equivalent average.
Intuitively, the finer the angular resolution, the more laser points fall within a unit solid angle and the stronger the ability to resolve objects. At a given angular resolution, the farther away an object is, the fewer laser points it returns, as shown in the figure.
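This scaling can be sketched numerically: the number of scan points across a target is roughly the angle it subtends divided by the angular resolution. A minimal Python estimate (the 1.8 m car width and the distances are illustrative assumptions, not figures from this article):

```python
import math

def points_on_target(width_m: float, distance_m: float, res_deg: float) -> float:
    """Approximate number of scan points across a target of a given width:
    the angle the target subtends, divided by the angular resolution."""
    subtended_deg = math.degrees(2 * math.atan(width_m / (2 * distance_m)))
    return subtended_deg / res_deg

# A 1.8 m-wide car seen with 0.5 deg resolution: doubling the distance
# roughly halves the number of points across it.
print(points_on_target(1.8, 10, 0.5))   # ~20.6 points at 10 m
print(points_on_target(1.8, 20, 0.5))   # ~10.3 points at 20 m
```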
For blind-spot lidar, the angular resolution cannot be too coarse
Defining a lidar's angular resolution takes real care.
Compared with long-range forward lidar, defining the resolution of a short-range blind-spot lidar is more complex and involves more factors.
First, the sensing range that the blind-spot lidar needs to cover must be determined.
At this stage, blind-spot lidars typically specify a detection range of 25 to 30 meters (at 10% Lambertian reflectivity, i.e., assuming a surface that scatters the incident energy uniformly in all directions).
When the required sensing distance reaches 20 m (※), a rough estimate shows that the average angular resolution must reach 0.5° to output high-confidence perception results for typical targets.
※ As the table above shows, for typical targets (cars, pedestrians, and cyclists), even accounting for the particularities of MEMS scanning, there are enough points at 20 meters to let the algorithm accurately output high-confidence, target-level perception results.
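The rough estimate above can be reproduced in a few lines by counting the horizontal and vertical scan points a flat target would intercept. A sketch assuming illustrative pedestrian dimensions of roughly 0.5 m × 1.7 m (my assumption, not figures from the table):

```python
import math

def grid_points(width_m: float, height_m: float, distance_m: float, res_deg: float) -> int:
    """Rough count of laser points on a flat rectangular target:
    horizontal points times vertical points, each obtained from the
    subtended angle divided by the angular resolution."""
    h = math.degrees(2 * math.atan(width_m / (2 * distance_m))) / res_deg
    v = math.degrees(2 * math.atan(height_m / (2 * distance_m))) / res_deg
    return int(h) * int(v)

# A ~0.5 m x 1.7 m pedestrian at 20 m with 0.5 deg resolution
# still collects a double-digit number of points.
print(grid_points(0.5, 1.7, 20, 0.5))
```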
Second, the application scenarios of blind-spot lidar demand not only close-range object detection but also a certain level of detail in the contours of the measured object.
In automated parking, for example, the blind-spot lidar must detect all kinds of unstructured, irregularly shaped objects, supplementing the limited recognition ability of today's cameras and ultrasonic sensors to achieve safe, fast parking.
So can we simply assume that the finer the angular resolution, the better? Would setting it to 0.1° or even less yield perfect results?
The answer is no. Finer is not always better: an excessively fine angular resolution has negative effects on the lidar's practical application.
Consider shooting with a phone camera that claims 100 megapixels: the output photo is usually only a few megapixels, ten or so at most. Although 100 megapixels improves detail, it shrinks the individual pixels and degrades their light sensitivity.
In addition, excessive pixel counts consume storage. In most cases, ten-odd megapixels already looks excellent; very few applications require zooming in several times to inspect fine detail.
The same reasoning applies to lidar, and especially to blind-spot lidar: the angular resolution must not be too fine.
A blind-spot lidar's field of view is very large; its vertical field of view in particular is usually three times that of a long-range lidar. Setting a very fine angular resolution would make the single-frame data volume excessive, straining the domain controller's compute.
Comparison of lidar single frame data volume
At the same time, an autonomous vehicle usually carries more blind-spot lidars than long-range lidars, so an oversized single-frame data volume multiplies the pressure on domain-controller compute.
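The quadratic growth of frame data as resolution shrinks is easy to quantify. A back-of-envelope sketch (the 140° × 70° field of view is an illustrative assumption for a wide-FOV short-range unit, not a product spec):

```python
def points_per_frame(hfov_deg: float, vfov_deg: float, res_deg: float) -> int:
    """Scan points per frame for a uniform grid: horizontal channels
    times vertical channels at the given angular resolution."""
    return round(hfov_deg / res_deg) * round(vfov_deg / res_deg)

wide = points_per_frame(140, 70, 0.44)  # ~50k points per frame
fine = points_per_frame(140, 70, 0.10)  # 980k points per frame
# Shrinking the resolution from 0.44 deg to 0.1 deg multiplies the
# per-frame data volume roughly 19x, before multiplying by the number
# of lidars on the vehicle.
print(wide, fine, fine / wide)
```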
Some may argue that lidar data volume is dwarfed by that of today's vehicle cameras and is trivial for ever-larger compute platforms.
Indeed, lidar occupies little of the total compute of the embedded neural network processor (NPU). But on today's mainstream compute chips the central processing unit (CPU) is the bottleneck: before lidar data enters the network, the CPU must perform parsing, noise filtering, and other preprocessing, to say nothing of handling several lidars concurrently. The sheer volume of lidar data also strains the automotive Ethernet.
Where is the balance?
On the one hand, an angular resolution that is too coarse cannot meet the requirements for detecting object detail; on the other, one that is too fine strains domain-controller compute. So where is the balance?
In fact, within the sensing range of a short-range lidar, a moderately fine angular resolution already guarantees sufficient perception of object detail; there is no need to be overly demanding about angular resolution.
Take Yipath's first short-range lidar, the ML-30s, as an example. Its average angular resolution is 0.44° × 0.44°. Three years of deployment across many typical scenarios have fully demonstrated that this resolution delivers excellent imaging detail in the area near the vehicle body.
The picture above shows ML-30s test results in a typical scene. Drink bottles, small bricks, and distant road pillars are clearly distinguishable; more importantly, roadside slope changes can also be clearly discerned in the point cloud.
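A 0.44° resolution translates into centimeter-scale spacing between adjacent points near the vehicle, which is why small objects remain resolvable there. A quick sketch (the distances chosen are illustrative):

```python
import math

def point_spacing_cm(distance_m: float, res_deg: float) -> float:
    """Linear spacing between adjacent scan points at a given distance:
    approximately distance * tan(angular resolution)."""
    return 100 * distance_m * math.tan(math.radians(res_deg))

# At 0.44 deg, adjacent points land only a few centimeters apart
# within the near-field range, enough to hit bottle-sized objects.
for d in (2, 5, 10):
    print(f"{d} m: {point_spacing_cm(d, 0.44):.1f} cm")
```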
Summary of ML-30s actual detection capabilities
In addition, the small-object resolving power described above depends not only on sufficient angular resolution but also on ranging accuracy. Without high ranging accuracy, the point cloud cannot form a clear outline, and subtle slopes like those above cannot be detected.
More detection scenarios are shown in the figure below:
As shown above, an appropriate angular resolution lets a blind-spot lidar produce sufficiently dense point clouds, greatly enhancing perception of the area near the vehicle body and making it possible to handle a variety of difficult, complex near-body scenarios.
However, an appropriate angular resolution only ensures that local details are seen clearly. To see the whole picture clearly, consistency across the full field of view is a property that cannot be ignored.
Consistency across the entire field of view: undifferentiated perception
While pursuing a large field of view and high resolution, it is easy to overlook a very important dimension of blind-spot lidar: the ability to perceive uniformly across that large field of view.
The key to achieving undifferentiated perception is the consistency of ranging capabilities within the field of view, especially the consistency of ranging in the horizontal field of view.
Because it is hard to capture in a product specification, ranging consistency is easily overlooked. Yet for a blind-spot lidar focused on near-field detection, the core task is reliable, uniform perception of the environment everywhere its field of view covers.
From the application's perspective, one cannot strictly designate which direction needs long range and which needs only short range. If anything, the closer to the edge of the field of view, the higher the demands on the blind-spot lidar's detection capability.
Let's look at two typical scenarios.
When a vehicle turns left, a consistent full-field ranging capability, rather than a split between primary and secondary fields of view, better ensures that oncoming vehicles are perceived uniformly throughout the entire turn, allowing a driving strategy that is both safe and comfortable. Conversely, if ranging in part of the field of view is noticeably shorter, target tracking becomes intermittent or targets appear suddenly, outcomes no intelligent driving system wants to see.
When a vehicle turns right, vulnerable road users move relatively slowly, but because a potential collision threatens greater injury to people, the scenario imposes stricter detection requirements. Here, uniform detection across the field of view ensures that the lidar continuously and stably tracks cyclists and pedestrians, maximizing blind-spot safety.
The necessity of consistent ranging across the entire field of view has been made clear, but there are certain challenges in practical applications. This is because: