4D Imaging Radar: A "Catfish" Entering the Field of L2+ Autonomous Driving (with Technical White Paper Download)
Ultra-high-resolution radar mapping has proven to be an attractive, cost-effective alternative to lidar. The advent of commercially viable 4D imaging radar technology will have a significant impact on the ADAS sensor portfolio deployed in L2+ and higher-level vehicles. Like the proverbial catfish released into the pond, it is poised to stir up the entire technology and market landscape of autonomous driving.
The advent of 4D imaging radar technology for automotive sensing and autonomous driving applications has changed both the timeline and the economics of the industry's evolution from L0 vehicles to fully automated L5 vehicles. Radar now offers new capabilities for accurate environmental mapping, which will significantly enhance a vehicle's overall sensing and perception capabilities.
Compared with camera and lidar sensors, the automotive industry has particularly high expectations for the future role of radar.
Imaging radar closes the gap with lidar on a range of performance and reliability metrics, and even surpasses it on some, while offering a commercial cost structure that lidar is unlikely to match. As these sensor technologies begin to overlap in functionality, their respective roles and costs deserve a detailed assessment.
At the same time, with the automotive industry at the critical juncture of moving from L2 to L3 safety and automation, key questions about the timing and duration of that transition have emerged. L2+ has become the new battleground, and OEMs are working hard to solve the complex design problems that must be addressed to reach L3.
The cost of full L3 autonomous driving remains considerable, mainly because of the system redundancy required when the driver no longer needs to be on standby. L2+ autonomous driving has attracted strong interest and grown rapidly precisely because it keeps the driver on standby, reducing the need for that additional redundancy while still delivering near-L3 functionality.
Complexity Level
SAE defines six levels of driving automation, from L0 to L5, and this framework is very helpful for evaluating the potential impact of 4D imaging radar on ADAS and AD applications, especially given the large gap between L2 and L3. In an L2 car, the driver must pay attention at all times; the driver remains ultimately responsible for the safety of the vehicle and bears responsibility in the event of an accident. Starting from L3, however, the in-vehicle safety automation functions become powerful enough that safety responsibility shifts to the car OEM.
There are also important differences between L3 and L4/L5. At L3, driver intervention is still required in certain situations, whereas at L4/L5 the system no longer relies on the driver to take over, and in some L5 use cases the driver cannot intervene at all. L4 and L5 cars must, at a minimum, be able to bring themselves to a safe stop in all circumstances without human intervention.
These levels of automated driving impose new system redundancy requirements as more driving responsibility shifts to the car. At L3, the driver must be able to take over in challenging traffic conditions while remaining "eyes free" and "hands free" in other situations.
In these scenarios, it can take up to a minute for the driver to gradually resume full control of the vehicle, and the redundancy required to support this handover - safely transferring control from the car back to the driver - significantly increases system complexity and cost.
Therefore, the number and configuration of cameras, radars, and lidar sensors required for each vehicle to achieve L3 performance, as well as the difference from the typical L2 sensor configuration, will have a significant impact on OEM manufacturing costs.
Figure 1: Advanced driver assistance systems and levels of automated driving
This helps explain why the L2+ level has emerged. Its goal is to let OEMs keep cost increases over L2 to a minimum while beginning to offer customers advanced ADAS features that approach L3, without fully crossing the line to L3 and thereby shifting responsibility from the driver to the OEM. L2+ can take full advantage of the sensors and semiconductor components associated with L3, keeping manufacturing costs close to L2 while avoiding the additional cost of the system redundancy that L3 requires to transfer control from the car to the driver. At the same time, on the road to L3, L4, and L5, OEMs are striving for market differentiation, and over the next few years many will launch new safety and comfort features that benefit consumers.
L2+: The next key battlefield
These new safety and comfort features are concentrated at the L2+ level and are priced at a point consumers will accept. The central question is whether consumers are willing to pay a higher price for the additional system redundancy required to meet the L3 standard.
For OEMs, L2+ allows them to avoid the significant costs of addressing L3 redundancy requirements and edge cases, costs that would reduce the competitiveness of the vehicle in the market. L2+ also lets OEMs introduce advanced safety and comfort features gradually, leaving more time for sensor technology to mature and be adopted commercially at higher levels of autonomous driving. At this transition level, the driver continues to provide the necessary redundancy, and OEMs can strike a better balance between comfort features and cost.
As they move closer to L3, OEMs must carefully consider several important questions: If the cost burden of achieving L3 system redundancy is similar to the expected cost burden of L4, why stop at L3? Are customers willing to pay a higher price for L3 safety system redundancy if they still need to stay attentive to driving? While OEMs may not reach a consensus on these issues, it is reasonable to assume that production of L2+ vehicles will far exceed that of L3 vehicles in the next few years.
A recent Yole Développement report indicates that the market penetration of L4/L5 vehicles will remain in the single digits until at least 2030, with some of these vehicles deployed as robotaxis. Meanwhile, as the market penetration of L0-L2 vehicles begins to decline, adoption of L2+ vehicles will continue to grow steadily, and by 2030 L2+ vehicles are likely to approach a 50% market share. L2+ vehicles are therefore expected to be the focus of automotive OEMs over the next decade.
Figure 2: Autonomous vehicle market penetration forecast (2021-2030)
Three sensors, no single solution is perfect
To fully understand the advantages of 4D imaging radar for L2+ vehicles, it is worth taking a high-level look at the three main sensing technologies for ADAS and AD: camera, radar, and lidar. Ultimately, there is no one-size-fits-all solution; each technology has its own strengths and weaknesses, and they complement one another and provide redundancy for the other sensor types.
Cameras and radar sensors are already widely deployed today because the two technologies are mature, complementary, and affordable. LiDAR sensors, for their part, add less in the way of complementary functionality and serve primarily as a redundancy for both.
Camera sensors' ability to capture RGB color information at megapixel resolution makes them indispensable for "reading" traffic signs and similar applications, while also improving the accuracy of object recognition and classification.
However, the efficiency and reliability of camera technology are severely affected by various lighting conditions, as well as adverse weather and road conditions. There are also some new technologies on the market that can automatically remove moisture and dust from automotive camera lenses, but these mechanisms increase material costs and introduce mechanical vulnerabilities that affect system stability.
Cameras' ability to measure distance and speed also remains very limited. Speed and depth can be estimated from stereo camera configurations, but the accuracy degrades quickly with range, and this shortcoming must be compensated by the radar layer.
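To illustrate why stereo depth accuracy is limited, the short sketch below computes the range-dependent depth error of a hypothetical stereo pair. The sensor parameters (1280-pixel-wide imager, 50-degree field of view, 30 cm baseline, quarter-pixel disparity matching error) are illustrative assumptions, not figures from the white paper.

```python
# Illustrative sketch, not from the white paper: stereo depth error growth.
# Assumptions: 1280 px wide imager, 50 deg horizontal FOV, 0.30 m baseline,
# and a +/-0.25 px disparity matching error.
import math

width_px = 1280
hfov_deg = 50.0
baseline_m = 0.30
disp_err_px = 0.25

# Focal length in pixels from image width and horizontal field of view
focal_px = width_px / (2 * math.tan(math.radians(hfov_deg) / 2))

for z_m in (10, 50, 100, 150):
    disparity_px = focal_px * baseline_m / z_m                        # d = f * B / Z
    depth_err_m = (z_m ** 2) * disp_err_px / (focal_px * baseline_m)  # dZ ~ Z^2 * dd / (f * B)
    print(f"range {z_m:>3} m: disparity {disparity_px:5.2f} px, "
          f"depth error about +/-{depth_err_m:5.2f} m")
```

Because the error grows with the square of the range, uncertainty under these assumptions reaches several metres at ranges where radar still measures distance and relative speed directly.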
LiDAR: Providing performance advantages to handle extreme situations
The main differentiating features of LiDAR are its ultra-precise angular resolution, down to 0.1 degrees both horizontally and vertically, and its high-resolution distance measurements, thanks to the extremely short wavelengths and pulses it uses. These advantages make LiDAR very well suited to high-resolution 3D environment mapping, enabling precise detection of free space and boundaries as well as the car's own positioning.
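As a rough, back-of-the-envelope illustration (ours, not the white paper's), the snippet below converts a 0.1-degree angular resolution into the approximate lateral spacing between adjacent measurement points at different ranges.

```python
# Back-of-the-envelope illustration: lateral point spacing implied by a
# 0.1-degree angular resolution (arc length ~ range * angle in radians).
import math

angular_res_deg = 0.1
for range_m in (25, 50, 100, 200):
    spacing_m = range_m * math.radians(angular_res_deg)
    print(f"at {range_m:>3} m: adjacent points roughly {spacing_m * 100:5.1f} cm apart")
```

Under this simple geometry, adjacent points at 100 m are only about 17 cm apart, which is what makes the dense 3D maps described above possible.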
However, lidar shares some disadvantages with camera sensors. Compared with radar, lidar's ability to estimate speed and detect objects at long range is very limited. In addition, lidar is susceptible to adverse weather and road conditions and incurs higher costs to deal with stability and maintenance challenges.
In the past few years, several new types of LiDAR have appeared on the market, such as solid-state, MEMS, and electronically scanned LiDAR. These new technologies aim to make LiDAR more "friendly" for automotive applications in terms of size, cost, and robustness. They represent a big improvement over mechanically rotating LiDAR, but overall they still need time to catch up with the maturity of other ADAS sensors.
The biggest barrier to widespread adoption of lidar in mainstream passenger cars is still cost. According to recent OEM estimates, in 2021 the cost of lidar in small-scale deployments was about ten times that of a 12-TX, 16-RX imaging radar built from four cascaded radar transceivers. Although the cost of both lidar and radar will fall over time, by 2030 lidar is still expected to cost twice as much as radar, even when used at some scale in advanced automation applications.
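For context on why the cascaded radar configuration mentioned above is comparatively economical, the sketch below works out the size of the MIMO virtual array it provides. The per-chip channel counts (3 TX and 4 RX per transceiver) are assumptions chosen to reproduce the 12-TX, 16-RX figure, not specifications quoted from the white paper.

```python
# Illustrative MIMO bookkeeping for a cascaded imaging radar front end.
# Assumption: each of the four cascaded transceivers contributes 3 TX / 4 RX.
n_chips = 4
tx_per_chip, rx_per_chip = 3, 4

tx_total = n_chips * tx_per_chip        # 12 physical transmit channels
rx_total = n_chips * rx_per_chip        # 16 physical receive channels
virtual_channels = tx_total * rx_total  # each TX/RX pair forms one virtual antenna

print(f"{tx_total} TX x {rx_total} RX -> {virtual_channels} virtual channels")
```

Under these assumptions the array offers 192 virtual channels, which is how a sensor built from a few standard transceivers pushes toward the angular resolution needed for environment mapping at a far lower bill-of-materials cost than lidar.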
Looking ahead, LiDAR will continue to offer performance advantages for handling the edge cases that arise in complex driving scenarios, so it will remain an important part of the redundancy required for L4 and L5 autonomous driving, provided the price is acceptable.
Read the full white paper
This article is excerpted from NXP's latest white paper, "4D Imaging Radar: A Sensor Ideal for L2+ Autonomous Vehicles". Building on the above overview of the current status and trends in autonomous driving, the remaining chapters provide a more detailed analysis of the advantages of 4D imaging radar and its use in a series of highly challenging scenarios.
The full Chinese-language version of this white paper contains the following chapters and is available for download:
- Complexity Level
- L2+: The Next Key Battlefield
- Three sensors, no single solution is perfect
- LiDAR: Providing performance advantages to handle extreme situations
- 4D imaging: the next leap in radar
- Challenging use cases
- Conclusion
The author of this article is Huanyu Gu, Senior Manager, ADAS Product Marketing and Business Development, NXP Semiconductors.