Xinwen Xinshi | Five Major Challenges Facing Autonomous Vehicle Design
[Figure: How do we get from the car on the left (a test prototype) to the car on the right (a production vehicle)?]
There has been incredible innovation and progress in automotive technology, from the LIDAR, radar, cameras, and other sensors that help vehicles "see" their driving environment, to the CPUs and SoCs that make sense of the massive amounts of data self-driving cars generate. The automotive industry has made tremendous progress in developing self-driving prototypes, from consumer to commercial vehicles. The big question is: what will it take to make self-driving cars mainstream?
Test vehicles lay the foundation for mass production, but the path from the test prototypes on the road today to finished self-driving vehicles that anyone can buy from a dealer is not a direct one. Test vehicles give engineers an environment for research, learning, and improvement; they form a feedback loop in which engineers can continuously refine their systems and try new ideas under real-world conditions. Testing will not always go smoothly, but engineering teams learn from those results and improve their designs until they are ready for production.
The bar for commercial self-driving cars will be even higher. The vehicle must always operate safely; this means it must be not only fail-safe but fail-operational, continuing to function even when a failure occurs. That is a huge engineering challenge. Here are five issues automotive engineers will face on the road to commercializing self-driving cars.
The standards for L3-L5 vehicles are far stricter. At these levels, the human driver can no longer be relied on as a backup. The ISO 26262 functional-safety standard and its ASIL classifications impose growing safety requirements on new vehicle functions and capabilities. Today, vehicle systems are largely designed for fail-safe behavior. As L3-L5 vehicles advance, systems must instead be fail-operational: even when a failure occurs, the system must keep the function fully available or fall back to a defined degraded mode.
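The fail-operational idea above can be sketched as a small supervisor that tracks redundant channels. This is a minimal illustration, not a production safety architecture; the channel names, modes, and two-channel assumption are all hypothetical.

```python
from enum import Enum

class Mode(Enum):
    FULL = "full"            # all functions available
    DEGRADED = "degraded"    # limited operation, plan a safe stop
    SAFE_STOP = "safe_stop"  # bring the vehicle to a safe halt

class Supervisor:
    """Toy fail-operational supervisor over two redundant channels.

    Losing one channel degrades the system instead of shutting it down;
    losing both forces a safe stop rather than silent failure.
    """
    def __init__(self):
        self.channels_ok = {"primary": True, "backup": True}

    def report_failure(self, channel):
        self.channels_ok[channel] = False

    def mode(self):
        healthy = sum(self.channels_ok.values())
        if healthy == 2:
            return Mode.FULL
        if healthy == 1:
            return Mode.DEGRADED
        return Mode.SAFE_STOP
```

The key design point is that a single failure changes the operating mode, not the availability of the function: the vehicle keeps driving, in a degraded envelope, until it can stop safely.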
Therefore, the system must be tested against every "what if" situation. That means building many test scenarios that combine simulation with vehicle-level testing, covering requirements such as emergency braking that must be available at all times, and vehicle control that can navigate to the nearest safe stopping place if a failure occurs. Engineers also need to determine how much data must be collected to train the system, since that data is what covers the "what if" situations.
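A "what if" scenario suite can be as simple as a table of situations with expected decisions, checked against the function under test. The sketch below uses a textbook stopping-distance rule (v²/2a plus a margin) for an emergency-braking decision; the function name, deceleration, and margin values are illustrative assumptions, not any particular product's logic.

```python
def aeb_should_brake(speed_mps, distance_m, decel_mps2=6.0, margin_m=2.0):
    """Trigger emergency braking when the stopping distance (v^2 / 2a)
    plus a safety margin reaches the distance to the obstacle."""
    stopping_distance = speed_mps ** 2 / (2 * decel_mps2)
    return stopping_distance + margin_m >= distance_m

# "What if" scenarios: (speed m/s, distance to obstacle m, expected decision)
SCENARIOS = [
    (20.0, 50.0, False),  # ~33.3 m to stop + 2 m margin < 50 m: no brake
    (20.0, 30.0, True),   # 35.3 m >= 30 m: brake now
    (10.0, 12.0, False),  # ~8.3 m + 2 m margin < 12 m: no brake
]

for speed, dist, expected in SCENARIOS:
    assert aeb_should_brake(speed, dist) == expected
```

In practice such tables are generated at scale from simulation and from logged road data, but the pattern is the same: enumerate the situations, state the expected behavior, and verify every one.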
In a dynamically changing environment, data from different sources can only be fused, or compared for diagnostic purposes, if it is known to originate from the same point in time. As more and more sensors (radar, cameras, LIDAR) are distributed throughout the vehicle, it becomes essential to understand how the various data streams relate to each other in real time.
The central control unit needs accurate, real-time data collection so it has all the information required to decide, with the lowest possible latency, what action to take. Accurate time synchronization is also essential for the machine learning models that L3-L5 autonomous vehicles require. The challenge is that the acquired data points may come from different sources arriving at different rates. Take sensor fusion as an example: compared with high-level sensor fusion, low-level sensor fusion, which preserves detailed information about the driving environment, demands more bandwidth and more complex time synchronization. Time synchronization standards also vary by network protocol, adding further complexity for automotive engineers. Ethernet TSN (Time-Sensitive Networking) and Ethernet AVB (Audio Video Bridging) give automotive control systems a network-based way to achieve time synchronization. GNSS timing is another approach, particularly for autonomous navigation systems.
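One common way to fuse streams that arrive at different rates is to resample the slower stream onto the faster stream's timestamps, assuming both are stamped against a shared, synchronized clock. The sketch below uses made-up rates (a 30 Hz camera and a 10 Hz radar) and simple linear interpolation; real systems use the synchronized clock that TSN or GNSS provides.

```python
def sample_at(stream, t):
    """Linearly interpolate a sorted (timestamp, value) stream at time t.

    Assumes t lies within the stream's time span; fusing a sample whose
    true capture time is unknown would defeat the purpose.
    """
    for (t0, v0), (t1, v1) in zip(stream, stream[1:]):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            return v0 + w * (v1 - v0)
    raise ValueError("t is outside the stream's time span")

# Radar range readings at 10 Hz (timestamps in seconds on a shared clock)
radar = [(0.0, 0.0), (0.1, 10.0), (0.2, 20.0)]

# Camera frames at 30 Hz: attach an interpolated radar value to each frame
camera_ts = [0.0, 1 / 30, 2 / 30, 0.1]
fused = [(t, sample_at(radar, t)) for t in camera_ts]
```

Without a shared clock, the interpolation above is meaningless: a few milliseconds of timestamp error at highway speed translates into tens of centimeters of positional disagreement between sensors.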
Vehicles at L1-L2 still rely on human involvement, but in L3-L5 vehicles, cameras, radar, LIDAR, and other sensors must act as the car's "eyes." They therefore need to solve several problems, including handling varied data from multiple sources and operating under many environmental conditions. Sensors and cameras must detect the many features around the vehicle with high accuracy to deliver a consistent, comfortable, and safe driving experience. This includes all the abnormal road conditions a human driver would see and react to, such as animals that suddenly run onto the road, other vehicles that disobey traffic rules, potholes, or pedestrians crossing from unexpected places. The vision system must also work in sun, rain, snow, and fog. Vision systems built on high-precision visual processing and sensor-fusion technology, together with high-performance hardware optimized for computer vision, provide the computing power and software capability needed to support the growing number of sensors and cameras in cars.
Production puts pressure on costs, and that pressure drives engineering innovation. When we talk about the computing-power challenge in autonomous vehicle design, we are also talking about power consumption. For L3-L5 vehicles, computing performance must be pushed as far as possible while still meeting cost targets and tighter power budgets. One way to attack this problem is optimization: software is a major driver of power consumption, and how much it consumes depends heavily on how the microprocessors, GPUs, and other chips are architected. Rising functional-safety requirements add further challenges: fail-operational behavior requires triple redundancy, fail-silent mechanisms also require redundant systems, and a car cannot supply unlimited power. High temperatures are a problem as well, because the heat to be dissipated grows with power consumption. Dedicated hardware accelerators let processors meet the performance requirements of specific applications at very low power. Understanding the needs of future systems and designing hardware accelerators specifically for those needs will enable cost-effective, energy-efficient autonomous driving systems.
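The interaction between redundancy and accelerators can be made concrete with back-of-the-envelope arithmetic. The numbers below (a 200 W compute stack, triple redundancy, accelerators 5x more power-efficient than general-purpose cores) are illustrative assumptions chosen for the example, not figures from any real system.

```python
def compute_power_w(base_compute_w, redundancy, accel_fraction, accel_efficiency):
    """Rough power budget for a redundant compute stack.

    A fraction of the workload is offloaded to accelerators that are
    accel_efficiency times more power-efficient; the whole stack is then
    replicated `redundancy` times for functional safety.
    """
    general = base_compute_w * (1 - accel_fraction)
    accelerated = base_compute_w * accel_fraction / accel_efficiency
    return redundancy * (general + accelerated)

# Triple redundancy on general-purpose compute alone: 3 x 200 W = 600 W
no_accel = compute_power_w(200, 3, accel_fraction=0.0, accel_efficiency=5.0)

# Offload 70% of the work to 5x-more-efficient accelerators:
# 3 x (60 W + 28 W) = 264 W for the same redundant functionality
with_accel = compute_power_w(200, 3, accel_fraction=0.7, accel_efficiency=5.0)
```

The point of the exercise: redundancy multiplies whatever baseline you start from, so per-watt efficiency gains from dedicated accelerators pay off several times over in a fail-operational design.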
As the market has moved toward connected cars, the automotive industry has built an extensive base of expertise around developing and deploying active safety features. Advanced emergency braking, adaptive cruise control, lane assist, cross-traffic alert, surround-view systems, traffic-jam assist, and automated parking: these features are becoming commonplace in every vehicle segment, from entry-level to premium. Automated driving systems face similar challenges as they move from testing and prototypes to mainstream products. The industry can leverage its previous ADAS investments and lessons learned to address today's challenges, enabling varying degrees of autonomy and, ultimately, full autonomy.
About Us
Renesas Electronics Corporation (TSE: 6723) delivers trusted, innovative embedded design and complete semiconductor solutions, aiming to improve the way people work and live through billions of connected, intelligent devices built with its products. As a leading global supplier of microcontrollers, analog and power devices, and SoC products, Renesas provides comprehensive solutions for applications such as automotive, industrial, home, office automation, and information and communication technology, and looks forward to working with you to create a limitless future. For more information, please visit renesas.com.