What role do LiDAR sensors and adaptive computing play in the automotive industry?

Publisher: MagicGarden · Last updated: 2024-11-12

The field of autonomous driving technology is advancing rapidly: the number of highly automated vehicles shipped annually is expected to grow at a compound annual growth rate of 41% from 2024 to 2030. This rapid growth is driving unprecedented demand from automotive brands for precise, reliable sensor technologies that can support increasingly automated, and ultimately fully autonomous, driving.


To meet that demand, LiDAR sensors have become an indispensable tool for automakers and automotive equipment suppliers. They "read the road" by providing the depth perception and distance detection resolution needed to classify objects.


However, as the industry moves into the next generation of autonomous solutions, from the latest innovations in active safety systems to driverless vehicles, edge systems such as LiDAR must expand their capabilities to deliver greater depth resolution and reliability for increasingly complex scenarios.


Integrating adaptive computing technologies such as FPGAs and adaptive SoCs enables companies to build fully perceptive platforms that can navigate complex driving environments and identify potential hazards with high accuracy.


LiDAR system architecture types


LiDAR systems use three main types of architecture: mechanical (non-solid state), MEMS (semi-solid state), and flash (solid state). Each architecture has its pros and cons depending on the application scenario.


The mechanical architecture is the most widely deployed (see Table 1). A rotating transmitter sends out light pulses, which reflect off objects and return to a receiver; because the transmitter spins at high speed, the sensor covers a 360-degree field of view and assembles the returns into a point cloud. The advantages of this architecture are long detection range and wide field of view; the disadvantages are large size and high cost. A minimal sketch of how such a scan becomes a point cloud follows Table 1.


Table 1: Mechanical (non-solid state)


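To make the point cloud idea concrete, the following sketch converts one revolution of range returns from a spinning sensor into Cartesian points. It assumes a generic 16-channel unit with a 30-degree vertical field of view; all names and values here are illustrative, not any vendor's API.

```python
import numpy as np

def polar_to_cartesian(ranges_m, azimuth_rad, elevation_rad):
    """Convert one revolution of range returns into 3D points.

    ranges_m:      (N,) measured distances in meters
    azimuth_rad:   (N,) horizontal angle of the rotating transmitter
    elevation_rad: (N,) vertical angle of each laser channel
    """
    x = ranges_m * np.cos(elevation_rad) * np.cos(azimuth_rad)
    y = ranges_m * np.cos(elevation_rad) * np.sin(azimuth_rad)
    z = ranges_m * np.sin(elevation_rad)
    return np.stack([x, y, z], axis=-1)  # (N, 3) point cloud

# Hypothetical 16-channel sensor sweeping a full 360-degree revolution
# in 1800 azimuth steps (0.2-degree horizontal resolution).
azimuth = np.repeat(np.linspace(0.0, 2.0 * np.pi, 1800), 16)
elevation = np.tile(np.radians(np.linspace(-15.0, 15.0, 16)), 1800)
ranges = np.full(azimuth.shape, 40.0)  # placeholder 40 m returns
cloud = polar_to_cartesian(ranges, azimuth, elevation)
print(cloud.shape)  # (28800, 3): one point per channel per azimuth step
```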


MEMS-based LiDARs use a system of emitters and micro-mirrors to deflect light, replacing the large spinning assembly of mechanical LiDAR (see Table 2). They are already used in autonomous driving applications. They are smaller, lighter, and more cost-effective, but they have a relatively limited field of view and are susceptible to shock and vibration.


Table 2: MEMS type (semi-solid state)




Flash systems are solid-state designs; they include optical phased array (OPA) systems, which use an array of optical antennas to radiate light at different angles (see Table 3). This newer solution also has a limited field of view, so multiple units must be installed to cover a full 360 degrees. A sketch of the beam-steering relation behind an OPA follows Table 3.


Table 3: Flash type (solid state)


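To illustrate how an OPA steers its beam without moving parts, the sketch below applies the standard phased-array relation sin(θ) = λ·Δφ / (2π·d), where Δφ is the phase step between adjacent antennas and d is their spacing. The 905 nm wavelength and half-wavelength pitch are assumptions chosen for illustration, not parameters of any specific product.

```python
import numpy as np

WAVELENGTH_M = 905e-9            # assumed emitter wavelength
SPACING_M = WAVELENGTH_M / 2.0   # assumed antenna pitch (half wavelength)

def steering_angle_deg(phase_step_rad):
    """Far-field beam angle of a uniform optical antenna array.

    Standard phased-array relation:
        sin(theta) = wavelength * dphi / (2 * pi * d)
    """
    s = WAVELENGTH_M * phase_step_rad / (2.0 * np.pi * SPACING_M)
    return float(np.degrees(np.arcsin(s)))

# Sweeping the phase step electronically sweeps the beam angle.
for dphi in (0.0, np.pi / 4.0, np.pi / 2.0):
    print(f"phase step {dphi:.3f} rad -> beam at {steering_angle_deg(dphi):+.1f} deg")
```

With a half-wavelength pitch, a phase step of π/2 steers the beam to only 30 degrees, which is why covering a full 360 degrees still requires several units.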


Companies like AMD provide FPGAs and adaptive computing devices that enable these LiDAR systems and applications. Regardless of the architecture chosen, FPGAs and adaptive computing devices can meet the field's varied implementation needs in terms of size, cost, and resolution.


Overcoming timing jitter


The value of LiDAR lies in its ability to provide image classification, segmentation, and object detection data, which are essential for 3D visual perception enhanced by artificial intelligence (AI). Cameras alone cannot provide such accurate data, especially in bad weather or low light conditions, which is why LiDAR has become an indispensable technology for autonomous driving.


However, LiDAR still has to overcome several challenges, including timing jitter. When the timing or position of the laser pulses fluctuates, the quality of the generated image suffers, which in turn hinders object recognition and depth resolution. As LiDAR's role in autonomous driving continues to expand, continued improvement of the technology is critical.


Adaptive computing can reduce timing jitter and improve resolution because FPGAs enable faster data processing. FPGAs provide the flexibility to optimize data paths and memory hierarchies to reduce latency, and to offload AI engines that adjust pulse timing to minimize fluctuations. Ultimately, the smaller the jitter, the more accurately the sensor can detect and recognize objects.
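The effect of jitter is easy to quantify: a time-of-flight LiDAR converts a round-trip time t into distance as c·t/2, so any uncertainty in the pulse timestamp maps directly into a range error. A minimal sketch of that relationship (the jitter values are illustrative):

```python
C_M_PER_S = 299_792_458.0  # speed of light

def range_error_m(jitter_s: float) -> float:
    """Worst-case range error caused by pulse-timing jitter.

    Time-of-flight range is c * t / 2, so a timing uncertainty of
    jitter_s seconds becomes a distance uncertainty of c * jitter_s / 2.
    """
    return C_M_PER_S * jitter_s / 2.0

for jitter_ps in (10, 100, 1000):
    err_cm = range_error_m(jitter_ps * 1e-12) * 100.0
    print(f"{jitter_ps:>5} ps of jitter -> {err_cm:.2f} cm of range error")
```

Even 100 ps of jitter costs about 1.5 cm of range accuracy, which is why tightly controlled pulse timing matters so much for depth resolution.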


Evolution and expansion of LiDAR architecture


Currently, many vehicles in production may only be equipped with a forward-looking LiDAR. But this is changing, as the next generation of vehicles will be equipped with multiple systems, including forward, rear, and side-view LiDAR, to provide more comprehensive coverage of the road and its surroundings. This expanded LiDAR sensor ecosystem requires powerful and efficient AI computing platforms. These platforms will process and transmit the large amounts of data generated and enable the high-speed connectivity and low latency required for the ecosystem to operate effectively.


The use of FPGA-based multiprocessor systems-on-chip (MPSoCs) can reduce the size of these LiDAR devices. Because FPGAs are optimized for edge computing, they can integrate seamlessly and efficiently with multiple systems to cope with the surge in sensor counts in today's autonomous driving solutions. By reducing system size and space, an MPSoC enables multiple LiDARs to work together to generate a comprehensive view of the vehicle's path.


In addition, because FPGA-based MPSoCs can be reprogrammed after manufacturing, they can serve multiple LiDAR systems, including future ones. This adaptability lets automakers reduce system costs and keep designs forward-compatible, eliminating the need to completely rework the original system when a next-generation solution emerges.


Point cloud preprocessing and machine learning acceleration


Point cloud images are at the heart of autonomous driving technology because they build an image of an object's shape from many individual measurements. In some cases, companies are using digital multi-beam flash LiDAR with up to 128 channels to generate these rich point clouds. This requires powerful hardware that is optimized for the task and capable of both image and digital signal processing.


For example, high-speed connectivity and data transfer can be achieved by streaming image data through high-speed serial transceivers in the programmable logic (PL). The PL also enables parallel processing, lower clock speeds, and distributed power; in addition, companies need a high-bandwidth connection between the processing system and the PL to partition the workload between software and the related hardware-acceleration functions.


Ultimately, this produces a point cloud containing depth, signal, and ambient-light data within an already streamlined sensor architecture. That unlocks more efficient signal processing and the high resolution LiDAR needs for reliable object detection, high-precision 3D mapping, and a minimum range of zero centimeters when the vehicle operates in confined environments.
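As a rough illustration of such a frame, the sketch below models each point as a record carrying the depth, signal, and ambient fields described above, plus a simple pre-filter that discards weak or out-of-range returns before downstream perception. The field names, thresholds, and frame dimensions are illustrative assumptions, not any sensor's actual data format.

```python
import numpy as np

# Hypothetical per-point record for a multi-beam flash LiDAR frame.
POINT_DTYPE = np.dtype([
    ("x", np.float32), ("y", np.float32), ("z", np.float32),
    ("depth_m", np.float32),  # measured range
    ("signal", np.uint16),    # return-signal intensity
    ("ambient", np.uint16),   # background light level
])

def prefilter(frame, min_signal=20, max_depth_m=120.0):
    """Keep only returns strong enough and near enough to trust."""
    keep = (frame["signal"] >= min_signal) & (frame["depth_m"] <= max_depth_m)
    return frame[keep]

# One frame: 128 channels x 512 azimuth steps, filled with placeholder data.
frame = np.zeros(128 * 512, dtype=POINT_DTYPE)
frame["signal"] = np.random.default_rng(0).integers(0, 200, frame.size)
frame["depth_m"] = 50.0
print(frame.size, "->", prefilter(frame).size, "points after filtering")
```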


Preparing for current and future sensor technologies


As sensing technologies such as LiDAR become integral to autonomous driving, a processing platform that is both powerful and efficient is essential to achieve the depth resolution required for safety-critical functions. Adaptive computing combines AI engines and FPGAs to optimize the object detection and data conditioning needed to deliver accurate, reliable performance.


The LiDAR ecosystem will mature as next-generation solutions are created and evolve into an integral part of the autonomous driving experience. The flexibility enabled by adaptive computing can drive the required evolution and innovation as additional workloads are deployed over the vehicle lifecycle.


For example, adaptive computing can enable in-field software and hardware upgrades that provide the processing power and low latency LiDAR needs for high-quality endpoint detection, and it can ensure that innovative new features and algorithms are deployed remotely and securely to support future-proof designs.


Achieving the sensor detection and depth resolution expected in today's and tomorrow's automotive applications requires flexibility, powerful processing, and strong integration capabilities. It also requires modularity to minimize design complexity and cost while maximizing accuracy and reliability. Incorporating adaptive computing into LiDAR systems, and into how they are integrated, can unlock the deployment scale required to support fully autonomous driving.

