Preface: 2020 is regarded as the first year of mass production for automotive LiDAR. Since July of last year, the RS-LiDAR-M1 has received a large number of design-win orders from automakers, including L3 heavy-duty truck technology companies, North American new-energy vehicle companies, Chinese EV startups, traditional OEMs, and top supercar brands, covering models ranging from supercars to family cars and from passenger cars to commercial vehicles.
The M1 is an automotive-grade intelligent solid-state LiDAR. Since the first quarter of this year, the key technologies and milestones behind the M1's automotive-grade qualification have been announced step by step:
LiDAR perception corner cases resolved: corner cases such as ghosting and blooming from highly reflective targets, near-field absorption points, near-field voids, sunlight interference, and multi-LiDAR crosstalk have been successfully addressed;
Passed automotive reliability verification: including but not limited to vibration, mechanical shock, EMC (electromagnetic compatibility), chemical corrosion resistance, salt spray, and high/low temperature and damp-heat testing;
Equipped with complete supporting functions: including OTA upgrades, rain/snow/dust/noise detection and filtering, stain detection, smart cleaning, smart heating, performance testing, power management, network management, and more;
High-standard functional safety design: fully meets the qualitative and quantitative functional safety goals of SIL-2 and ASIL-B;
Automotive-grade mass production line completed: China's first automotive-grade solid-state LiDAR mass production line was completed in March.
Today, we unveil the "secret weapon" of the M1's intelligence: hardware intelligence plus software intelligence, namely the intelligent "Gaze" function in hardware and intelligent object-level perception in software.
Hardware intelligence: a smart "Gaze" function that can zoom
Automatic zoom is one of the key enabling technologies behind intelligent computer vision and imaging systems. By changing the focal length, a single camera can quickly take on the role of several fixed-focal-length lenses, which greatly facilitates environmental perception.
This intelligent zoom technology removes the need to stack separate long-focus and short-focus cameras, and it is well suited to quickly capturing distant scenes, close-ups, and moving objects, which is why it has become a standard tool for documentary shooting.
● Integrating into the smart car ecosystem: making the LiDAR hardware itself intelligent
In daily driving, a driver handles special road conditions differently in different scenarios and keeps watching different parts of the road: on the highway, watch distant moving vehicles and small static obstacles; at urban intersections, stay alert to surrounding pedestrians and two-wheelers; in congested sections, watch for neighboring vehicles cutting in.
Intelligent driving systems face the same variety of scenarios, which means the LiDAR hardware's performance needs to be optimized for environmental perception under different road conditions.
Balancing hardware performance, efficiency, and cost, we upgraded the LiDAR hardware to achieve camera-like zoom capability with practical, efficient results. We call this intelligent function "Gaze": the sensor focuses at any moment on the key perception area that a driver would be concerned about.
Hardware intelligence is part of RoboSense's Smart LiDAR Sensor System roadmap, and the version of the M1 featuring the "Gaze" function won a CES 2019 Innovation Award.
RoboSense has completed batch prototype verification of the "Gaze" function with design-win customers, and starting in June the function will be officially available in the new version of the RS-LiDAR-M1.
● "Intelligence" is an advantage built into the DNA of 2D MEMS chip scanning
Traditional LiDARs, including the mechanical spinning LiDARs and rotating-mirror LiDARs used on autonomous driving test vehicles, all rely on a one-dimensional motor-driven scanning architecture. The former uses a motor to spin the entire laser transceiver assembly, while the latter uses a motor to rotate only the mirror.
△One-dimensional mechanical scanning solution structure
△One-dimensional rotating mirror scanning solution structure
This one-dimensional motor scanning architecture has been in use for more than a decade. Because the laser transceiver assembly is completely fixed before leaving the factory, both the scanning beam distribution and the maximum frame rate are locked in at the factory.
△ One-dimensional motor scanning solution: fixed line distribution, fixed scan frame rate
The new-generation automotive-grade intelligent solid-state LiDAR RS-LiDAR-M1 adopts RoboSense's self-developed two-dimensional MEMS intelligent scanning chip, which can arbitrarily vary the horizontal and vertical scanning speeds to change the scan pattern; a switch takes effect in the very next frame after the command is received. This brings two major changes to LiDAR performance and application.
△2D MEMS chip intelligent scanning solution
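As a conceptual illustration, the sketch below shows how a per-frame scan-pattern switch might look from the perception software's side. All names (ScanPattern, Lidar, apply_pattern) and the numeric values are hypothetical and illustrative; they are not RoboSense's actual control interface.

```python
from dataclasses import dataclass

# Hypothetical sketch only: these names and values do not correspond to
# RoboSense's real control protocol.
@dataclass
class ScanPattern:
    h_fov_deg: float        # horizontal field of view covered by the scan
    v_fov_deg: float        # vertical field of view covered by the scan
    roi_center_deg: float   # vertical center of the dense ROI band
    roi_height_deg: float   # vertical extent of the dense ROI band
    roi_res_deg: float      # vertical resolution inside the ROI
    frame_rate_hz: float    # frames per second

class Lidar:
    def __init__(self):
        self.pending = None
        self.active = None

    def apply_pattern(self, pattern: ScanPattern):
        """Queue a new pattern; per the article, it takes effect on the next frame."""
        self.pending = pattern

    def start_next_frame(self):
        """Frame boundary: the queued pattern (if any) becomes the active one."""
        if self.pending is not None:
            self.active, self.pending = self.pending, None

# Example: switch from a wide urban pattern to a highway pattern with a tighter,
# denser ROI; the change is picked up at the very next frame boundary.
lidar = Lidar()
lidar.apply_pattern(ScanPattern(120, 25, 0, 10, 0.2, 15))
lidar.start_next_frame()   # urban pattern now active
lidar.apply_pattern(ScanPattern(120, 25, 0, 5, 0.1, 10))
lidar.start_next_frame()   # highway pattern active one frame later
```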
1. From the coarse notion of line count to precise ROI resolution
The long-distance obstacles that intelligent driving cares about are concentrated in the ROI (Region of Interest) in the middle of the LiDAR's field of view. What really needs to improve is the LiDAR's ability to perceive distant obstacles, and the core of that is raising the resolution within this central ROI.
In the industry's early days, the scan lines of one-dimensional mechanical LiDARs were evenly distributed, so the line count was an intuitive indicator of perception capability.
△One-dimensional mechanical scanning, evenly distributed lines
As LiDAR moved into automotive applications, a consensus formed that the ROI needs a denser concentration of scan lines. The one-dimensional mechanical scanning solution achieves this by stacking extra fixed laser components in the middle of the field of view, yielding an ROI with a fixed angular range and fixed resolution.
△ One-dimensional mechanical scanning, fixed angle, fixed resolution ROI area
With two-dimensional MEMS intelligent scanning, the line distribution can be changed arbitrarily, so the LiDAR can freely adjust both the angular range and the resolution of the ROI to suit different driving scenarios.
△ Two-dimensional MEMS chip scanning: ROI angular range and resolution freely adjustable
The M1 "Gaze" function can dynamically adjust the ROI and, at the same time, dynamically adjust the resolution, doubling the resolution within the ROI repeatedly to achieve perception equivalent to a one-dimensional scan of hundreds or even thousands of lines. The two adjustments can be configured independently, so computing resources are not wasted.
△ Two-dimensional MEMS chip intelligent scanning solution, with adjustable resolution
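To make the "equivalent line count" idea concrete, here is a small, illustrative calculation. The 25° vertical field of view is the M1's commonly cited figure; the resolution values are examples, not product settings.

```python
# Illustrative only: map an ROI vertical resolution to the line count a uniform
# one-dimensional scanner would need over the same vertical FOV.
V_FOV_DEG = 25.0  # assumed vertical field of view

def equivalent_lines(v_res_deg: float) -> int:
    """Approximate uniform line count needed to match this vertical resolution."""
    return round(V_FOV_DEG / v_res_deg)

for res in (0.2, 0.1, 0.05):
    print(f"{res:.2f} deg inside the ROI ~ a uniform {equivalent_lines(res)}-line scanner")
# 0.20 deg ~ 125 lines, 0.10 deg ~ 250 lines, 0.05 deg ~ 500 lines:
# each halving of the ROI resolution doubles the equivalent line count.
```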
2. From a locked fixed frame rate to a flexible real-time adjustable frame rate
Different driving scenarios place different demands on the environmental perception frame rate. In urban driving, surrounding obstacles are close and the available reaction distance is short, so the frame rate needs to be raised to win back reaction time. On the highway, obstacles are far away, so it is resolution rather than frame rate that should be increased, extending the detection distance and thereby the reaction time.
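A rough back-of-the-envelope budget illustrates this trade-off. The speeds, ranges, and processing delay below are assumed, illustrative figures; the frame period (1 / frame rate) is the part of the sensing latency that the LiDAR frame rate controls.

```python
# Assumed, illustrative numbers: how much reaction time remains after sensing
# latency (one frame period plus processing) is subtracted from time-to-contact.
def reaction_budget_s(detection_range_m: float, closing_speed_mps: float,
                      frame_rate_hz: float, processing_s: float = 0.1) -> float:
    time_to_contact = detection_range_m / closing_speed_mps
    sensing_latency = 1.0 / frame_rate_hz + processing_s
    return time_to_contact - sensing_latency

# Urban cut-in: obstacle appears 20 m away, closing at 10 m/s (36 km/h).
print(reaction_budget_s(20, 10, frame_rate_hz=10))   # ~1.80 s left
print(reaction_budget_s(20, 10, frame_rate_hz=20))   # ~1.85 s left; higher fps trims latency
# Highway: obstacle detected 150 m ahead, closing at 33 m/s (about 120 km/h).
print(reaction_budget_s(150, 33, frame_rate_hz=10))  # ~4.3 s; detection range, not fps, dominates
```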
The one-dimensional scanning solution cannot change its frame rate during operation: the rate is locked to whichever gear was selected at power-on and cannot follow adjustments of the perception system's frame rate (including the camera frame rate). Not only does this prevent it from providing more scenario-specific environmental data, it also makes frame alignment difficult during multi-sensor fusion.
△ Two-dimensional MEMS chip intelligent scanning solution, frame rate adjustable
The 2D MEMS smart scanning solution can dynamically raise or lower the frame rate while the LiDAR is running, and the frame rate can take any continuous value rather than a few fixed gears. The LiDAR frame rate can therefore track the driving scene and the perception system's requirements, and it can also be matched in a fixed ratio to the camera frame rate to keep the sensors triggering in sync.
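The sketch below illustrates the ratio-matching idea with a hypothetical helper function (matched_lidar_rate is not a real API). A continuously adjustable frame rate lets the LiDAR lock exactly onto whatever rate the camera happens to run at.

```python
# Hypothetical helper: pick a LiDAR frame rate held at a fixed integer ratio to
# the camera frame rate, so both sensors can be triggered from the same clock.
def matched_lidar_rate(camera_fps: float, ratio: int = 3) -> float:
    """LiDAR rate locked to camera_fps / ratio, e.g. 30 fps camera -> 10 Hz LiDAR."""
    return camera_fps / ratio

for cam_fps in (30.0, 25.0, 36.0):
    lidar_hz = matched_lidar_rate(cam_fps, ratio=3)
    print(f"camera {cam_fps:.1f} fps -> LiDAR {lidar_hz:.2f} Hz (every 3rd camera frame aligns)")
# A fixed-gear LiDAR (say, only 10 or 20 Hz) cannot follow a 25 or 36 fps camera
# exactly, so timestamps drift apart; a continuous rate keeps the 1:3 ratio exact.
```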
● The M1 "Gaze" function takes the driving experience from safety to comfort
1. In high-speed scenarios, every improvement in vertical resolution brings a leap in user experience
On the highway, vehicles travel fast and following distances are long, so the perception system pays more attention to distant vehicles ahead and to small static obstacles such as warning triangles, traffic cones, fallen tires, and fallen branches. By contrast, overly dense point cloud data in unimportant regions such as the ground and the sky only becomes a burden on computing power.
△ Highway scene, the driver focuses on the ROI area in front
For the intelligent driving system to realize the HWP (Highway Pilot) function, it needs a longer effective detection distance for the obstacles listed above, which requires the LiDAR to offer both strong ranging capability and high effective resolution (that is, high resolution within the ROI where the obstacle sits).
In HWP (Highway Pilot) mode, enabling the M1 "Gaze" function intelligently raises the vertical resolution of the ROI, letting intelligent driving make the leap from safety to comfort. With "Gaze" on, the vertical resolution of the central ROI can be increased dynamically from 0.2° to 0.1° (or finer), the obstacle point cloud density doubles, and the height of small objects ahead can be measured accurately, helping the planning layer judge whether the vehicle can pass over them given its wheel and ground clearance.
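A short geometric check shows what the 0.2° to 0.1° step means for a small obstacle. The object size and range below are assumed, illustrative values.

```python
import math

# Illustrative geometry: number of vertical scan rows landing on an obstacle of
# height h at range d, for the two ROI resolutions discussed above.
def rows_on_object(height_m: float, range_m: float, v_res_deg: float) -> float:
    row_spacing_m = range_m * math.tan(math.radians(v_res_deg))  # vertical gap between beams at that range
    return height_m / row_spacing_m

for res in (0.2, 0.1):
    n = rows_on_object(height_m=0.3, range_m=100.0, v_res_deg=res)
    print(f"0.3 m obstacle at 100 m, {res} deg: ~{n:.1f} rows")
# 0.2 deg -> ~0.9 rows (the object may fall between beams or span a single row);
# 0.1 deg -> ~1.7 rows (twice the returns, enough to start estimating its height).
```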