A few days ago, the share price of Luminar Technologies (ticker: LAZR), a lidar supplier that went public through a reverse merger, shot up as if on a stimulant, climbing day after day for a cumulative gain of 114% over three days.
You may be unfamiliar with Luminar. It is the second Silicon Valley lidar company to complete a listing, and it is committed to supplying key sensors for self-driving cars; its elder rival is Velodyne Lidar Inc.
As for which of the two has the better lidar technology, I am frankly not interested. What I care about is why lidar companies are so favored by the market in the early days of their listings.
How capable is this vaunted lidar, really?
Stock-market investors (small retail investors aside) radiate shrewdness from every pore; the companies they pile into presumably have good prospects and unusually wide moats.
Lidar for vehicle autonomous driving clearly has broad prospects. As for whether the company's moat is wide enough, we have to start from the characteristics of lidar itself.
We know that autonomous driving consists of three parts: perception, decision-making, and execution; lidar plays an important role in the perception stage.
We all learned in school that sonar imitates bats' ultrasonic echolocation, and radar applies the same echo-ranging idea with radio waves; lidar works on a similar principle. It determines distance by firing a laser beam at an object, receiving the beam reflected back, and measuring the time difference and phase difference of the laser signal.
This is much like the serve-and-return of table tennis. The difference is that humans judge the ball's speed and spin with their eyes and brain, while lidar uses hardware such as laser transmitters and receivers, combined with software algorithms, to determine the distance and direction of obstacles.
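To make the time-of-flight idea concrete, here is a minimal sketch (the 200 ns return pulse is a hypothetical input) that applies distance = c × Δt / 2; the signal travels out and back, so the round-trip time is halved:

```python
# Time-of-flight ranging: distance = c * dt / 2 (round trip, so halve it).
C = 299_792_458.0  # speed of light, m/s

def tof_distance(delta_t_seconds: float) -> float:
    """One-way distance in meters for a measured round-trip time."""
    return C * delta_t_seconds / 2.0

# Hypothetical example: a return pulse arriving 200 ns after emission
print(f"{tof_distance(200e-9):.2f} m")  # -> 29.98 m
```

A real lidar measures that time difference in dedicated hardware, but the arithmetic is exactly this.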
But if you think lidar can only determine the position of obstacles, you are underestimating it. A lidar comprises four major subsystems: the laser transmitter and receiver mentioned above, plus a scanning system and an information processing system.
While the laser transmitter periodically emits beams, the scanning system is hardly idle: it sweeps those beams to collect depth information across the target's surface and capture reasonably complete spatial characteristics of the measured target; the information processing system then reconstructs the collected returns into a three-dimensional surface, yielding a 3D image that is easy for us to understand.
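As a rough sketch of that reconstruction step, the snippet below (assuming spherical coordinates in degrees, with hypothetical sample returns) converts raw (range, azimuth, elevation) measurements into Cartesian points, producing the point cloud from which a 3D surface is later built:

```python
import math

def to_cartesian(range_m: float, azimuth_deg: float, elevation_deg: float):
    """Convert one lidar return from spherical to Cartesian coordinates."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)  # forward
    y = range_m * math.cos(el) * math.sin(az)  # left
    z = range_m * math.sin(el)                 # up
    return (x, y, z)

# Hypothetical raw returns: (range in m, azimuth in deg, elevation in deg)
returns = [(12.4, 0.0, -1.0), (12.6, 0.2, -1.0), (30.1, 15.0, 2.0)]
point_cloud = [to_cartesian(*r) for r in returns]
for x, y, z in point_cloud:
    print(f"({x:.2f}, {y:.2f}, {z:.2f})")
```

The surface-reconstruction step that follows (meshing the point cloud) is far more involved and is omitted here.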
What lidar can do is impressive on its own, and developing an automotive-grade lidar is harder still (so far, only Valeo's SCALA has reached automotive-grade mass production). Given the important role lidar is expected to play in advancing autonomous driving, it is understandable that Luminar was so warmly received in the early days of its listing.
Strong as it may be, it can't beat the big-bottomed man
Lidar not only has the fine skill of constructing clear 3D images of targets; it also boasts high resolution, strong penetration and anti-interference ability, and all-weather operation.
It is fair to say lidar holds inherent advantages over other radars and cameras.
As is well known, He Xiaopeng, founder of Xpeng Motors, is a lidar fan. At this year's Guangzhou Auto Show he announced that Xpeng will upgrade its autonomous driving software and hardware starting with models produced in 2021, adopting lidar to improve the company's object recognition performance.
Yet, powerful as it seems, lidar has not won everyone's heart.
Least of all the big-bottomed Musk, who not only sneered at Xpeng's lidar route ("Xpeng's software is backward and has no neural-network computing capability"), but also stated publicly: "Lidar is like a bunch of appendixes growing on a person's body. The appendix itself is basically pointless; any company that relies on lidar is likely to quietly die out."
In Musk's view, humans drive safely by gathering information through vision and processing it with the brain, which implies that autonomous driving can likewise be achieved through visual perception plus algorithmic decision-making.
So he has insisted on the vision fusion mode: Tesla's Autopilot HW 2.0 hardware mounts 8 cameras for 360-degree surround coverage, including a front tri-camera cluster (long-range narrow field of view, medium-range medium field of view, and short-range fisheye), one forward-facing and one rearward-facing camera on each side, plus one rear camera.
Bear in mind that visual signals are video data collected by cameras, a stream of full-color frames. Notice that a sharp photo taken with a phone in daily life can approach 10 MB; in a self-driving car, each camera produces data on the order of megabytes per second, and with 8 cameras running at once the per-second data volume grows accordingly.
In contrast, the data gathered by sensors such as lidar, millimeter-wave radar, and ultrasonic radar comes as standardized data packets. Take the VLP-16 16-line lidar from Velodyne (which received a joint investment of US$150 million, roughly RMB 980 million, from Baidu and Ford): each data packet is fixed at 1,248 bytes, about 1 KB, and the unit outputs 480 packets per second, so the total is only about 585 KB of data per second.
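A quick back-of-the-envelope comparison makes the gap plain (the lidar figures come from the text above; the per-camera rate of 2 MB/s is a hypothetical round number for illustration):

```python
# Lidar side: figures from the text (1,248-byte packets at 480 per second)
lidar_bytes_per_sec = 1248 * 480
print(f"lidar:   {lidar_bytes_per_sec / 1024:.0f} KB/s")          # ~585 KB/s

# Camera side: assume ~2 MB/s per camera (hypothetical round number)
camera_bytes_per_sec = 2 * 1024 * 1024
print(f"cameras: {8 * camera_bytes_per_sec / 1024**2:.0f} MB/s")  # 16 MB/s for 8 cameras
```

Under these assumptions the 8-camera stream is well over an order of magnitude larger than the lidar's.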
Therefore, in autonomous driving, a visual-fusion perception stack built by stacking cameras places far higher demands on system computing power than a lidar-based solution does.
It is for precisely this reason that Tesla, betting on the vision fusion route, abandoned Nvidia's GPU-based Xavier autonomous driving chip and turned to developing its own higher-compute FSD chip.
Visual fusion, a long way to go
Do you think high computing-power demand is the camera's only disadvantage against lidar?
Of course not. In autonomous driving, a vehicle must localize itself accurately before it is even qualified to ask "where to go". Unfortunately, self-localization with cameras alone is very difficult, whereas lidar can obtain the car's global position and heading on a high-definition map by continuously matching its live detection data against that map in real time, roughly as sketched below.
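As a toy illustration of that matching step, the sketch below brute-force searches candidate poses against a tiny hypothetical landmark map; production systems use far more efficient algorithms such as ICP or NDT on real HD maps, but the principle is the same:

```python
import math

# Hypothetical landmark map (map-frame x, y in meters)
MAP_POINTS = [(0.0, 5.0), (5.0, 5.0), (10.0, 5.0), (10.0, 0.0)]

def to_map_frame(scan, x, y, theta):
    """Express vehicle-frame scan points in the map frame for pose (x, y, theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(x + c * px - s * py, y + s * px + c * py) for px, py in scan]

def mismatch(points):
    """Total distance from each transformed point to its nearest map landmark."""
    return sum(min(math.hypot(px - mx, py - my) for mx, my in MAP_POINTS)
               for px, py in points)

def match(scan):
    """Grid-search poses; return the one whose transformed scan best fits the map."""
    candidates = ((xi * 0.1, yi * 0.1, ti * 0.01)
                  for xi in range(21) for yi in range(21) for ti in range(-10, 11))
    return min(candidates, key=lambda p: mismatch(to_map_frame(scan, *p)))

# Build a synthetic scan: the landmarks as seen from a true pose of (1.0, 0.5, 0.05)
tx, ty, tth = 1.0, 0.5, 0.05
c, s = math.cos(-tth), math.sin(-tth)
scan = [(c * (mx - tx) - s * (my - ty), s * (mx - tx) + c * (my - ty))
        for mx, my in MAP_POINTS]

print(match(scan))  # recovers ~(1.0, 0.5, 0.05)
```

The real thing matches tens of thousands of points per sweep against centimeter-grade HD maps, but the idea is the same: find the pose under which the scan and the map agree.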
You might say it doesn't matter if the camera can't localize the vehicle well; isn't there GPS, and isn't positioning its job?
Understand that GPS's own positioning accuracy falls short, something everyone has felt keenly when navigating to a destination. In fact, GPS accuracy depends on what the satellites provide: in principle the US system can reach 5 meters, but the civilian signal ordinary users receive is deliberately degraded, so the best achievable accuracy is about 10 meters; and when a car is driving among high-rises or in and out of tunnels, even 10-meter accuracy becomes a luxury.
Yet the internationally recognized accuracy requirement for autonomous-driving positioning is 10 centimeters. Do you really think relying on GPS to position an autonomous car is reliable? Of course, China's Beidou system will offer higher accuracy in the future, but its military signal is expected to reach only 1 meter, and the civilian signal certainly will not meet autonomous driving's precise positioning requirements.
Besides, in my view, lidar is to detection and imaging what the sun is, while the camera is like the earth.
The earth neither emits light nor is transparent; its light comes from the sun, and day and night arise from its rotation. A camera at work is just like the earth: it needs an external light source. So in dim night scenes, under glare, or facing bright objects, the data a camera collects is hard for the perception algorithms to use reliably.
Lidar, on the other hand, is unaffected by external light: it actively detects and images by emitting laser beams, and can directly measure an object's distance, direction, depth, reflectivity, and other properties.
It is for this reason that Tesla has fitted 12 ultrasonic sensors around the body plus an enhanced forward millimeter-wave radar to compensate for vision's shortcomings; yet the occasional tragic accidents Tesla vehicles have suffered on the road show that its vision fusion solution still has a long way to go.
Autonomous driving, cost is still king
Lidar holds many natural advantages over cameras in autonomous driving, but its price is also absurdly high. Take Velodyne's lidars: a 16-line unit costs $4,000 (about RMB 26,000), and a 64-line unit runs as high as $80,000 (about RMB 524,000), whereas a camera's hardware cost is only a few hundred dollars.
Moreover, as noted earlier, the vision fusion mode demands stronger system computing power partly because cameras capture rich color and texture, enabling fine-grained recognition and tracking. The signals lidar collects carry little color or texture and are ill-suited to such tracking, so even a vehicle on a lidar solution must still fuse in a relatively small number of cameras, and that further raises the vehicle's pre-installed, ex-factory cost.