LiDAR: Why doesn’t Musk want me?

Published 2020-12-15 · Source: 十一车 · Keywords: lidar

A few days ago, the share price of Luminar Technologies (ticker: LAZR), a lidar supplier that recently went public via a reverse merger, took off as if on stimulants, climbing a cumulative 114% in three days.




Luminar may be unfamiliar to you. It is the second Silicon Valley lidar company to go public, and it aims to supply the key sensors for self-driving cars; its elder rival is the better-known Velodyne Lidar Inc.

 

As for which of the two has the better lidar technology, I am honestly not interested. What I care about is why lidar companies are so favored by the market in the early days of their listings.


How capable is the mighty lidar, really?


Stock-market investors radiate shrewdness from every pore (retail investors excepted); the companies they bid up so heavily naturally have strong prospects, and their moats are naturally unusually wide.


Lidar for vehicle autonomous driving clearly has broad prospects. Whether the company's moat is wide enough, though, is a question that starts with the characteristics of lidar itself.

 

We know that autonomous driving consists of three parts: perception, decision-making, and execution; lidar plays an important role in the perception stage.




We all learned in school that sonar imitates the ultrasonic echolocation of bats; lidar works on the same echo principle, but with light. It emits a laser beam, receives the beam reflected back from an object, and determines the distance by measuring the time difference (and phase difference) of the laser signal.
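The time-of-flight arithmetic behind this is simple enough to sketch in a few lines; the 400 ns example value below is purely illustrative.

```python
# Time-of-flight ranging: a lidar measures the round-trip time of a
# laser pulse and converts it to distance.

C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to the target, given the pulse's round-trip time."""
    # The pulse travels out and back, so halve the total path length.
    return C * round_trip_seconds / 2.0

# A return arriving 400 ns after emission puts the target ~60 m away.
print(round(tof_distance(400e-9), 1))  # -> 60.0
```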

 

It is much like the serve-and-return of table tennis. The difference is that humans use eyes and brain to judge the ball's speed and spin, while lidar uses hardware such as laser transmitters and receivers, combined with software algorithms, to determine the direction of obstacles.

 

But if you think lidar can only locate obstacles, you are underestimating it. A lidar actually comprises four major subsystems: the laser transmitter and receiver mentioned above, plus a scanner and an information processing system.

 

While the laser transmitter periodically fires its beams, the scanning system is hardly idle: it sweeps those beams across the target, collecting depth information over its surface to capture a relatively complete picture of the target's spatial features. The information processing system then reconstructs the collected points into a three-dimensional surface, producing a 3D image that is easy for us to interpret.
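Each raw return is a range plus the beam's two scan angles; turning those into the 3D points that get reconstructed into a surface is a standard spherical-to-Cartesian conversion. A minimal sketch (the sample returns are made up):

```python
import math

def polar_to_xyz(r, azimuth_deg, elevation_deg):
    """Convert one lidar return (range + beam angles) into a 3D point."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    return (x, y, z)

# A sweep's worth of returns becomes a point cloud, the input to
# surface reconstruction.
returns = [(10.0, 0.0, 0.0), (10.0, 90.0, 0.0), (5.0, 0.0, 30.0)]
cloud = [polar_to_xyz(r, az, el) for r, az, el in returns]
```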




So what lidar can do is impressive in itself, and developing a lidar that meets automotive-grade standards is harder still (so far, only Valeo's SCALA has reached automotive-grade mass production). Given the important role lidar is expected to play in advancing autonomous driving, it is understandable that Luminar was embraced so warmly in the early days of its listing.


Strong as it is, it still can't win Musk over


Lidar not only has the knack of constructing crisp 3D images of its targets; it also boasts high resolution, strong penetration and interference resistance, and all-weather operation.


It is fair to say lidar holds an inherent edge over other radars and cameras.

 

As is well known, He Xiaopeng, founder of Xpeng Motors, is a lidar believer. At this year's Guangzhou Auto Show he announced that, starting with models produced in 2021, Xpeng will upgrade its autonomous-driving software and hardware systems and use lidar to improve its object-recognition performance.




Yet this seemingly powerful lidar cannot win everyone over.

 

Least of all Musk, who not only sneered at Xpeng's lidar route: "Xpeng's software is backward and has no neural-network computing capability." He also declared publicly: "Lidar is like an appendix on the human body. The appendix itself is basically pointless; any company that relies on lidar is doomed."




In Musk's view, humans drive safely by collecting information with vision and processing it with the brain, which means autonomous driving can likewise be achieved through visual perception plus algorithmic decision-making.

 

So he has insisted on the vision-fusion approach. Tesla's Autopilot HW 2.0 hardware carries 8 cameras for 360-degree coverage: a forward-facing trinocular cluster (long-range narrow field of view, mid-range medium field of view, and short-range fisheye), two cameras on each side facing forward and rearward, and one rear camera.

 

Bear in mind that visual signals are video streams captured by cameras, sequence upon sequence of color frames. Notice that a sharp photo from a phone runs close to 10 MB; while a self-driving car is moving, each camera produces data on the order of megabytes per second, and with 8 cameras running simultaneously the total per second is larger still.




By contrast, sensors such as lidar, millimeter-wave radar, and ultrasonic radar output standardized data packets. Take the VLP-16 16-line lidar from Velodyne, which has received a joint investment of US$150 million (about RMB 980 million) from Baidu and Ford: each data frame is fixed at 1248 bytes, roughly 1.2 KB. At 480 frames per second, the total data rate is only about 585 KB per second.
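The bandwidth gap is easy to check with the VLP-16 figures above; since the text only says each camera produces "megabytes per second," the 4 MB/s per-camera rate below is an assumed illustrative value.

```python
# Back-of-the-envelope sensor bandwidth comparison.

# VLP-16 figures from the text: fixed 1248-byte frames at 480 frames/s.
vlp16_frame_bytes = 1248
vlp16_frames_per_sec = 480
lidar_bytes_per_sec = vlp16_frame_bytes * vlp16_frames_per_sec

# Assumption: a few MB/s per camera, 8 cameras as in Autopilot HW 2.0.
camera_mb_per_sec = 4
num_cameras = 8
camera_bytes_per_sec = camera_mb_per_sec * 1024 * 1024 * num_cameras

print(f"VLP-16 lidar: {lidar_bytes_per_sec / 1024:.0f} KB/s")    # ~585 KB/s
print(f"8 cameras:    {camera_bytes_per_sec / 1024 / 1024:.0f} MB/s")
```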

 

Therefore, in autonomous driving, the vision-fusion solution built by stacking cameras places far higher demands on system computing power than a lidar solution does.

 

For exactly this reason, Tesla, betting on vision fusion, decided to abandon Nvidia's GPU-based Xavier autonomous-driving chip and develop its own higher-compute FSD chip instead.




Vision fusion: a long way to go


Do you think the only disadvantage of cameras versus lidar is their heavy demand on computing power?

 

Of course not. Before a vehicle can even consider the question of "where to go," it must localize itself accurately, and that is very hard to do with cameras alone. Lidar, on the other hand, can continuously match its own detection data against a high-definition map in real time to obtain the car's global position and heading on that map.
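The map-matching idea can be caricatured in 2D: slide the live scan over an occupancy grid and keep the offset where the most scan points land on occupied cells. Real systems use algorithms such as ICP or NDT against HD maps; this brute-force toy, with a made-up wall and scan, only illustrates the principle.

```python
# Toy 2D lidar-to-map matching by exhaustive translation search.

def match_scan(scan, occupied, search=range(-3, 4)):
    """scan: list of (x, y) points; occupied: set of occupied map cells.
    Returns the (dx, dy) offset that aligns the scan best with the map."""
    best_offset, best_hits = (0, 0), -1
    for dx in search:
        for dy in search:
            hits = sum((x + dx, y + dy) in occupied for x, y in scan)
            if hits > best_hits:
                best_offset, best_hits = (dx, dy), hits
    return best_offset

occupied = {(5, 5), (6, 5), (7, 5)}   # a wall stored in the map
scan = [(3, 4), (4, 4), (5, 4)]       # the same wall, seen from the car
print(match_scan(scan, occupied))     # -> (2, 1): the car's pose offset
```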

 

Perhaps you will say it doesn't matter much if cameras can't self-localize; isn't there GPS, and isn't positioning supposed to be its job?

 

The trouble is that GPS accuracy on its own is not enough, as anyone who has used navigation to hunt for a destination knows all too well. GPS accuracy depends on what the satellites provide: in principle the US-operated system can reach about 5 meters, but the civilian signal received by ordinary users is deliberately degraded, with a best-case accuracy of only about 10 meters. And when the car is weaving among high-rises or entering and exiting tunnels, even 10 meters becomes a luxury.




Yet the internationally recognized requirement for autonomous-driving positioning accuracy is 10 centimeters. Does relying on GPS to position an autonomous car sound reliable to you? Of course, China's BeiDou system will offer higher accuracy in the future, but even its military signal is expected to reach only about 1 meter, and the civilian signal certainly will not meet the precise positioning requirements of autonomous driving.

 

Beyond that, in my view, lidar is to detection and imaging what the sun is, while the camera is like the earth.

 

The earth neither emits light nor is transparent; its light comes from the sun, and its rotation produces day and night. A camera at work is like the earth: it needs an external light source. So when light is dim at night, under glare, or facing bright objects, the data a camera collects is hard for the algorithms to perceive effectively and reliably.

 

Lidar, by contrast, is unaffected by external light sources: it actively detects and images by emitting its own laser beams, and it can directly measure an object's distance, direction, depth, reflectivity, and more.




It is for this very reason that Tesla has fitted 12 ultrasonic sensors around the body, plus an enhanced forward millimeter-wave radar, to compensate for vision's shortcomings; yet the occasional tragic accidents Tesla vehicles have suffered on the road show that its vision-fusion solution still has a long way to go.


Autonomous driving, cost is still king


Lidar has many natural advantages over cameras in autonomous driving, but its price is also eye-wateringly high. Take Velodyne's lidars: a 16-beam unit costs $4,000 (about RMB 26,000), and a 64-beam unit as much as $80,000 (about RMB 524,000), while the hardware cost of a camera is only a few hundred dollars.




That is not all. As mentioned earlier, one important reason the vision-fusion mode demands more computing power is that cameras capture rich color and texture, which enables fine-grained recognition and tracking. Lidar returns, however, carry little color or texture and are poorly suited to target tracking. So even a vehicle that adopts a lidar solution must still fuse in a smaller number of cameras, and that further raises the vehicle's pre-installed factory cost.

