
From now on, are lidar and camera the same thing?

Last updated: 2018-09-05
Che Lizi from Aofei Temple
Produced by Quantum Bit | Public Account QbitAI


In recent years, deep learning research on camera imagery has been booming. By contrast, there has been far less academic progress on lidar.

The data lidar collects has real advantages, though. It carries rich spatial information, and perception is unaffected even in low light.

Of course, there are also drawbacks. Lidar data lacks the raw resolution and efficient array structure of RGB images, and 3D point clouds remain difficult to encode for neural networks.


It would be great if we could combine the lidar and the camera into one device.

How to “eliminate” cameras?

Ouster, a lidar manufacturer, is a young company founded by Angus Pacala, previously a co-founder of Quanergy, a unicorn in the field.

Ouster co-founder and CEO Angus Pacala

Last November, the company launched the OS-1 lidar, hoping to start breaking down the boundary between lidar and camera.

The central idea: as long as the lidar data is good enough, even deep learning algorithms designed to process RGB images can be applied to it.

Pacala says the OS-1 can now output fixed-resolution depth images, signal images, and ambient images in real time.

None of this requires a camera.

The jello (rolling-shutter) effect readily appears under high-speed relative motion

The OS-1's data layers are spatially correlated with one another, so shooting fast-moving objects does not produce rolling-shutter artifacts.

In addition, the OS-1's aperture is larger than that of most SLR cameras, making it well suited to low-light scenes.

The team also developed a photon-counting ASIC with extreme low-light sensitivity to capture ambient images in dim conditions.

From top to bottom: ambient, intensity, and range images, plus the point cloud

The device captures signal and ambient information in the near-infrared band, and the resulting data is not very different from an ordinary visible-light image.

That way, algorithms built to analyze RGB images can also process lidar data.

You can also use Ouster's open-source driver (its firmware was just updated) to convert the data into a 360-degree panoramic animation:


The data straight out of the sensor needs no preprocessing, and this is the result.
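To see why no preprocessing is needed, here is a minimal Python sketch with made-up placeholder numbers. It assumes the driver delivers one range measurement per beam and azimuth step in a fixed 64 x 1024 grid (one of the OS-1's resolutions); the actual packet format lives in Ouster's open-source driver.

```python
import numpy as np

# Assumed layout: 64 beams x 1024 azimuth steps per revolution.
H, W = 64, 1024

# Placeholder ranges in millimetres, standing in for real driver output.
ranges_mm = np.random.randint(500, 60000, size=H * W)

# Every measurement has a fixed (beam, azimuth) slot, so the scan reshapes
# directly into a dense 2D panoramic depth image with no projection step.
depth_m = ranges_mm.reshape(H, W).astype(np.float32) / 1000.0

# Normalise to 8-bit, exactly like an ordinary grayscale photo.
depth_8bit = np.clip(depth_m / depth_m.max() * 255.0, 0, 255).astype(np.uint8)
print(depth_8bit.shape)  # (64, 1024)
```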

Run the data

As mentioned above, as long as the data is good enough, algorithms developed for cameras can be used for deep learning on it.

Encode the depth, intensity, and ambient information into a vector, just as an RGB image is encoded into red, green, and blue channels.
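As a rough illustration (a sketch with random placeholder arrays, not Ouster's actual pipeline), the three lidar layers can be stacked exactly the way RGB channels are:

```python
import numpy as np

H, W = 64, 1024
# Placeholder layers standing in for one real scan from the sensor.
depth     = np.random.rand(H, W).astype(np.float32)   # range layer
intensity = np.random.rand(H, W).astype(np.float32)   # signal layer
ambient   = np.random.rand(H, W).astype(np.float32)   # ambient (near-IR) layer

# Stack the three layers the same way an RGB image stacks red, green and blue,
# so any network expecting a 3-channel image can consume the scan.
lidar_image = np.stack([depth, intensity, ambient], axis=-1)   # shape (H, W, 3)
print(lidar_image.shape)
```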

So, what is the quality of OS-1's data?

The data runs happily

Pacala said that the algorithms they used worked very well with lidar.

For example, they trained a pixel-wise semantic classifier to distinguish between drivable roads, other cars, pedestrians, and bicycles.

This is San Francisco; running the classifier on an NVIDIA GTX 1060 produces this semantic segmentation in real time:

Semantic segmentation: road is road and car is car

This was the team's first implementation.
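To make "pixel-wise semantic classifier" concrete, here is a minimal PyTorch sketch: a tiny fully convolutional network that assigns every pixel of the stacked lidar image a class score. The architecture and class list are illustrative assumptions, not the network Ouster actually trained.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 4   # e.g. drivable road, vehicle, pedestrian, bicycle

# A tiny fully convolutional network: every output pixel gets one score per
# class, which is all that pixel-wise semantic classification means here.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, NUM_CLASSES, kernel_size=1),   # per-pixel class logits
)

scan = torch.rand(1, 3, 64, 1024)      # depth / intensity / ambient channels
logits = model(scan)                   # (1, NUM_CLASSES, 64, 1024)
labels = logits.argmax(dim=1)          # predicted class for every pixel
print(labels.shape)                    # (1, 64, 1024)
```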

The data is organized pixel by pixel, so 2D results can be translated seamlessly back into 3D frames for real-time processing such as bounding-box estimation.
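For instance, lifting a 2D mask back into 3D only takes the range image and the beam angles. The sketch below assumes evenly spaced elevation and azimuth angles and random placeholder data; a real sensor publishes calibrated per-beam angles instead.

```python
import numpy as np

H, W = 64, 1024
depth = np.random.rand(H, W).astype(np.float32) * 50.0    # range image, metres
mask = np.zeros((H, W), dtype=bool)
mask[20:30, 100:200] = True    # pixels the 2D network labelled, say, "car"

# Illustrative beam geometry: evenly spaced elevation and azimuth angles.
elev = np.deg2rad(np.linspace(16.6, -16.6, H))[:, None]                  # (H, 1)
azim = np.deg2rad(np.linspace(0.0, 360.0, W, endpoint=False))[None, :]   # (1, W)

# Spherical to Cartesian, only for the pixels selected in 2D.
x = depth * np.cos(elev) * np.cos(azim)
y = depth * np.cos(elev) * np.sin(azim)
z = depth * np.sin(elev)
points_3d = np.stack([x[mask], y[mask], z[mask]], axis=-1)   # (N, 3)

# An axis-aligned 3D bounding box around the selected object.
bbox_min, bbox_max = points_3d.min(axis=0), points_3d.max(axis=0)
print(bbox_min, bbox_max)
```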

In addition, the team fed the depth, signal, and ambient layers into neural networks separately.

As an example, they ran the pre-trained neural network from the SuperPoint project on intensity and depth images.

The network was trained on RGB images and had never been exposed to LiDAR/depth data.
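The SuperPoint weights are not bundled here, so as a stand-in the sketch below feeds a lidar intensity image to a torchvision model pre-trained only on RGB photos (ImageNet). It illustrates the same point: a network that has never seen lidar data can consume these images once the single channel is replicated and normalized like an ordinary photo.

```python
import numpy as np
import torch
from torchvision import models, transforms

# Placeholder intensity scan standing in for real sensor output.
intensity = np.random.rand(64, 1024).astype(np.float32)

# Treat the single intensity channel like a grayscale photo: replicate it to
# three channels and apply the usual ImageNet normalisation.
img = torch.from_numpy(intensity).unsqueeze(0).repeat(3, 1, 1)   # (3, H, W)
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
batch = normalize(img).unsqueeze(0)                              # (1, 3, H, W)

# An ImageNet-pretrained model that has never been exposed to lidar data.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()
with torch.no_grad():
    logits = model(batch)          # (1, 1000) class scores
print(logits.shape)
```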

The same network, run separately on the intensity (top) and depth (bottom) data

Pacala says lidar odometry can struggle in geometrically regular environments such as tunnels and highways, while visual odometry is lost where texture variation and lighting are lacking.

The OS-1 takes a multimodal approach that combines the two, and the effect is noticeably better.

1 + 1 > 2, which is probably what Ouster meant.

Not really on the road yet

In early 2015, Angus Pacala left Quanergy.

In the same year, Ouster was founded in Silicon Valley.

Stand out from the crowd

In December 2017, the company announced the completion of a $27 million Series A financing round and launched the OS-1, priced at $3,500.

The pace is not fast, but the company has found its own path.

The image semantic segmentation results offer an initial validation of their approach.

However, it remains to be seen how a lidar with camera-like properties will perform once installed on an autonomous vehicle.

Original Medium post:

https://medium.com/ouster/the-camera-is-in-the-lidar-6fcf77e7dfa6

GitHub:

https://github.com/ouster-LIDAR

- End -


