Shen Shaojie, head of DJI's automotive business, has led a team to publish a new paper.
The accelerating development of high-end intelligent driving in urban areas poses considerable challenges to a system's high-precision positioning and real-time mapping capabilities.
Against this background, Shen Shaojie's team, with members from the University of Hong Kong, the Hong Kong University of Science and Technology, Shanghai Jiao Tong University, and the Beijing Institute of Technology, has proposed a new method based on cameras, lidar, and IMU: LIV-GaussMap.
The maps this method constructs in real time are accurate enough to look like photographs.
More importantly, the hardware and software required by the method, along with the data sets collected by the team, are all open source.
High-precision positioning technology route
Taking an autonomous driving system as an example, it can usually be divided into three modules: perception, positioning, and control.
In the oft-mentioned "heavy perception, light map" route, the "map" refers to this positioning module.
The higher the level of autonomous driving, the higher the accuracy requirements for positioning.
That is because higher accuracy means the system understands its surroundings more clearly. Lane lines, traffic lights, and so on are all "clear at a glance," which makes it easier for the system to make driving decisions, and to make the correct ones.
Generally speaking, there are three high-precision positioning technology routes: satellite positioning (GNSS+RTK), inertial navigation (INS/IMU), and environmental-feature matching.
Since each of the three has its own strengths and weaknesses, in practice a combination of several is usually used.
For example, the most common pairing, satellite positioning plus inertial navigation, can be fused through loose coupling, tight coupling, or deep coupling; this is commonly known as GNSS/INS integration.
Satellite fixes correct the accumulated errors of the inertial system and improve accuracy, while inertial navigation keeps the vehicle positioned when satellite signals are blocked or jammed.
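The GNSS/INS correction loop described above can be sketched as a minimal one-dimensional Kalman filter. This is an illustrative toy under assumed noise parameters, not any production fusion stack: the IMU propagates the state between fixes, and each GNSS position measurement (when available) pulls the inertial estimate back toward truth.

```python
import numpy as np

def loosely_coupled_fusion(accel_meas, gnss_meas, dt=0.1,
                           accel_var=0.04, gnss_var=4.0):
    """Fuse IMU acceleration (prediction) with GNSS position (correction)
    in a 1-D constant-acceleration Kalman filter. Entries of gnss_meas may
    be None to model satellite outages (tunnels, underground garages)."""
    x = np.zeros(2)                 # state: [position, velocity]
    P = np.eye(2) * 10.0            # state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])
    H = np.array([[1.0, 0.0]])      # GNSS observes position only
    Q = np.outer(B, B) * accel_var  # process noise from accel uncertainty
    estimates = []
    for a, z in zip(accel_meas, gnss_meas):
        # predict: dead-reckon with the IMU (the inertial-navigation step)
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        if z is not None:
            # update: a GNSS fix corrects the accumulated inertial drift
            S = H @ P @ H.T + gnss_var
            K = (P @ H.T) / S
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)
```

When `gnss_meas` entries turn to `None`, the loop degenerates to pure inertial prediction, which is exactly the fallback scenario the article describes next.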
However, even with multiple solutions integrated, under urban conditions where satellite signals are severely blocked (tunnels, viaducts), or entirely lost in underground parking lots, the system falls back to a single solution: inertial navigation alone.
If the inertial system goes uncorrected by external information for a long time, its errors keep accumulating until it can no longer provide accurate positioning.
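A toy calculation makes the drift concrete. Double-integrating even a small, constant accelerometer bias (the 0.01 m/s² here is an assumed illustrative value) produces position error that grows roughly quadratically with time:

```python
import numpy as np

def dead_reckon_position_error(bias=0.01, dt=0.01, seconds=60):
    """Double-integrate a constant accelerometer bias (m/s^2) to show how
    an uncorrected inertial solution drifts: error grows ~ 0.5*b*t^2."""
    n = int(seconds / dt)
    vel_err = np.cumsum(np.full(n, bias) * dt)  # velocity error over time
    pos_err = np.cumsum(vel_err * dt)           # position error over time
    return pos_err[-1]
```

Under these assumptions, a mere 0.01 m/s² bias already yields about 18 m of drift after one minute without external correction.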
Hence new ways of pairing inertial navigation with information from other sensors were born, such as visual-inertial odometry (VIO), which combines the IMU with cameras, and its lidar counterpart, LIO.
The team of this paper chose to introduce camera and lidar information at the same time and proposed a new solution.
LIV-GaussMap: Generate photorealistic maps in real time
The LIV-GaussMap system fuses information from lidar (LiDAR), an inertial measurement unit (IMU), and a camera (Visual) through multi-modal sensor fusion, and can generate 3D radiance-field maps in real time: maps produced through a new, tightly coupled approach.
Whether the scene is small-scale (a) or large-scale (b), the maps generated by this method show significantly improved accuracy and are free of odd artifacts.
How is this done?
The system first performs hardware synchronization, precisely aligning the timestamps of the camera images and the lidar point clouds to prepare for data fusion.
Next, a fused lidar-inertial subsystem performs real-time positioning and map construction, using an iterated error-state Kalman filter (IESKF) to provide a preliminary Gaussian structure of the scene along with initial point-cloud data.
The system then uses image photometric gradients to optimize the Gaussian structure and its spherical-harmonic coefficients, yielding a refined surface Gaussian model.
Finally, photometric interpolation and extrapolation are used to synthesize novel views. The system also supports adaptive control, dynamically adjusting the map's structure according to point-cloud distribution and density to ensure a detailed map without gaps.
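The photometric-refinement idea in the steps above can be sketched in a heavily simplified form. This toy assumes the splatting weight of each Gaussian on each pixel is already fixed, and fits only each Gaussian's base color (the degree-0 spherical-harmonic term) to the image by gradient descent on an L2 photometric loss; the paper's actual optimization covers the full Gaussian structure and higher-order coefficients.

```python
import numpy as np

def refine_gaussian_colors(weights, target, lr=0.5, iters=200):
    """Photometric refinement sketch: given fixed per-pixel blending
    weights from splatted Gaussians (shape: pixels x gaussians), fit each
    Gaussian's base color so that rendered pixel intensities match the
    camera image, via gradient descent on a mean-squared photometric loss."""
    n_gaussians = weights.shape[1]
    colors = np.zeros(n_gaussians)
    for _ in range(iters):
        rendered = weights @ colors                 # splat: weighted blend
        residual = rendered - target                # photometric error
        grad = 2 * weights.T @ residual / len(target)
        colors -= lr * grad                         # descend on the loss
    return colors
```

The same descent-on-photometric-loss pattern, extended to positions, covariances, and spherical harmonics, is what lets image gradients sharpen a lidar-initialized Gaussian map.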
On real-world data sets, compared with other state-of-the-art algorithms such as Plenoxels, F2-NeRF, and 3D-GS, the renderings produced by LIV-GaussMap are more realistic and more accurate.
In quantitative terms, LIV-GaussMap also outperforms the other methods on many metrics. For example, on PSNR, which measures image quality, and LPIPS, a deep-learning-based perceptual similarity measure, LIV-GaussMap achieves the best results for both view interpolation and extrapolation.
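Of the two metrics mentioned, PSNR is simple enough to compute directly (a minimal sketch; LPIPS, by contrast, requires a pretrained network):

```python
import numpy as np

def psnr(img, ref, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a rendered view and the
    ground-truth photo; higher means a closer match."""
    img = np.asarray(img, dtype=float)
    ref = np.asarray(ref, dtype=float)
    mse = np.mean((img - ref) ** 2)
    if mse == 0:
        return float("inf")      # identical images
    return 10 * np.log10(max_val ** 2 / mse)
```

For images normalized to [0, 1], a uniform error of 0.1 per pixel corresponds to 20 dB, which gives a feel for the scale of the scores reported in such comparisons.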
Real-time 3D mapping, not just cars
Judging from the actual results, LIV-GaussMap's biggest strength is that it accurately restores the surface features and geometric structures of a wide range of objects, building richly detailed maps.
The photometric-gradient information introduced from the images helps compensate for unevenly distributed or inaccurately measured lidar points.
Meanwhile, the system places no requirements on the type of lidar: it works with both solid-state and mechanical units, and supports both repetitive and non-repetitive scanning patterns.
Moreover, although real-time high-precision mapping via SLAM is indispensable for high-end intelligent driving, LIV-GaussMap can also serve fields that require real-time rendering of 3D scenes, such as digital twins, virtual reality, and robotics.
As for the team's background, its members come from multiple institutions and disciplines.
Co-first authors Sheng Hong and Junjie He contributed equally to this paper. Sheng Hong, from the Department of Electronic and Computer Engineering at the Hong Kong University of Science and Technology, is a doctoral student in the Aerial Robotics Group of its robotics institute.
Junjie He and co-author Xinhu Zheng are both from the Systems Hub of the Hong Kong University of Science and Technology (Guangzhou).
Co-author Chunran Zheng is from the University of Hong Kong; he received his bachelor's degree in automation from Xi'an Jiaotong University and is now pursuing a PhD in HKU's MaRS Laboratory.
The remaining authors are all members of the Institute of Electrical and Electronics Engineers (IEEE). Among them is Hesheng Wang, a professor in the Department of Automation at Shanghai Jiao Tong University, who has published more than 150 papers in journals and at academic conferences at home and abroad, with more than 200 SCI citations and more than 1,400 Google Scholar citations. He has chaired IEEE conferences many times and serves as an associate editor of several robotics-related journals.
There is also Professor Fang Hao of the School of Automation at the Beijing Institute of Technology, who received his doctorate from Xi'an Jiaotong University in 2002, has participated in a number of national defense pre-research projects, and has published more than 40 academic papers.
The corresponding author is Professor Liu Kangcheng of the Hong Kong University of Science and Technology (Guangzhou), who completed his PhD at the Chinese University of Hong Kong, working on robotics and 3D vision. He has served as a program committee member or reviewer for more than 20 international conferences and more than 10 top international journals.
The last author is Shen Shaojie, an associate professor in the Department of Electronic and Computer Engineering at the Hong Kong University of Science and Technology, director of the HKUST-DJI Joint Innovation Laboratory (HDJI Lab), and head of DJI's automotive business.
Shen Shaojie holds a PhD in electrical and systems engineering from the University of Pennsylvania. His research covers state estimation, sensor fusion, and localization and mapping for robots and drones. He has repeatedly served as chair and senior editor for international robotics conferences and journals, and has won the AI 2000 Most Influential Scholar Award multiple times.
At present, LIV-GaussMap's hardware, software, and the team's collected data sets are slated to be open-sourced on GitHub. Those interested can stay tuned~