A Survey of Learning-Based Camera and Lidar Simulation Methods for Autonomous Driving Systems
Abstract—Perception sensors, particularly camera and Lidar,
are key elements of Autonomous Driving Systems (ADS) that
enable them to comprehend their surroundings for informed
driving and control decisions. Therefore, developing realistic
camera and Lidar simulation methods, also known as camera
and Lidar models, is of paramount importance to effectively
conduct simulation-based testing for ADS. Moreover, the rise
of deep learning-based perception models has propelled the
prevalence of perception sensor models as valuable tools for
synthesising diverse training datasets. Traditional sensor
simulation methods rely on computationally expensive physics-based algorithms, particularly in complex systems such as ADS.
Hence, the current potential resides in learning-based models,
driven by the success of deep generative models in synthesising
high-dimensional data. This paper reviews the current state-of-the-art in learning-based sensor simulation methods and validation approaches, focusing on two main types of perception
sensors: cameras and Lidars. This review covers two categories
of learning-based approaches, namely raw-data-based and object-based models. Raw-data-based methods are discussed in terms of
the employed learning strategy, while object-based models are
categorised by the type of error considered. Finally,
the paper illustrates commonly used validation techniques for
evaluating perception sensor models and highlights the existing
research gaps in the area.
Index Terms—Learning-based, deep generative models, perception sensor models, image synthesis, 3D point cloud synthesis,
camera, Lidar, autonomous driving systems, simulation.