Dual Radar: A Dual 4D Radar Multi-modal Dataset for Autonomous Driving
Abstract—Radar adapts better to adverse scenarios in autonomous driving environmental perception than the widely adopted cameras and LiDARs. Compared with commonly used 3D radars, the latest 4D radars offer precise vertical resolution and higher point cloud density, making them highly promising sensors for autonomous driving perception in complex environments. However, because 4D radar is much noisier than LiDAR, manufacturers adopt different filtering strategies, leading to a trade-off between point cloud density and noise level.
There is still a lack of comparative analysis on which filtering strategy benefits deep learning-based perception algorithms in autonomous driving. One of the main reasons is that existing datasets adopt only one type of 4D radar, making it difficult to compare different 4D radars in the same scene. Therefore, in this paper, we introduce a novel large-scale multi-modal dataset that, for the first time, features two types of 4D radars captured simultaneously. This dataset enables further research into effective 4D radar perception algorithms. Our dataset consists of 151 consecutive sequences, most of which last 20 seconds, and contains 10,007 meticulously synchronized and annotated frames in total.
Moreover, our dataset captures a variety of challenging driving scenarios, including diverse road conditions, weather conditions, and lighting intensities across daytime and nighttime. Our dataset annotates consecutive frames, so it can be applied to 3D object detection and tracking, and it also supports the study of multi-modal tasks. We experimentally validate our dataset, providing valuable results for studying different types of 4D radars. The dataset is released at https://github.com/adeptthu/Dual-Radar.
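As a rough illustration of how a released multi-modal radar dataset of this kind might be consumed, the minimal sketch below loads a single radar point cloud frame with NumPy. The flat float32 binary layout, the field order (x, y, z, Doppler velocity, RCS), and the file path are assumptions made for illustration only, not the dataset's documented format.

```python
import numpy as np

# Minimal sketch, not the official loader: the float32 binary layout and the
# field order (x, y, z, Doppler velocity, RCS) below are assumptions.
def load_radar_frame(bin_path: str, num_fields: int = 5) -> np.ndarray:
    """Read one 4D radar frame stored as a flat float32 binary file."""
    points = np.fromfile(bin_path, dtype=np.float32)
    return points.reshape(-1, num_fields)

if __name__ == "__main__":
    # Illustrative path; adjust to wherever the downloaded frames live.
    pts = load_radar_frame("dual_radar/radar/000000.bin")
    print(pts.shape)  # (N, 5) under the assumed field layout
```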