Introducing an efficient and lightweight model for drivable area and lane segmentation in autonomous vehicles

Publisher: cloudsousou6 · Latest update: 2023-08-07 · Source: elecfans

This paper presents TwinLiteNet: an efficient and lightweight model for drivable area and lane segmentation in autonomous vehicles. Semantic segmentation is a common task for understanding the surrounding environment in autonomous driving, and drivable area segmentation and lane detection are particularly important for safe and efficient navigation. However, conventional semantic segmentation models are computationally expensive and require high-end hardware, which is impractical for the embedded systems used in autonomous vehicles.


In this paper, we propose a lightweight model for drivable area and lane segmentation. TwinLiteNet is designed to be computationally inexpensive while still achieving accurate and efficient segmentation results. We evaluate TwinLiteNet on the BDD100K dataset and compare it with modern models.

Experimental results show that TwinLiteNet performs comparably to existing methods while requiring significantly fewer computational resources. Specifically, TwinLiteNet achieves a 91.3% mIoU score on the drivable area segmentation task and a 31.08% IoU score on the lane detection task with only 400,000 parameters, and reaches 415 FPS on an RTX A5000 GPU. In addition, TwinLiteNet runs in real time on embedded devices with limited computing power, achieving 60 FPS on the Jetson Xavier NX, which makes it a practical solution for self-driving cars.
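The mIoU and IoU figures quoted above are standard overlap metrics for segmentation. As a rough illustration (not the paper's evaluation code), per-class IoU on binary masks can be computed like this:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection-over-Union for two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return float(np.logical_and(pred, target).sum() / union)

# Toy 4x4 masks: the ground truth covers 8 pixels,
# the prediction overlaps it on 6 of them (intersection 6, union 8).
gt   = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0]])
pred = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]])
print(round(iou(pred, gt), 3))  # → 0.75
```

mIoU is simply this score averaged over all classes; reporting per-task IoU, as the paper does for lanes, isolates performance on a single class.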

Main Contributions

The main contributions of this paper are as follows:

1) This paper proposes a computationally efficient framework for drivable area segmentation and lane detection;

2) The proposed architecture is based on ESPNet, a scalable convolutional segmentation network, and combines depthwise separable convolutions with a dual attention network; instead of a single decoding module, it uses one decoding module per task, similar to YOLOP and YOLOPv2;

3) The experimental results in this paper show that TwinLiteNet achieves good performance on various image segmentation tasks with fewer parameters.
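Depthwise separable convolutions are a key reason the parameter count stays near 0.4M. A back-of-the-envelope comparison (illustrative channel sizes, not the paper's exact layer configuration) shows the saving over a standard convolution:

```python
def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    # A standard k x k convolution mixes channels and space in one step.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    # A depthwise k x k convolution (one filter per input channel)
    # followed by a 1x1 pointwise convolution that mixes channels.
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 64, 128, 3
std = standard_conv_params(c_in, c_out, k)        # 64*128*9  = 73728
sep = depthwise_separable_params(c_in, c_out, k)  # 64*9 + 64*128 = 8768
print(f"standard: {std}, separable: {sep}, saving: {std / sep:.1f}x")
```

For this hypothetical 64-to-128-channel 3x3 layer the factored form needs roughly 8x fewer parameters, and the gap widens as channel counts grow.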


Summary

This paper introduces a lightweight and efficient segmentation model for autonomous driving tasks, specifically drivable area segmentation and lane detection. TwinLiteNet aims to achieve high processing speed with only a slight trade-off in accuracy. We evaluate TwinLiteNet on the BDD100K dataset and show that the model achieves a good balance between accuracy and computational speed on GPUs and even on edge devices. In the future, we plan to evaluate TwinLiteNet on various open-source datasets and apply it to real-world scenarios, which will allow us to assess its effectiveness in different situations and address practical challenges.

