Design and application of driverless cars and their operating systems


Introduction

The driverless car described in this paper was designed to provide one-to-one, short-distance shuttle services for athletes, staff, and visitors in the Shougang Park during the 2022 Winter Olympics, creating a high-quality travel service platform. A design scheme for the driverless car and its operating system, tailored to the Shougang Park, is proposed, and stable operation of the driverless car in the park has been achieved. The system consists of two parts: the vehicle-mounted driverless subsystem and the human-vehicle-cloud trinity operation subsystem.

 

1. The policies and regulations for driverless cars are not yet complete

 

At present, the biggest challenge to the popularization of driverless cars is low public acceptance and low trust in their safety and reliability. In addition, because national policies and regulations on driverless cars are not yet complete, driverless cars cannot be driven on public roads.


In view of these two points, the National Innovation Center took the lead in integrating industry resources to jointly create an L4 driverless car, which is now running stably in the Shougang Park. The overall system plan is shown in Figure 1.

 

 

2. Design and implementation of the unmanned driving system

 

The unmanned driving system uses a variety of on-board sensors to obtain environmental information relevant to the driving task, such as the state of the vehicle itself, surrounding obstacles, and the road, and provides this information to the decision-making and planning module. Decision-making and planning then generates a suitable path based on the environmental information, vehicle status, and user needs obtained through perception and positioning, and uses this path to control the driving state of the vehicle.


Different autonomous driving levels and operating environments require different autonomous driving solutions. Considering the characteristics of the Shougang Park, with its tall buildings, dense trees, and complex road conditions, this project adopted a sensor solution based on 3 laser radars, 1 millimeter-wave radar, 2 cameras, 12 ultrasonic radars, and 1 combined navigation unit. The sensor installation locations are shown in Figure 2.


2.1 Sensors

Among the three laser radars, the 32-line laser radar is mounted on the top of the vehicle, and the two 16-line laser radars are mounted on the two sides of the vehicle roof. They are used to detect environmental and obstacle information around the vehicle and to obtain the size and orientation of obstacles. Laser radars offer high ranging accuracy, accurate orientation, a wide measurement range, and strong anti-interference ability.


The millimeter-wave radar is a 77 GHz medium- and long-range radar placed inside the front bumper of the vehicle. It is used to detect moving targets in front of the vehicle and obtain their speed and direction. It has good speed and distance measurement capabilities, is little affected by external conditions, and can work around the clock.


The main camera is mounted on the top of the vehicle, and the front-vision camera is attached to the middle of the inner side of the windshield. They are used to detect obstacles, road information, signs, and traffic lights in front of the vehicle and to obtain the type of obstacle and road environment information. Cameras have the advantage of accurate obstacle classification.

 

The 12 ultrasonic radars are arranged around the vehicle (4 front, 4 rear, 2 on each side) to detect close-range obstacles around the vehicle, enabling the unmanned vehicle to enter and leave the garage autonomously.


Two GPS antennas are placed on the top of the vehicle, and an inertial navigation unit is placed in the trunk, together providing the vehicle's position and attitude information.
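
To summarize the suite, a configuration listing of this kind could describe the sensors and their mounting positions; this is a minimal sketch for illustration, and the field names and structure are assumptions rather than the project's actual configuration format.

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    """One on-board sensor and its mounting position (hypothetical fields)."""
    kind: str        # e.g. "lidar", "mmw_radar", "camera", "ultrasonic", "gnss_ins"
    model: str       # coarse description, e.g. "32-line" or "77GHz"
    mount: str       # where it is installed on the vehicle
    count: int = 1

# Sensor suite as described in the text: 3 laser radars, 1 millimeter-wave radar,
# 2 cameras, 12 ultrasonic radars, and 1 combined navigation unit.
SENSOR_SUITE = [
    Sensor("lidar", "32-line", "roof, center"),
    Sensor("lidar", "16-line", "roof, left/right sides", count=2),
    Sensor("mmw_radar", "77 GHz medium/long range", "inside front bumper"),
    Sensor("camera", "main", "roof"),
    Sensor("camera", "front vision", "behind windshield"),
    Sensor("ultrasonic", "short range", "4 front, 4 rear, 2 per side", count=12),
    Sensor("gnss_ins", "2 GPS antennas + IMU", "roof antennas, IMU in trunk"),
]

if __name__ == "__main__":
    total = sum(s.count for s in SENSOR_SUITE)
    print(f"{total} sensors configured")  # 3 + 1 + 2 + 12 + 1 = 19
```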


2.2 Software Architecture of the Unmanned Driving System

This project uses a pure electric vehicle as the platform and equips it with the five types of sensors described above to achieve accurate perception of the road environment. The sensor outputs are integrated through multi-sensor information fusion to reduce the probability of misjudgment and improve the stability and accuracy of the information output.


We designed and developed multi-sensor fusion algorithms, combined positioning algorithms, decision-making and planning algorithms, and vehicle control algorithms, realizing functions such as autonomous following, autonomous overtaking and merging, lane keeping, automatic passage through traffic intersections, and obstacle avoidance on open and closed park roads. We also wrote unmanned driving test cases, formulated unmanned driving test specifications, and completed testing of the unmanned driving system.


To achieve these goals, the software architecture of the unmanned driving system is divided into a sensor interface layer, a perception layer, a positioning layer, a decision layer, and a vehicle control layer.
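
As a schematic sketch of how these five layers might be chained within one control cycle (the class names, interfaces, and stubbed return values are assumptions for illustration, not the project's actual code):

```python
class SensorInterfaceLayer:
    def read(self):
        """Collect raw frames from all peripheral sensors (stubbed here)."""
        return {"lidar": [], "camera": [], "radar": [], "ultrasonic": [], "gnss_ins": {}}

class PerceptionLayer:
    def perceive(self, raw):
        """Fuse raw sensor data into obstacles around the vehicle (stub)."""
        return {"obstacles": []}

class PositioningLayer:
    def localize(self, raw):
        """Estimate the vehicle pose from lidar and the combined navigation unit (stub)."""
        return {"x": 0.0, "y": 0.0, "heading": 0.0}

class DecisionLayer:
    def plan(self, perception, pose):
        """Produce a local path and a driving strategy (stub)."""
        return {"path": [(0.0, 0.0)], "speed": 5.0}

class VehicleControlLayer:
    def control(self, plan):
        """Turn the plan into gear / throttle / steering commands (stub)."""
        return {"gear": "D", "throttle": 0.2, "steering_rad": 0.0}

def drive_one_cycle(layers):
    """One pass of the layered pipeline, executed at a fixed control rate."""
    sensors, perception, positioning, decision, control = layers
    raw = sensors.read()
    plan = decision.plan(perception.perceive(raw), positioning.localize(raw))
    return control.control(plan)

print(drive_one_cycle((SensorInterfaceLayer(), PerceptionLayer(),
                       PositioningLayer(), DecisionLayer(), VehicleControlLayer())))
```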


The sensor interface layer handles the input from the various peripheral sensors. The perception layer collects data from the sensors, performs multi-level, multi-space information complementation and optimized combination, and finally achieves all-round perception of the surrounding environment. The multi-sensor fusion solution is shown in Figure 3.
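
The following is a minimal late-fusion sketch of the idea, assuming each sensor pipeline already reports obstacle detections with a position and a confidence; the gating rule and weighting are illustrative assumptions, not the project's actual fusion algorithm.

```python
import math
from dataclasses import dataclass

@dataclass
class Detection:
    """A single obstacle detection from one sensor (illustrative)."""
    source: str          # "lidar", "camera", "mmw_radar", ...
    x: float             # position in the vehicle frame, metres
    y: float
    confidence: float    # 0..1

def fuse_detections(detections, gate=1.5):
    """Greedy late fusion: detections from different sensors that fall within
    `gate` metres of each other are treated as the same obstacle, and their
    positions are averaged weighted by confidence."""
    fused = []
    for det in sorted(detections, key=lambda d: -d.confidence):
        for obj in fused:
            if math.hypot(det.x - obj["x"], det.y - obj["y"]) < gate:
                w = obj["weight"] + det.confidence
                obj["x"] = (obj["x"] * obj["weight"] + det.x * det.confidence) / w
                obj["y"] = (obj["y"] * obj["weight"] + det.y * det.confidence) / w
                obj["weight"] = w
                obj["sources"].add(det.source)
                break
        else:
            fused.append({"x": det.x, "y": det.y,
                          "weight": det.confidence, "sources": {det.source}})
    return fused

# Example: lidar and camera both see the pedestrian near (10, 2).
obs = fuse_detections([
    Detection("lidar", 10.1, 2.0, 0.9),
    Detection("camera", 9.8, 2.2, 0.7),
    Detection("mmw_radar", 35.0, 0.5, 0.8),
])
print(obs)  # two fused obstacles: one confirmed by two sensors, one radar-only
```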

 

Positioning layer: a global map is constructed from the laser radar data and the combined navigation unit. The perception results of the laser radar, camera, millimeter-wave radar, and ultrasonic radar are fused to build a local perception map centered on the driving vehicle. By superimposing GPS information and the vehicle's position and posture, a real-time comprehensive map is provided that gives an intuitive view of how the various kinds of information in the driving environment are processed.
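
As a rough illustration of the positioning idea (not the project's actual localization pipeline), a pose estimate can be dead-reckoned from speed and yaw rate and periodically corrected with GNSS fixes; the blend factor below is an assumed value.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float         # metres, local map frame
    y: float
    heading: float   # radians

def predict(pose, speed, yaw_rate, dt):
    """Dead-reckoning step from wheel speed and IMU yaw rate (illustrative)."""
    heading = pose.heading + yaw_rate * dt
    return Pose(pose.x + speed * math.cos(heading) * dt,
                pose.y + speed * math.sin(heading) * dt,
                heading)

def correct(pose, gnss_x, gnss_y, alpha=0.2):
    """Blend a GNSS fix into the predicted pose; alpha is the trust put in GNSS."""
    return Pose((1 - alpha) * pose.x + alpha * gnss_x,
                (1 - alpha) * pose.y + alpha * gnss_y,
                pose.heading)

pose = Pose(0.0, 0.0, 0.0)
for _ in range(10):                                # 1 s of driving at 5 m/s, 10 Hz
    pose = predict(pose, speed=5.0, yaw_rate=0.05, dt=0.1)
pose = correct(pose, gnss_x=5.1, gnss_y=0.2)       # periodic GNSS correction
print(pose)
```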


Decision-making layer: in the global environment, it generates an optimal global path based on the road network, the task, and positioning information. In the local environment, it uses perception information, under the constraints of traffic rules, to infer reasonable driving behavior in real time and generate safe, drivable areas; it then generates a smooth candidate driving route based on vehicle speed and road complexity, analyzes static and dynamic obstacles and traffic regulations to form a local path plan, and sends the resulting driving strategy decisions to the vehicle control layer. At the same time, it handles and recovers from system failures and accepts higher-level control.
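
A minimal sketch of the local-planning idea follows: sample a few candidate lateral offsets around the global route, discard those that come too close to obstacles, and hand the remaining candidate closest to the global route to the vehicle control layer. The offsets, clearance threshold, and cost are illustrative assumptions, not the project's planner.

```python
import math

def clearance(path, obstacles):
    """Smallest distance from any path point to any obstacle (x, y)."""
    return min(math.hypot(px - ox, py - oy)
               for (px, py) in path for (ox, oy) in obstacles)

def candidate(offset, length=20, step=1.0):
    """Straight candidate path shifted laterally by `offset` metres (x forward, y left)."""
    return [(s * step, offset) for s in range(int(length / step))]

def plan_local_path(obstacles, offsets=(-2.0, -1.0, 0.0, 1.0, 2.0),
                    min_clearance=1.0):
    """Pick the candidate that keeps a safe clearance while staying closest
    to the global route (offset 0)."""
    best, best_cost = None, float("inf")
    for off in offsets:
        path = candidate(off)
        if clearance(path, obstacles) < min_clearance:
            continue                    # unsafe candidate, discard
        cost = abs(off)                 # prefer staying on the global route
        if cost < best_cost:
            best, best_cost = path, cost
    return best

# An obstacle sits slightly left of the lane centre 10 m ahead;
# the planner shifts about 1 m to the right to keep clearance.
path = plan_local_path(obstacles=[(10.0, 0.3)])
print(path[0], path[-1])
```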


Vehicle control layer: based on the path planning results and the various sensor signals inside the vehicle, it generates control commands for the vehicle's gear, throttle, and steering to keep the vehicle running smoothly and at high speed, achieving autonomous driving. The software system architecture is shown in Figure 4.
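
For illustration only, the control step can be pictured as a proportional speed controller plus a pure-pursuit style steering law; the gains, wheelbase, and command format below are assumptions, not the project's actual controller.

```python
import math

def throttle_brake(target_speed, current_speed, kp=0.3):
    """Proportional speed control: positive output -> throttle, negative -> brake."""
    u = kp * (target_speed - current_speed)
    return max(-1.0, min(1.0, u))

def pure_pursuit_steering(lookahead_point, wheelbase=2.6):
    """Pure-pursuit steering angle towards a look-ahead point given in the
    vehicle frame (x forward, y left)."""
    x, y = lookahead_point
    ld = math.hypot(x, y)                     # distance to the look-ahead point
    if ld < 1e-3:
        return 0.0
    curvature = 2.0 * y / (ld ** 2)           # standard pure-pursuit curvature
    return math.atan(wheelbase * curvature)   # front-wheel steering angle, rad

# One control tick: hold 5 m/s and steer towards a point 8 m ahead, 0.5 m left.
cmd = {
    "gear": "D",
    "throttle": throttle_brake(target_speed=5.0, current_speed=4.2),
    "steering_rad": pure_pursuit_steering((8.0, 0.5)),
}
print(cmd)
```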

 


3. Design and implementation of the human-vehicle-cloud trinity human-computer interaction system


The human-machine interaction system is the threshold for the commercialization of driverless cars and is of great significance to both the driverless car industry and its users [2]. Currently, users are still curious about and skeptical of driverless cars, and are far from trusting and accepting them. In this context, the human-machine interface of the driverless car becomes even more important.


It needs to serve as a bridge of communication between users and cars, allowing users to understand the real-time status of the car and create a safe driving experience for users; it also needs to help users build a sense of trust with driverless cars, allowing users to transition more harmoniously from traditional cars to the driverless era [3].


The research goal of the human-computer interaction subsystem of this project is to meet the user's driving needs and create a safe and convenient driving experience for users through the reasonable design of the human-computer interaction interface of the driverless car.


3.1 Functional Overview

The driverless vehicle runs on a fixed route within the park, with a series of stations set up along the route. A QR code for booking a ride is affixed to the signboard of each station. A passenger near any station can scan the QR code to enter the booking interface, select a starting point and destination, and tap OK to place an order.


The order is sent to the cloud, which dispatches the driverless vehicles in operation according to the current orders and sends the order's operating route to both the passenger's mobile phone and the vehicle. The driverless vehicle picks the passenger up at the starting point according to the received route and delivers them to the designated destination. The passenger can then rate the service based on the riding experience, and the order is complete.
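
A minimal sketch of the cloud-side dispatching idea, assuming a nearest-idle-vehicle rule along the fixed route; the data structures and the rule itself are illustrative assumptions, not the project's scheduling algorithm.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: str
    station: int        # index of the station the vehicle is currently at
    busy: bool = False

@dataclass
class Order:
    origin: int         # boarding station chosen in the QR-code interface
    destination: int

def dispatch(order, fleet):
    """Assign the closest idle vehicle (by station-index distance along the
    fixed route) and mark it busy; return None if no vehicle is free."""
    idle = [v for v in fleet if not v.busy]
    if not idle:
        return None
    chosen = min(idle, key=lambda v: abs(v.station - order.origin))
    chosen.busy = True
    return {"vehicle": chosen.vid,
            "route": [order.origin, order.destination]}

fleet = [Vehicle("car-01", station=0), Vehicle("car-02", station=3)]
print(dispatch(Order(origin=4, destination=1), fleet))   # car-02 is closer
```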


3.2 System Design

This system adopts a B/S and C/S multi-layer architecture and supports multiple network access methods. The user side uses browsers, H5 pages, and an APP, which reduces the workload of system installation and maintenance, is easy for users to use without training, makes the system easy to extend, and supports remote business processing. The business logic runs on the server side, making full use of the server's processing power.


By combining Web load balancing, component load balancing, and similar techniques, the system can handle more service requests, and growing performance requirements can be met by scaling the servers horizontally. The logical architecture of the human-computer interaction subsystem is shown in Figure 5, and the system interaction data are shown in Figure 6.
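
As a rough server-side sketch (the framework choice, endpoint path, and fields are assumptions): because a booking endpoint of this kind keeps no session state in the process, identical instances can run side by side behind a Web load balancer, which is what makes horizontal scaling straightforward.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class BookingHandler(BaseHTTPRequestHandler):
    """Stateless booking endpoint: each request carries everything it needs,
    so identical instances can run behind a load balancer."""

    def do_POST(self):
        if self.path != "/api/orders":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        order = json.loads(self.rfile.read(length) or b"{}")
        # In a real deployment the order would be written to shared storage
        # or a message queue here instead of being handled in-process.
        reply = {"status": "accepted",
                 "origin": order.get("origin"),
                 "destination": order.get("destination")}
        body = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), BookingHandler).serve_forever()
```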

 

 

4. Construction of the design process


Based on a user-centered design concept, the information of the driverless car is made as transparent as possible so that users can easily judge the car's reliability, thereby building trust in the car, promoting public acceptance of driverless cars, and helping driverless cars enter people's lives. The design process is shown in Figure 7.
