Understanding Autonomous Driving Sensors

Publisher: 知识的海洋 | Last updated: 2021-11-02 | Source: eefocus

With the trend toward electrification, intelligence, connectivity, and shared mobility in the automotive industry, the outlines of autonomous driving have gradually taken shape. Although autonomous cars are not yet fully commercialized, many automakers have begun deploying L3 and even L4 vehicles.

 

The U.S. National Highway Traffic Safety Administration (NHTSA) divides autonomous driving into six levels according to how much of the driving task the vehicle controls (Table 1):

 

L0 means no automation. The driver controls all mechanical and physical functions of the vehicle at all times; the vehicle provides at most an alarm device that issues safety warnings while driving and takes no part in the driving task.

 

At L1, the driver operates the vehicle, but it may be equipped with features such as an anti-lock braking system that improve driving safety. These features only assist; control of the vehicle remains in the driver's hands.

 

At L2, the driver still operates the vehicle, but functions such as cruise control and blind-spot detection reduce the driver's workload.

 

L3 means that in certain driving scenarios the vehicle can control itself without driver involvement; however, when the vehicle detects road conditions that require human control, it immediately asks the driver to take over.

 

L4 means the vehicle can complete autonomous driving fully under certain conditions, generally without driver intervention, carrying passengers to their destination according to the programmed route. In harsh weather, or when the road is unclear or the environment otherwise does not meet the conditions for autonomous driving, the vehicle alerts the driver and allows enough time to take over. At L4 the driver still needs to supervise the vehicle's operation.

 

With L5 autonomous driving, no driver needs to be in the cab and no driver monitoring is required: the vehicle completes the driving task on its own in all road environments and can plan the best route and make decisions toward the destination.

 

Table 1  Autonomous driving classification

| Level | Name | Definition | Driving operation | Environment monitoring | Takeover fallback | Application scenarios |
|---|---|---|---|---|---|---|
| L0 | Manual driving | The human driver performs all driving | Human driver | Human driver | Human driver | None |
| L1 | Assisted driving | The vehicle handles one of steering or acceleration/deceleration; the human driver does the rest | Human driver and vehicle | Human driver | Human driver | Limited scenarios |
| L2 | Partial automation | The vehicle handles several operations such as steering and acceleration/deceleration; the human driver does the rest | Vehicle | Human driver | Human driver | Limited scenarios |
| L3 | Conditional automation | The vehicle performs most driving operations; the human driver must stay alert to take over in an emergency | Vehicle | Vehicle | Human driver | Limited scenarios |
| L4 | High automation | The vehicle performs all driving operations; the driver need not stay attentive, but road and environmental conditions are restricted | Vehicle | Vehicle | Vehicle | Limited scenarios |
| L5 | Full automation | The vehicle performs all driving operations; the human driver does not need to stay attentive | Vehicle | Vehicle | Vehicle | All scenarios |
 

The essence of autonomous driving is that the vehicle completes the driving task on its own. This requires the vehicle as a carrier and is a tightly coupled systems-engineering effort of hardware and software. Like a human driver, an autonomous vehicle goes through perception, planning, control, and execution as it drives, and none of these steps works without hardware and software cooperating. If software is the brain of the autonomous vehicle, handling planning and control, then hardware is its nerves and limbs, handling perception and execution.
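The perception-planning-control-execution cycle described above can be sketched as a minimal closed loop. This is an illustrative toy, not any real autonomous-driving stack; all class and function names, thresholds, and the braking command are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float   # distance ahead of the ego vehicle, in metres

def perceive(sensor_reading_m: float) -> Obstacle:
    """Perception: turn a raw sensor reading into a world-model object."""
    return Obstacle(distance_m=sensor_reading_m)

def plan(obstacle: Obstacle, safe_gap_m: float = 30.0) -> str:
    """Planning: choose a maneuver from the perceived world model."""
    return "brake" if obstacle.distance_m < safe_gap_m else "cruise"

def control(maneuver: str) -> float:
    """Control: translate the plan into an actuator command (accel, m/s^2)."""
    return -3.0 if maneuver == "brake" else 0.0

def execute(accel_cmd: float) -> str:
    """Execution: the vehicle hardware applies the command."""
    return f"applying acceleration {accel_cmd:+.1f} m/s^2"

# One tick of the loop: an obstacle 20 m ahead triggers braking.
print(execute(control(plan(perceive(20.0)))))  # applying acceleration -3.0 m/s^2
```

In a real stack each stage runs continuously and asynchronously, but the data dependency between stages is the same as in this loop.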

 

Execution is the vehicle carrying out actions such as accelerating, decelerating, and stopping according to the software's plan; this link is completed by the vehicle itself. Perception is the process of receiving information: it is the source of every autonomous driving action and supplies the road information that planning and control require. This link acts as the car's eyes, gathering data about the surrounding environment and performing recognition, detection, and tracking of static and dynamic objects, so that the vehicle and the driver can quickly perceive potential hazards and improve active safety. Perception requires many sensors working together: it is built mainly on on-board cameras, millimeter-wave radars, lidars, and ultrasonic radars. Different sensors play different roles and fulfill different task requirements during autonomous driving.
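The complementary roles of these sensors can be illustrated with a naive late-fusion sketch: cameras classify well but do not measure range directly, while radar and lidar measure range well. The detection records and confidence numbers below are invented for illustration only.

```python
# Hypothetical detections of one object from three sensor modalities.
detections = [
    {"sensor": "camera",    "object": "pedestrian", "range_m": None, "class_conf": 0.9},
    {"sensor": "mmw_radar", "object": "pedestrian", "range_m": 18.4, "class_conf": 0.3},
    {"sensor": "lidar",     "object": "pedestrian", "range_m": 18.1, "class_conf": 0.6},
]

def fuse(dets):
    """Naive late fusion: take the class from the most confident classifier,
    and average the range over the sensors that actually measure distance."""
    best_class = max(dets, key=lambda d: d["class_conf"])["object"]
    ranges = [d["range_m"] for d in dets if d["range_m"] is not None]
    return {"object": best_class, "range_m": sum(ranges) / len(ranges)}

print(fuse(detections))  # {'object': 'pedestrian', 'range_m': 18.25}
```

Production systems use far more sophisticated fusion (e.g. Kalman filtering over tracked objects), but the principle of combining each modality's strengths is the same.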

 

On-board cameras

On-board cameras are the basis of many advanced driver-assistance functions such as warning and recognition, and are the most intuitive sensor for drivers. Lane departure warning, forward collision warning, traffic sign recognition, lane keeping assist, pedestrian collision warning, surround-view parking, and driver fatigue warning all rely on on-board cameras.

 

The main hardware of an on-board camera comprises the optical lens assembly, the image sensor, the image signal processor (ISP), the serializer, and the connector. The lens assembly includes the optical lenses, filter, and protective film. The lenses focus light, projecting the scene in the field of view onto the surface of the imaging medium; depending on the imaging requirements, several lens elements may be stacked. The filter removes wavelengths invisible to the human eye, passing only the visible band of the actual scene. The image sensor uses the photoelectric conversion of a photoelectric device to turn the optical image on its photosensitive surface into an electrical signal proportional to the light; sensors fall mainly into CCD and CMOS types. The ISP uses dedicated hardware to pre-process the RAW-format data from the image sensor, converting it into formats such as YCbCr and also handling image scaling, auto exposure, auto white balance, auto focus, and similar tasks. The serializer transmits the processed image data and supports formats such as RGB and YUV. The connector is mainly used to attach and fix the camera.
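One of the ISP conversions mentioned above, RGB to YCbCr, can be shown per pixel using the standard ITU-R BT.601 full-range coefficients. This is a sketch of the math only; which matrix and range a real ISP applies depends on its configured standard.

```python
def rgb_to_ycbcr(r: int, g: int, b: int) -> tuple:
    """Convert one 8-bit RGB pixel to YCbCr (BT.601, full range)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 128
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 128
    return round(y), round(cb), round(cr)

print(rgb_to_ycbcr(255, 255, 255))  # pure white -> (255, 128, 128)
print(rgb_to_ycbcr(0, 0, 0))        # pure black -> (0, 128, 128)
```

Separating luma (Y) from chroma (Cb, Cr) is what lets downstream stages compress or subsample color without disturbing brightness detail.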

 

The manufacturing-process and reliability requirements of vehicle-mounted cameras are also higher than those of industrial and consumer cameras, because a car's working environment is changeable and sometimes harsh: the camera must cope with high and low temperatures, strong vibration, high humidity and heat, and other complex conditions. By installation position, vehicle-mounted cameras fall into five categories: front-view, surround-view, rear-view, side-view, and interior cameras.

 

The front-view camera is used most frequently, and a single camera can serve multiple functions. Through algorithm development and optimization, one front-view camera can provide driving recording, lane departure warning, forward collision warning, pedestrian recognition, traffic sign recognition, and more. It is usually mounted on the front windshield to provide visual perception and recognition while the vehicle is moving. By function, front-view cameras divide into the main camera, the narrow-angle camera, and the wide-angle camera (Figure 1). The main camera is the primary camera in an L2 advanced driver-assistance system; the wide-angle camera mainly recognizes objects at close range, in scenarios such as urban roads and low-speed driving; the narrow-angle camera mainly recognizes targets such as traffic lights and pedestrians. Front-view cameras come in monocular and binocular types; the binocular camera measures distance better, but it must be mounted in two positions.
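The binocular camera's ranging advantage comes from stereo triangulation: depth Z = f * B / d, where f is the focal length in pixels, B the baseline between the two cameras, and d the pixel disparity of a matched feature. A minimal sketch, with illustrative numbers only:

```python
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from stereo disparity: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("zero or negative disparity: object at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# Example: 1200 px focal length, 0.2 m baseline, 12 px disparity -> 20 m.
print(stereo_depth_m(1200.0, 0.2, 12.0))  # 20.0
```

The formula also shows the trade-off behind the two-position mounting requirement: a wider baseline B gives larger disparities, and therefore better depth resolution at long range.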

 

Figure 1 Tesla front-view camera module

 

Surround-view cameras mostly use wide-angle lenses installed around the vehicle; their images are stitched into a panoramic view, and with additional algorithms they can support road perception. They divide into forward, left, right, and rear fisheye cameras. Rear-view cameras use wide-angle or fisheye lenses and mainly assist parking. Because rearview mirrors have a limited field of view, driving leaves blind spots, and those blind spots are a serious hidden danger to driving safety. Side-view cameras are strong candidates to replace rearview mirrors: once fitted, they can cover the blind spots, and when an object enters a blind spot the driver is alerted, realizing the blind-spot detection function. The interior camera is installed inside the cabin to monitor the driver's state; if it detects signs of fatigue or behavior unsafe for driving, the system can pick this up in time and issue a reminder. Tesla, a devotee of pure-vision autonomous driving, fits eight cameras on the car body, paired with the Autopilot system to realize autonomous driving functions. Since cameras alone do not measure depth directly, Tesla's vision-only approach can struggle to react in time when a pedestrian suddenly darts out from behind an obstruction (the "ghost head-poke" scenario).
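The blind-spot reminder described above reduces, at its core, to a geometric membership test: warn when a tracked object's position in the ego-vehicle frame falls inside a zone beside the car. The zone bounds below are made-up illustrative values, not any real system's calibration.

```python
# Rectangular blind-spot zone in the ego frame (metres):
# x is longitudinal (negative = behind the driver), y is lateral offset.
BLIND_SPOT = {"x_min": -4.0, "x_max": 1.0,
              "y_min": 1.0,  "y_max": 3.5}

def in_blind_spot(x: float, y: float) -> bool:
    """True if a tracked object at (x, y) lies inside the warning zone."""
    return (BLIND_SPOT["x_min"] <= x <= BLIND_SPOT["x_max"]
            and BLIND_SPOT["y_min"] <= y <= BLIND_SPOT["y_max"])

print(in_blind_spot(-2.0, 2.0))  # True: a car overtaking in the blind spot
print(in_blind_spot(10.0, 2.0))  # False: well ahead, visible to the driver
```

Real systems track objects over time and suppress warnings for stationary roadside objects, but the per-frame zone test is the basic building block.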

