Introduction
With the rapid development of road traffic in China, the number of cars has exceeded 150 million and continues to grow. The vigorous growth of road transportation strongly supports China's transportation industry, but it also brings serious traffic safety risks: road traffic accidents now rank first among all accident types and are a major obstacle to building a sustainably safe transportation system. It is therefore imperative to establish a road traffic safety assurance system through technical means to reduce traffic accidents. Starting from an analysis of the driver's visual function during driving, this paper reviews the research status of automobile safety technologies based on vehicle-mounted machine vision and looks ahead to development trends in this field.
1. Driving process description
According to the classic stimulus-organism-response model of human behavior, driving a car can be divided into three stages, as shown in Figure 1: the perception stage, the judgment and decision stage, and the operation stage. In the perception stage, the driver acquires real-time traffic information through the sensory organs and forms an initial understanding of the vehicle's operating environment. In the judgment and decision stage, the driver draws on driving experience and skill, analyzes the situation through the central nervous system, and selects measures conducive to safe driving. In the operation stage, the driver executes those decisions through the motor organs. While the car is moving, driving behavior is a continuous information-processing loop over these three stages: perception feeds judgment and decision, which in turn governs operation. The perception stage is the foundation of safe driving; if accurate and timely environmental information is not perceived, errors in judgment, decision and action become likely, leading to traffic accidents. In the perception stage, information is obtained mainly through vision, touch, smell and hearing, with more than 80% coming through vision. Vision directly determines the breadth, depth and accuracy of the information perceived, so the driver's visual characteristics are directly related to driving safety. Automotive safety assisted driving technology based on on-board machine vision aims to improve the driver's visual effectiveness, strengthening the link between vision and driving behavior and reducing improper operations caused by visual limitations, thereby making the human-vehicle-road system more stable and reliable and improving the vehicle's active safety.
2. Machine vision-assisted driving technology based on vehicle exterior information
The human eye has limited capabilities. Machine vision-assisted driving technology that acquires information from outside the vehicle can improve visual adaptability, extend the visual range, and deepen visual understanding. From the perspective of vehicle operation, research on machine vision-assisted driving based on exterior information covers two areas: visual enhancement and expansion of the driving environment, and machine vision recognition of the driving environment.
2.1 Visual enhancement, expansion and display of driving environment
2.1.1 Visual Enhancement
The visual enhancement system is one of the advanced vehicle control technologies in intelligent transportation systems; it strengthens the driver's vision in different weather (fog, rain, dust) and at different times of day. There are generally two enhancement approaches: (1) monitor the road traffic environment through a sensor perception system, process the information to obtain real-time road traffic conditions, and present the relevant visual information to the driver, achieving intelligent visual enhancement; (2) improve the driver's visual environment directly, mainly by removing rain and frost from the windshield and making the headlights more intelligent, thereby enhancing the driver's vision under adverse conditions such as low visibility and low illumination.
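For illustration, the sketch below shows one common image-processing step behind the first approach: boosting local contrast in a low-illumination camera frame before presenting it to the driver. It uses OpenCV's CLAHE as a generic stand-in, not the algorithm of any particular system; the input file name and parameters are assumptions.

```python
# A minimal sketch of sensor-based visual enhancement under low illumination.
# Assumes OpenCV and a hypothetical input frame "frame.jpg".
import cv2

frame = cv2.imread("frame.jpg")                   # BGR frame from a vehicle camera
lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)      # enhance lightness only, keep color
l, a, b = cv2.split(lab)

# Contrast-limited adaptive histogram equalization boosts local contrast
# without over-amplifying noise in fog, rain, or dusk scenes.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_eq = clahe.apply(l)

enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite("frame_enhanced.jpg", enhanced)       # image shown to the driver
```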
Based on the visual characteristics of the human eye, sensors such as CCD cameras, infrared sensors, vehicle speed sensors, GPS and millimeter-wave radar are used to acquire road information; the data are processed and fused to extract useful traffic-environment information under low visibility and low illumination, remove noise, and present the result to the driver as images. Low-visibility visual enhancement systems were first used in aircraft landing. From the late 1980s to the early 1990s, the concept of a vision system was proposed. According to the means of generation and the method of combination, vision systems are divided into:
(1) Sensor Vision System (SensorVS)
The visual scene outside the cockpit, detected in real time by forward-looking sensors, can be generated by a single sensor or fused from multiple sensors; its view is close to the natural scene of the real world.
(2) Synthetic Vision System (SVS)
The virtual scene constructed from the terrain model stored in a terrain database is called synthetic vision (SV).
(3) Enhanced Vision System (EVS)
The superposition of sensor vision and synthetic vision is called enhanced vision. It contains both the natural scene detected in real time and the virtual scene generated from the database; the two are registered and superimposed, so that the sharp contours of the virtual scene enhance the blurred natural view. Because it combines the SensorVS and SVS systems, it can improve the visibility of the out-the-window view in severe weather [1].
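To make the superposition idea concrete, the hedged sketch below overlays the contours of a hypothetical, already pixel-registered synthetic scene onto a blurred sensor image; real EVS implementations must also solve the geometric matching step that is assumed away here, and both file names are illustrative.

```python
# A minimal sketch of the EVS idea: superimpose the sharp contours of a
# database-generated synthetic scene onto the blurred real-time sensor view.
import cv2

sensor = cv2.imread("sensor_view.jpg")                         # blurred natural scene
synthetic = cv2.imread("synthetic_view.jpg", cv2.IMREAD_GRAYSCALE)

edges = cv2.Canny(synthetic, 50, 150)            # deep contours of the virtual scene
overlay = sensor.copy()
overlay[edges > 0] = (0, 255, 0)                 # paint synthetic contours in green

# Blend so the natural scene stays visible beneath the enhancement layer.
enhanced = cv2.addWeighted(sensor, 0.6, overlay, 0.4, 0)
cv2.imwrite("enhanced_view.jpg", enhanced)
```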
2.1.2 Visual Extension
Visual extension compensates for the limits of the driver's vision by using cameras and other sensors to expand the field of view. For example, Ford's CamCar uses multiple miniature cameras and three switchable video displays to give the driver both forward and rearward views, making parking easier and improving safety in dense traffic. CamCar's technical features include:
(1) Forward-facing camera system.
Installed on both sides of the car, the cameras provide a view around obstacles. The coverage angle reaches 22°, which at a distance of 300 m corresponds to a field of view about 116 m wide (w = 2d·tan(θ/2) ≈ 2 × 300 m × tan 11° ≈ 116 m).
(2) Enhanced side vision.
The second part of the CamCar camera system consists of two rear-facing cameras that provide a continuous view of the adjacent lanes behind the vehicle, covering far more than a traditional rear-view mirror and letting the driver monitor vehicles approaching from behind before changing lanes. This rear view has virtually no blind spots. The cameras are mounted on the sides of the car, much like side-view mirrors, and their wide-angle lenses cover 49° on each side.
(3) Panoramic view from behind the vehicle.
CamCar's rearward vision is enhanced by four micro cameras precisely positioned on the rear of the car. The cameras fan out, capturing the road behind the car over a wide area in four separate images. These images are fed into a computer program that compares and superimposes them, synthesizing a seamless panoramic view with a total coverage angle of up to 160°.
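Ford's comparison-and-superposition program is proprietary, but the hedged sketch below shows the same synthesis step using OpenCV's generic image stitcher as a stand-in; the four input file names are hypothetical.

```python
# A minimal sketch of synthesizing one rear panorama from several overlapping
# camera images, in the spirit of CamCar's four-camera rear view.
import cv2

images = [cv2.imread(f"rear_cam_{i}.jpg") for i in range(4)]  # hypothetical files

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("rear_panorama.jpg", panorama)   # seamless wide-angle rear view
else:
    print("stitching failed:", status)           # e.g. too little image overlap
```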
2.1.3 Display Technology
Road-environment image displays and road-environment warning devices are the interface between driver and vehicle, and their design should have good human-factors characteristics. At present there are two main types of in-vehicle information display: head-down displays and head-up displays. Head-down displays are mainly used in vehicle navigation and multimedia systems, and their design and application are relatively mature. For example, Ford's CamCar dashboard carries three video screens, one central and two additional side screens, whose images change with the situation to give the driver the most important information. Head-up displays are mostly used in automobile safety assisted driving display systems; they let the driver quickly scan road-environment and warning information while driving at high speed. Their design is still being developed and refined.
2.2 Machine Vision Recognition of Driving Environment
Machine vision recognition of the driving environment is a higher level of automotive safety assisted driving technology. It uses image sensors to identify road-environment parameters and assess driving safety. It mainly includes lane detection, vehicle detection, pedestrian detection, and traffic sign detection.
2.2.1 Lane Detection
Currently, lane detection is mostly achieved by detecting road markings and road edges. Typical driving safety assistance systems built on lane detection include the lane departure warning system (LDW) and the curve deceleration adjustment system.
The lane departure warning system consists of a camera, a speed sensor, an information processing system, a steering wheel actuator, an alarm, and so on. Once the vehicle tends to drift out of its lane, an indicator light and buzzer warn the driver. When the driver's turn-signal operation shows that the lane change is intentional, the alarm is temporarily suppressed. The system can be switched off, but it starts working automatically the next time the vehicle is started. Lane departure warning systems mostly use a monocular camera to image the road markings. To make marking detection more reliable, the ITS Center of the Japan Automobile Research Institute has explored binocular CCD cameras combined with real-time differential GPS to measure a vehicle's deviation from the markings.
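As a hedged illustration of the monocular marking-detection front end, the sketch below finds lane-marking candidates with edge detection and a Hough transform and raises a crude departure cue; all thresholds and the warning criterion are assumptions, not those of any production LDW.

```python
# A minimal monocular lane-marking sketch: Canny edges plus a probabilistic
# Hough transform inside a trapezoidal road region of interest.
import cv2
import numpy as np

frame = cv2.imread("road.jpg")                    # hypothetical forward-view frame
h, w = frame.shape[:2]

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

# Keep only the road area ahead of the vehicle.
roi = np.zeros_like(edges)
poly = np.array([[(0, h), (w, h),
                  (int(0.55 * w), int(0.6 * h)), (int(0.45 * w), int(0.6 * h))]])
cv2.fillPoly(roi, poly, 255)
edges &= roi

lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 40,
                        minLineLength=40, maxLineGap=100)

# Crude departure cue: a marking crossing the image centre near the bottom
# of the frame suggests the vehicle is drifting over it.
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        if max(y1, y2) > 0.9 * h and min(x1, x2) < w // 2 < max(x1, x2):
            print("lane departure warning")
```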
2.2.2 Vehicle Detection
Vehicle detection uses various sensors to gather information about vehicles ahead, beside, and behind, including the speed and position of the leading and following vehicles and the size and position of obstacles. Related driving safety support systems include the adaptive cruise control system (ACC), forward collision warning system (FCW), lateral collision warning system (LCW), and parking assistance system. In ACC and FCW, 77 GHz millimeter-wave radar or a camera collects information on the road ahead, which is fused with road geometry and electronic map data as input for cruise control or displayed to the driver. In LCW, cameras together with forward and side radars collect information ahead of and beside the vehicle, fused with road width and other data as input for the warning system. In parking assistance, ultrasonic sensors or radar detect obstacles behind and beside the vehicle and display them to the driver. ACC, FCW, LCW and parking assistance have all been studied in Japan's ASV (Advanced Safety Vehicle) program, the United States' IVI (Intelligent Vehicle Initiative), and Europe's e-Safety projects.
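A core calculation behind FCW-style warnings is the time-to-collision derived from the measured range and closing speed. The sketch below is a minimal illustration; the 2.7 s threshold and the function names are assumptions, not values from the projects cited above.

```python
# A minimal FCW-style sketch: warn when time-to-collision (TTC) to the lead
# vehicle, measured by 77 GHz radar or vision, falls below a threshold.
def time_to_collision(range_m: float, closing_speed_mps: float) -> float:
    """Seconds until contact; infinite if the gap is opening."""
    if closing_speed_mps <= 0.0:
        return float("inf")
    return range_m / closing_speed_mps

def fcw_alarm(range_m: float, own_speed: float, lead_speed: float,
              ttc_threshold: float = 2.7) -> bool:
    return time_to_collision(range_m, own_speed - lead_speed) < ttc_threshold

# 40 m gap, own vehicle 25 m/s, lead vehicle 15 m/s -> TTC = 4 s, no alarm.
print(fcw_alarm(40.0, 25.0, 15.0))   # False
# 20 m gap at the same speeds -> TTC = 2 s, alarm.
print(fcw_alarm(20.0, 25.0, 15.0))   # True
```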
2.2.3 Traffic Sign Detection
Road traffic signs are important ancillary facilities for road traffic safety, providing drivers with various kinds of guidance and regulatory information.
Obtaining traffic sign information correctly and in real time helps the driver drive more safely. In automobile safety assisted driving systems, traffic signs are detected by an image recognition system. DaimlerChrysler is researching a new generation of image recognition system that first determines the shape of the sign and then reads the text and graphics within that shape to reach a final judgment. When a sign is hard to classify, pre-recorded electronic map data about the signs along the road can assist identification. BMW has likewise applied image recognition to traffic signs in its ADAS (Advanced Driver Assistance Systems) project, and Toyota is actively developing an automatic traffic sign recognition system. Many researchers abroad have explored traffic sign image recognition algorithms from multiple angles. Recognition involves several steps, including sign localization (determining the region of interest) and classifier design. Because traffic engineering standards clearly specify the colors of a sign and its background as well as the sign's shape, localization can be based on color and shape. Because there are many sign classes and many environmental factors affect the captured images, most classifier designs are nonlinear. Some studies extract the morphological skeleton of the sign and apply matching algorithms for recognition.
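As a hedged sketch of standards-based localization, the code below thresholds the red hue band used by prohibition signs and then searches the mask for circular shapes, producing regions of interest for a downstream classifier; the HSV ranges and Hough parameters are illustrative assumptions.

```python
# A minimal color-and-shape sign localization sketch: red hue thresholding
# followed by a circle search on the resulting mask.
import cv2
import numpy as np

frame = cv2.imread("scene.jpg")                   # hypothetical road-scene frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Red wraps around the hue axis, so combine two bands.
mask = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=2, minDist=40,
                           param1=100, param2=30, minRadius=8, maxRadius=80)
if circles is not None:
    for x, y, r in np.around(circles[0]).astype(int):
        cv2.rectangle(frame, (x - r, y - r), (x + r, y + r), (0, 255, 0), 2)
cv2.imwrite("signs_located.jpg", frame)           # ROIs passed to the classifier
```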
2.2.4 Pedestrian Detection Technology
Pedestrian detection based on computer vision in vehicle assisted driving systems uses a camera mounted on the moving vehicle to capture video of the scene ahead and then detects the positions of pedestrians in the video sequence. Such systems generally include two modules: region-of-interest (ROI) segmentation and target recognition. ROI segmentation quickly identifies areas where pedestrians may appear, narrowing the search space; the common approach is distance-based, using a stereo camera or radar, and has the advantage of speed. Target recognition then locates pedestrians precisely within the ROIs; the common approach is shape recognition based on statistical classification, which has the advantage of robustness. Because of its great promise for pedestrian safety, the European Union funded the PROTECTOR and SAVE-U projects from 2000 to 2005, producing two pedestrian detection systems with computer vision at their core; the ARGO smart car developed by the University of Parma in Italy also includes a pedestrian detection module; Israel's Mobileye has developed a chip-level pedestrian detection system; Japan's Honda has developed a pedestrian detection system based on infrared cameras; and in China, Xi'an Jiaotong University, Tsinghua University, and Jilin University have all done substantial research in this field.
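For the statistical shape-classification stage, the sketch below uses OpenCV's pre-trained HOG-plus-linear-SVM people detector. This is a generic stand-in, not the method of PROTECTOR, SAVE-U, ARGO or Mobileye; the confidence cut-off and input file name are assumptions.

```python
# A minimal pedestrian recognition sketch using OpenCV's stock HOG detector.
import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("street.jpg")                  # hypothetical forward-view frame
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)

for (x, y, w, h), score in zip(boxes, np.ravel(weights)):
    if score > 0.5:                               # illustrative confidence cut-off
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imwrite("pedestrians.jpg", frame)
```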
3. Machine vision-assisted driving technology for vehicle interior information
Machine vision-assisted driving technology for vehicle interior information uses on-board cameras to determine the driver's state, position and other information and to take the necessary safety measures, including line-of-sight adjustment and driving fatigue detection.
3.1 Sight Adjustment
Sight adjustment places every driver's eyes at the same relative height, ensuring an unobstructed view of the road and surrounding lanes and the best possible visibility, thereby supporting driving safety. This technology includes:
(1) The eye position sensor can measure the position of the driver's eyes and then determine and adjust the seat position accordingly;
(2) The motor automatically raises the seat to the optimal height, providing the driver with the best view of the road conditions;
(3) The motor automatically adjusts the steering wheel, pedals, center console and even the floor height to provide the most comfortable possible driving position.
Sight adjustment systems have been applied in some high-end cars. Volvo's system, for example, uses a video camera in the windshield trim to scan the driver's seat area for a pattern representing the driver's face, then scans the face to locate the eyes, and finally finds the center of each eye. All three steps take less than one second.
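The scan sequence described for Volvo's system (face, then eyes, then eye centers) can be approximated with OpenCV's stock Haar cascades, as in the hedged sketch below; the production algorithm is certainly different, and the input image name is hypothetical.

```python
# A minimal face -> eyes -> eye-centre scan sketch with stock Haar cascades.
import cv2

face_cc = cv2.CascadeClassifier(cv2.data.haarcascades +
                                "haarcascade_frontalface_default.xml")
eye_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

gray = cv2.cvtColor(cv2.imread("driver.jpg"), cv2.COLOR_BGR2GRAY)

for (fx, fy, fw, fh) in face_cc.detectMultiScale(gray, 1.1, 5):
    face = gray[fy:fy + fh, fx:fx + fw]           # search eyes inside the face only
    for (ex, ey, ew, eh) in eye_cc.detectMultiScale(face, 1.1, 5):
        centre = (fx + ex + ew // 2, fy + ey + eh // 2)
        print("eye centre:", centre)              # input to seat-height control
```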
3.2 Fatigue and distraction detection
Since fatigue driving is a major cause of serious traffic accidents, research institutions at home and abroad have studied this field. Compared with alert driving, characteristic indicators of fatigued driving include changes in fine steering-wheel adjustments, forward tilting of the head, and drooping or even closing eyelids. Current driving fatigue detection and monitoring systems often use vehicle-mounted machine vision to observe posture and operating behavior and infer the fatigue state. The AWAKE driving diagnosis system, developed in the European e-Safety project, uses visual sensors and steering-wheel force sensors to acquire driver information in real time and applies artificial intelligence algorithms to classify the driver's state (awake, possibly drowsy, drowsy). When the driver is fatigued, sound, light and vibration stimuli are used to bring the driver back to an alert state. Reference [34] used specially developed cameras, electroencephalographs and other instruments to measure head movement, pupil diameter changes and blinking frequency precisely in a study of driving fatigue. The results show that:
Normally a blink lasts 0.12 to 0.13 seconds; if the eyes stay closed for 0.15 seconds while driving, a traffic accident becomes very likely.
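The closure-duration criterion above translates directly into a per-frame monitor: accumulate how long the eyes have been continuously closed and warn once that time reaches 0.15 s. The sketch below assumes a 30 fps camera and simulated per-frame eye-open flags from an upstream eye detector such as the one sketched in section 3.1.

```python
# A minimal eye-closure-duration monitor for fatigue warning.
FRAME_DT = 1.0 / 30.0        # assumed 30 fps camera: one frame lasts ~0.033 s
CLOSURE_LIMIT = 0.15         # seconds; threshold cited in the fatigue study

def monitor(eye_open_flags):
    closed_time = 0.0
    for i, is_open in enumerate(eye_open_flags):
        closed_time = 0.0 if is_open else closed_time + FRAME_DT
        if closed_time >= CLOSURE_LIMIT:
            print(f"frame {i}: eyes closed {closed_time:.2f} s - fatigue warning")

# Simulated stream: a normal blink (4 frames ~ 0.13 s) then a long closure.
monitor([True] * 10 + [False] * 4 + [True] * 10 + [False] * 8)
```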
4. Conclusion
Drivers obtain more than 80% of their information through vision. Given the limitations of the driver's vision, developing automobile safety assisted driving systems based on vehicle-mounted machine vision has long been a research hotspot in intelligent transportation. This paper has reviewed the state of the art in this field; the conclusions are as follows:
1) The driving process was analyzed and its three operational stages described;
2) By the scope of information acquisition, machine vision for safety assisted driving was divided into technologies for exterior information and for interior information. Exterior-information technology covers visual enhancement, field-of-view expansion, and road-environment understanding; interior-information technology covers line-of-sight adjustment and driver fatigue monitoring. The research status of each was reviewed;
3) The current shortcomings of machine vision technology in automotive safety assisted driving systems were analyzed, noting that low-visibility vision enhancement methods, information fusion for road-environment understanding, and driver fatigue detection all require further research.