According to statistics, traffic accidents caused by blind spots behind vehicles account for about 30% of accidents in China and about 20% in the United States. The first two generations of reversing-assistance products were the reversing horn and the reversing radar. The former only warns passers-by to move out of the way; the driver remains unaware, fixed obstacles cannot be detected, and the benefit is minimal. The latter can alert the driver to fixed obstacles with an alarm, but the driver still cannot determine their exact location, and pits or low obstacles go undetected.
At present, the research trend in China and abroad is to build on the reversing radar with digital image processing technology, using a powerful embedded processor to develop a vehicle-mounted visual reversing device that combines distance measurement to objects behind the vehicle with monitoring of the rear-view image.
Therefore, this paper presents the design of an auxiliary reversing system based on the S3C2410. The system not only lets the driver observe the real scene behind the car from inside the vehicle, but also measures the distance in real time through its ranging-and-alarm module and issues a voice alarm when the car gets too close to an obstacle. It thus overcomes the small size and narrow field of view of the rear-view mirror, making reversing faster, more efficient, and safer.
2. System Overview
The system uses the S3C2410 as its main controller, with a clock frequency of up to 266 MHz, running the Linux 2.6.14.1 operating system. The overall design of the system is divided into three parts:
1) Image data real-time display module design;
2) Distance measurement alarm module design;
3) Human-computer interaction interface design.
The image data real-time display module acquires image data in real time and displays it on the LCD: the camera installed at the rear of the car captures live images, which are shown on the terminal LCD. The distance measurement alarm module measures distance with an ultrasonic ranging circuit; when the measured distance falls below the safety distance set by the system, a voice alarm is issued. The human-computer interaction interface module provides a friendly interface for the system; with the touch screen, operation is convenient and simple. The system framework is shown in Figure 1.
3. Implementation of image data real-time display module
The image data real-time display module uses a Mesh Eye V3000 USB camera, whose OV511 controller chip is supported by the Linux kernel, to collect images. Its implementation is divided into two parts: loading the camera driver module in the Linux kernel, and designing a Qt-based visual reversing application.
3.1 Dynamic loading of USB driver module
When customizing and compiling the embedded Linux kernel, add support for the Video4Linux module and the OV511 device, and obtain image frames from the OV511 device through the API provided by the Video4Linux module. Configure the kernel as follows: Multimedia devices -> <M> Video For Linux; USB support -> <M> USB OV511 Camera support. Compile the Video4Linux driver and the OV511 camera driver as modules, and load the USB and OV511 driver modules with the insmod command. After the driver is loaded, plug the OV511 camera into the USB interface; the camera is recognized correctly and the /dev/v4l/video device node is generated.
3.2 Visual reversing program design
During the reversing process the driver interacts very little with the software, so there is no need to design a complex user interface; the program design focuses mainly on acquiring the camera images. Video acquisition under Linux is performed through the interface functions and data structures provided by the Video4Linux driver. The process of operating a USB camera with Video4Linux is shown in Figure 2, and a minimal code sketch of its opening steps is given below.
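The following sketch (not the authors' code) illustrates the first steps of that process under V4L1, the Video4Linux version of the kernel 2.6.14 era: opening the device node created in Section 3.1 and querying its capabilities. The device path used here is an assumption.

#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <sys/ioctl.h>
#include <linux/videodev.h>                         // V4L1 definitions (2.6-era kernels)

int openCamera()
{
    int fd = open("/dev/v4l/video0", O_RDWR);       // device node path is an assumption
    if (fd < 0) { perror("open camera"); return -1; }
    struct video_capability cap;                    // basic driver/camera capabilities
    if (ioctl(fd, VIDIOCGCAP, &cap) < 0) {
        perror("VIDIOCGCAP");
        close(fd);
        return -1;
    }
    printf("camera: %s, max %dx%d\n", cap.name, cap.maxwidth, cap.maxheight);
    return fd;                                      // caller goes on to set parameters and capture
}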
In the program design, the operations and data structures related to the camera are encapsulated in the VideoCapture class. Its member functions complete the specific operations of the camera through the interface provided by Video4Linux. The VideoCapture class is designed as follows:
class VideoCapture
{
public:
    …
    bool hasCamera() const;               // check whether a usable camera is present
    void getCameraImage(QImage &img);     // grab one frame of image data from the camera
    QSize captureSize() const;            // return the capture resolution currently in use
    void setCaptureSize(QSize size);      // set the capture resolution
    int minimumFramePeriod() const;
private:
    …
    struct video_mbuf mbuf;               // frame information for memory mapping
    …
    void setupCamera(QSize size);         // initialize the camera and set its parameters
    void shutdown();                      // close the camera
};
The member function void setupCamera(QSize size) in the VideoCapture class initializes the USB camera device. In line with the characteristics of the system's LCD, the main parameters are set as follows: the image colour mode is set to the VIDEO_PALETTE_RGB565 format, the image resolution to 640×480, and the image bit depth to 16 bits (a sketch of these settings follows below). After each frame has been collected, ioctl(fd, VIDIOCSYNC, &frame) is called to wait for the acquisition to finish, after which the next frame is collected or the camera is closed as needed.
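A minimal sketch, under the same V4L1 assumptions as above, of what setupCamera() has to do: read the current picture and window settings, then request RGB565 colour, 16-bit depth and a 640×480 capture window.

#include <sys/ioctl.h>
#include <linux/videodev.h>

void setupCameraSketch(int fd)                      // fd: descriptor returned by open()
{
    struct video_picture pict;
    ioctl(fd, VIDIOCGPICT, &pict);                  // read current picture parameters
    pict.palette = VIDEO_PALETTE_RGB565;            // colour mode matching the 16-bit LCD
    pict.depth   = 16;                              // bits per pixel
    ioctl(fd, VIDIOCSPICT, &pict);                  // write the new picture parameters

    struct video_window win;
    ioctl(fd, VIDIOCGWIN, &win);                    // read the current capture window
    win.width  = 640;                               // capture resolution used by the system
    win.height = 480;
    ioctl(fd, VIDIOCSWIN, &win);                    // apply the new resolution
}

In a real implementation the return value of every ioctl should of course be checked.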
Because the OV511 does not support reading image data through the read() system call, memory-mapped I/O (MMIO) is used to obtain the image data. With MMIO, the image-memory information is stored in a struct video_mbuf variable (the mbuf member declared above). Before collecting image data, the VIDIOCGMBUF interface of Video4Linux is therefore used first to obtain the information MMIO requires; the mmap function then maps the camera's image buffer into virtual memory, and the VIDIOCMCAPTURE interface of Video4Linux captures the image. The function void getCameraImage(QImage &img) performs the complete acquisition of one frame, and continuous acquisition and display are driven by a timer: whenever the timer fires, a timer event is triggered, and getCameraImage is called in the event handler to collect and display the image. A sketch of this MMIO capture path follows.
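The sketch below (again an illustration, not the authors' code) shows the MMIO capture path: VIDIOCGMBUF reports the buffer layout, mmap maps it once, and each frame is then obtained with VIDIOCMCAPTURE followed by VIDIOCSYNC. The QImage passed in is assumed to have been created beforehand as a 640×480, 16-bit image.

#include <sys/ioctl.h>
#include <sys/mman.h>
#include <cstring>
#include <linux/videodev.h>
#include <qimage.h>                                 // Qt/Embedded-era header name

static struct video_mbuf mbuf;                      // buffer layout reported by the driver
static unsigned char *frameBase = 0;                // start of the mmap'ed capture buffer

bool mapFrames(int fd)
{
    if (ioctl(fd, VIDIOCGMBUF, &mbuf) < 0)          // total size, frame count, per-frame offsets
        return false;
    frameBase = (unsigned char *)mmap(0, mbuf.size, PROT_READ | PROT_WRITE,
                                      MAP_SHARED, fd, 0);
    return frameBase != MAP_FAILED;
}

bool grabFrame(int fd, QImage &img)                 // img: pre-created 640x480, 16-bit image
{
    struct video_mmap vm;
    vm.frame  = 0;                                  // this sketch uses only the first buffer
    vm.width  = 640;
    vm.height = 480;
    vm.format = VIDEO_PALETTE_RGB565;
    if (ioctl(fd, VIDIOCMCAPTURE, &vm) < 0)         // start capturing one frame
        return false;
    int frame = 0;
    if (ioctl(fd, VIDIOCSYNC, &frame) < 0)          // block until the frame is complete
        return false;
    memcpy(img.bits(), frameBase + mbuf.offsets[0], 640 * 480 * 2);   // copy RGB565 pixels
    return true;
}

In the application a timer periodically calls this capture routine and repaints the display widget, which produces the continuous rear-view preview described above.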
4. Implementation of distance measurement alarm module
In order to improve the safety and reliability of the system, a distance measurement alarm function is added. After the visual reversing function is activated, the ultrasonic ranging module measures the distance to obstacles behind the car in real time, and the speech synthesis module issues a voice alarm when an obstacle is too close to the car body.
4.1 Ultrasonic distance measurement module
The ultrasonic distance measurement circuit consists mainly of an ultrasonic transmitting circuit and a receiving circuit; its block diagram is shown in Figure 3. The module's microcontroller is Freescale's MC68HC908QL4, which offers high reliability and strong immunity to interference. The ultrasonic waves measure the distance between the vehicle and objects behind it, and the result is transmitted to the main processor in the car for processing.
Since ultrasonic ranging only has to provide information about the area behind the car while reversing, and the car moves slowly when reversing, it can be treated as stationary relative to the speed of sound. The transit-time method is therefore used to measure the distance: the ultrasonic transmitter continuously emits ultrasonic waves, which are reflected back when they meet an obstacle; the ultrasonic receiver picks up the reflected signal and converts it into an electrical signal. Measuring the time difference between emitting the ultrasonic wave and receiving its reflection gives the distance s:
s = c × t / 2
In the formula, s is the measured distance, c is the speed of sound, and t is the time difference between emitting the ultrasonic wave and receiving the reflected wave.
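As a minimal numerical illustration of the formula (values assumed, not taken from the paper): with c ≈ 340 m/s at room temperature, an echo delay of about 5.9 ms corresponds to a distance of roughly 1 m.

double echoDistance(double t)            // t: time from emission to reception, in seconds
{
    const double c = 340.0;              // speed of sound in air, m/s (temperature dependent)
    return c * t / 2.0;                  // divide by 2: the wave travels to the obstacle and back
}
// Example: echoDistance(0.0059) is approximately 1.0 m.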
4.2 Ultrasonic ranging software design
The ultrasonic ranging software mainly includes ranging and data transmission, and its flow chart is shown in Figure 4.
4.3 Speech Synthesis Module
This system uses the XF-3011 speech synthesis chip, which communicates with the S3C2410 through a serial port. All commands and data that the S3C2410 sends to the XF-3011 must be encapsulated as "frames" and then transmitted to the chip over the serial port. The maximum frame length is 204 bytes (including the frame header mark byte). The specific frame format is shown in Table 1.
Table 1 Speech synthesis chip communication transmission data frame format
When the XF-3011 receives a correct command frame, it immediately returns 0x41. If the command is a speech synthesis command, the chip starts synthesizing the received text data; once all of the text has been synthesized it returns 0x4F to the host and then plays the voice.
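The following is a hedged sketch of sending one command frame to the XF-3011 over an already-opened and configured serial port. The exact field layout and the header value below are placeholders and must be taken from Table 1 and the chip datasheet; only the 204-byte limit and the 0x41/0x4F feedback bytes come from the text above.

#include <unistd.h>
#include <cstring>

const unsigned char FRAME_HEADER = 0xFD;        // hypothetical frame header mark byte
const int MAX_FRAME = 204;                      // maximum frame length stated above

int sendSpeechFrame(int ttyFd, const unsigned char *payload, int payloadLen)
{
    unsigned char frame[MAX_FRAME];
    if (payloadLen + 1 > MAX_FRAME)             // the header byte counts toward the limit
        return -1;
    frame[0] = FRAME_HEADER;
    memcpy(frame + 1, payload, payloadLen);     // length/command/text fields as per Table 1
    int total = payloadLen + 1;
    if (write(ttyFd, frame, total) != total)    // send the frame over the serial port
        return -1;
    unsigned char ack = 0;
    if (read(ttyFd, &ack, 1) != 1 || ack != 0x41)   // 0x41: command frame accepted
        return -1;
    // for a synthesis command the chip sends 0x4F once the whole text has been synthesized
    return 0;
}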
4.4 Speech Synthesis Module Flowchart
When the distance measured by the ranging module falls below the safety distance set by the system, the main processor sends a control command to the XF-3011 to start speech synthesis and remind the driver to pay attention.
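A minimal sketch of this decision; the function and variable names and the warning text are illustrative, and sendSpeechFrame() is the sketch from Section 4.3.

void checkDistance(int ttyFd, double measured, double safetyDistance)
{
    if (measured < safetyDistance) {                     // too close: trigger the voice alarm
        const char warning[] = "...";                    // warning text to be synthesized
        sendSpeechFrame(ttyFd, (const unsigned char *)warning, sizeof(warning) - 1);
    }
}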
5. Implementation of human-computer interaction interface module
This system uses the Qtopia embedded desktop environment. It lets users manage system resources and programs conveniently, and displays programs on screen through the frame-buffer mechanism, providing friendly interaction with the user.
5.1 Establishment of graphical interface system
This system uses a graphical interface system based on Qt/Embedded. Building it involves three main steps:
① Compile Qt/X11. Qt/X11 runs under standard Linux on an IBM-compatible PC and mainly provides the graphical build environment and a simulation environment for Qt/Embedded and its applications;
② Compile Qt/Embedded. Qt/Embedded provides the function libraries for graphical interface systems and applications running on the embedded Linux platform;
③ Compile Qtopia. Using the build tools provided by Qt/X11 and the libraries provided by Qt/Embedded, the final graphical interface system for the embedded Linux platform and the applications in this environment are compiled.
5.2 Expanding the Visual Reversing Program
The Qtopia graphical desktop environment provides a convenient mechanism for adding applications to the Qtopia platform. Publishing an application on Qtopia requires three files: an executable file, a launcher file, and an icon file. Here the executable of the visual reversing program is named car; the icon file is a .png file, for which a car.png can be made by hand; the launcher file is a .desktop file, and car.desktop can be created by referring to an existing .desktop file in Qtopia. Its content is as follows:
[Desktop Entry]
Comment=A car program
Exec=car
Icon=car
Type=Application
Name=car
After completing the above, copy the three files to the corresponding Qtopia directories: copy the icon file car.png to the pics directory under the Qtopia directory, the executable car to the bin directory, and the launcher file car.desktop to the apps/Applications directory. Then restart Qtopia and click the visual reversing icon on its interface; the visual reversing function starts, and the result is shown in Figure 6.
6. Conclusion
This paper has introduced the design of an auxiliary reversing system based on the S3C2410. Because the system uses the high-performance, low-power S3C2410 embedded microprocessor and combines visual reversing with a ranging alarm, the video captured by the camera is displayed well on the LCD and the real-time ranging requirements of reversing are met. The system overcomes the small size and narrow field of view of the rear-view mirror, eliminates the accident risks of reversing blindly, and greatly improves the safety and efficiency of reversing. Experimental debugging shows that the auxiliary reversing system runs well, is easy to operate, and basically meets the design requirements.