Implementation of wireless vehicle video surveillance based on Au1200

Publisher: Huanle    Latest update: 2012-03-07    Keywords: Au1200
Introduction

With the development of wireless broadband networks, the rollout of digital mobile television, and the spread of multimedia technology, wireless in-vehicle media processing systems that integrate multiple functions have emerged. They can be widely used in transportation systems such as railways, subways, and passenger coaches.

This paper briefly introduces the design of an embedded wireless in-vehicle media processing system based on the Au1200 processor, which integrates wireless transmission, video playback, video monitoring, and other functions. It describes the ffmpeg-based soft compression method in detail and presents the design and implementation of video acquisition and video compression in the video monitoring part of the in-vehicle system.

1 Introduction to Wireless In-Vehicle Media Processing System

The network architecture of the Au1200-based wireless in-vehicle media processing system is shown in Figure 1. It adopts a client/server architecture and consists of three parts: the in-vehicle client, the station server, and the communication network. The client uses the Alchemy Au1200 as its main control chip. Built on a MIPS32 core, the Au1200 is a low-power, high-performance embedded processor designed for applications such as digital multimedia players and automotive infotainment systems. The system block diagram of the wireless in-vehicle media processor, which exploits the Au1200's strengths in media processing and its rich peripheral interfaces, is shown in Figure 2. Users can control the system's wireless transmission, video playback, video monitoring, and other functions through buttons.

Figure 1  Network architecture of the Au1200-based wireless in-vehicle media processing system

The server side uses a general-purpose PC server to provide multimedia resource management and wireless network services for the entire system.

Each vehicle equipped with a wireless on-board media processor can connect to the server via a wireless network and transmit media resources in a specified manner.

2 Hardware Design of Wireless Vehicle Video Surveillance

The Au1200 embedded processor uses a MIPS32 core and supports multiple media formats, including MPEG-1, MPEG-2, MPEG-4, WMV9, H.263, MP3, WMA, ASF, AVI, and JPEG. It integrates a dedicated Media Acceleration Engine (MAE) and requires no external DSP, which simplifies the programming environment and reduces the component count. It offers rich on-chip resources and external interfaces, supporting USB 2.0, IDE, CCIR656 camera, and other interfaces. The hardware block diagram of wireless vehicle video monitoring is shown in Figure 3.

Figure 3  Hardware block diagram of wireless vehicle video surveillance

These include:

(1) Video monitoring input: OmniVision's color CMOS image sensor OV9650 serves as the system's video input device. The Au1200's built-in CIM (Camera Interface Module) makes it easy to control the OV9650. The camera's working mode is configured over I2C, and the captured video data is read in through the CIM interface and mapped into memory. The video data stream is then routed according to the different processing requirements to implement the system's recording, storage, playback, and transmission functions.

(2) Video surveillance data storage: the captured video data is compressed in real time into files of the specified video format using the ffmpeg soft compression method and stored on the hard disk.

(3) Real-time playback of video surveillance data: after the captured video data is mapped into memory, the RGB video data is output directly to the LCD buffer, enabling real-time playback of the surveillance video on the LCD (a minimal sketch of this LCD-buffer path appears after this list). The system also supports real-time playback on a display device with a VGA interface; in this case the digital video data must be converted to analog video. ADI's ADV7123, a triple 10-bit high-speed video DAC, performs this conversion: its three DACs process the red, green, and blue video data respectively, enabling high-resolution display on an analog display terminal.

(4) Remote transmission of video surveillance: the client system connects to the wireless LAN through its wireless network card (supporting 802.11b/g) and uploads the video files on the hard disk to the server for storage.

(5) User control: the system provides function buttons connected to interrupt-capable GPIO ports, so users can conveniently control video surveillance with key presses.
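As a sketch of the LCD playback path in item (3), the code below assumes the LCD is exposed as a standard Linux framebuffer device (/dev/fb0) and that the captured frame is already in the framebuffer's RGB pixel format; the actual system may drive the LCD buffer differently.

/*
 * Hedged sketch: copy one captured RGB frame into the LCD framebuffer.
 * Assumes the LCD is exposed as /dev/fb0 and the frame already matches
 * the framebuffer's pixel format.
 */
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/fb.h>

int show_frame(const uint8_t *rgb_frame, size_t frame_size)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0)
        return -1;

    struct fb_var_screeninfo vinfo;
    struct fb_fix_screeninfo finfo;
    ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);   /* resolution, bits per pixel */
    ioctl(fd, FBIOGET_FSCREENINFO, &finfo);   /* line length, buffer layout */

    size_t fb_size = finfo.line_length * vinfo.yres;
    uint8_t *fb = mmap(NULL, fb_size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) {
        close(fd);
        return -1;
    }

    /* copy the frame, clamped to the framebuffer size */
    memcpy(fb, rgb_frame, frame_size < fb_size ? frame_size : fb_size);

    munmap(fb, fb_size);
    close(fd);
    return 0;
}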

3 Software Design of Wireless Vehicle Video Surveillance

The software part of wireless vehicle video surveillance mainly includes:

(1) Kernel and driver

The system uses the Linux 2.6.11 kernel. Using the make menuconfig command, the kernel and drivers are configured according to user needs to generate a kernel image. The device drivers are closely tied to the system hardware and are integrated into the kernel as modules after modification and debugging. The drivers involved include the user key driver, camera driver, LCD/VGA display driver, hard disk driver, the MAE driver used by the player, and the wireless network card driver used for wireless transmission. The camera driver, a key part of the system, is described below.

In the camera driver, a data structure cim_cmos_camera_config is defined to describe the camera device.
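Such a structure might look roughly like the sketch below. Apart from config_cmd, which the text describes explicitly, the field names and types are assumptions for illustration and may differ from the original driver.

#include <linux/types.h>

/* Sketch of the camera-description structure named above; only config_cmd
 * (a hexadecimal register table per working mode) is described in the text,
 * the remaining fields are assumed for illustration. */
struct cim_cmos_camera_config {
    char name[16];                 /* camera model, e.g. "OV9650"           */
    u8   i2c_addr;                 /* SCCB/I2C slave address                */
    u32  width;                    /* capture resolution                    */
    u32  height;
    u32  pixel_format;             /* data format delivered over the CIM    */
    const u8 (*config_cmd)[2];     /* hex {register, value} pairs for the
                                      selected working mode                 */
    int  config_len;               /* number of entries in config_cmd       */
};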

Through the above data structure, the various parameters of the camera device can be effectively described. Among them, config_cmd is a hexadecimal array corresponding to the camera configuration registers for the different working modes.
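For illustration, the driver might walk this table and write each register over I2C using the kernel's standard SMBus helper; the function below is a sketch under that assumption (cim_camera_apply_config is not a name taken from the original driver).

#include <linux/i2c.h>

/* Hedged sketch: push the selected config_cmd table to the sensor over I2C.
 * i2c_smbus_write_byte_data() is the standard Linux SMBus helper; the table
 * contents come from the (assumed) structure sketched above. */
static int cim_camera_apply_config(struct i2c_client *client,
                                   const struct cim_cmos_camera_config *cam)
{
    int i, ret;

    for (i = 0; i < cam->config_len; i++) {
        /* each entry is a { register, value } pair in hex */
        ret = i2c_smbus_write_byte_data(client,
                                        cam->config_cmd[i][0],
                                        cam->config_cmd[i][1]);
        if (ret < 0)
            return ret;    /* propagate the I2C bus error */
    }
    return 0;
}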

The driver exposes an API to upper-layer applications, which access it through system calls that map to the corresponding member functions of the file_operations data structure. When the upper-layer program makes a function call, the kernel invokes the matching file_operations member function pointer in the driver to carry out the corresponding operation.

The following definitions are made in the system's CIM interface driver:
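A file_operations table of this kind might look as follows; the au1200_cim_* handler names are assumptions (the handlers themselves are implemented elsewhere in the driver), while the set of operations matches the description in the next paragraph. Note that Linux 2.6.11 still uses the .ioctl member rather than .unlocked_ioctl.

#include <linux/fs.h>
#include <linux/module.h>

/* Sketch of the CIM driver's file_operations table; handler names are
 * illustrative, the operation set follows the text. */
static struct file_operations au1200_cim_fops = {
    .owner   = THIS_MODULE,
    .open    = au1200_cim_open,     /* power up and initialize the sensor   */
    .ioctl   = au1200_cim_ioctl,    /* set working mode, start/stop capture */
    .read    = au1200_cim_read,     /* read captured frames                 */
    .write   = au1200_cim_write,    /* write configuration data             */
    .mmap    = au1200_cim_mmap,     /* map the frame buffer into user space */
    .release = au1200_cim_release,  /* stop capture and free resources      */
};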

Its member functions implement the basic operations of opening, controlling, reading, writing, memory-mapping, and releasing the camera device attached to the Au1200 CIM interface. For example, the camera can be configured through an ioctl operation:
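From user space, such a configuration call might look like the sketch below; the /dev/cim node name and the CIM_SET_MODE command macro are assumptions, since the article does not reproduce the driver's actual ioctl command definitions.

/*
 * Hedged sketch: configure the camera from user space via ioctl.
 * Device node name and command macro are hypothetical.
 */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define CIM_SET_MODE  _IOW('c', 1, int)   /* hypothetical command number */
#define MODE_VGA_RGB  0                   /* hypothetical working mode   */

int configure_camera(void)
{
    int fd = open("/dev/cim", O_RDWR);    /* assumed device node */
    if (fd < 0)
        return -1;

    int mode = MODE_VGA_RGB;
    if (ioctl(fd, CIM_SET_MODE, &mode) < 0) {  /* driver writes the matching
                                                  config_cmd table over I2C */
        close(fd);
        return -1;
    }
    return fd;   /* keep the device open for read()/mmap() of frames */
}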

(2) Library and protocol stack

In order to support the upper-level applications, the software system must also include a number of function libraries and protocol stacks: the basic libraries required for system operation, the audio/video codec libraries, the Qtopia library for the user interface, and the protocol stack composed of the various interface protocols.

(3) Application layer

The software flow chart of video surveillance is shown in Figure 4. After the camera is started, the captured video data is mapped into memory, and output control selects the flow direction of the video data. The data processing part mainly involves ffmpeg-based video compression.

Figure 4  Video surveillance software flow chart

(4) User control part

The system uses Qtopia-core-opensource-4.2.2 to develop the user interface. Because the system is controlled by keys, the key driver must be hooked into the Qt library. Key presses are captured through interrupts, and the signal and slot mechanism in Qt handles the signal delivery: when a button is pressed, a function in the user program is triggered and the corresponding signal is emitted, thereby controlling the entire system.

4 Soft compression method based on ffmpeg

ffmpeg is free, open-source software that provides a complete solution for recording, converting, and streaming audio and video. With ffmpeg soft compression, the captured video data is compressed in real time without adding extra hardware overhead to the system. When writing the video processing application, the video data must be described using the data structures defined by ffmpeg; the application then calls the various ffmpeg library functions to compress the video data captured by the camera into the configured format and save it as a video file. The processing flow is shown in Figure 5, and a code sketch of the complete flow is given after the step list below.

These include:

(1) ffmpeg initialization: define data structures related to video processing such as AVFormatContext, AVOutputFormat, AVStream, AVCodecContext, AVCodec, AVFrame, AVPicture, etc., and initialize the corresponding data structures through functions such as av_register_all() and av_alloc_format_context().

(2) Compression parameter setting: mainly involves video compression related parameters, such as frame rate, video resolution, encoding type, etc., which are set through the av_set_parameters() function.

(3) Image format conversion and video data filling: According to the video surveillance requirements, the image format is converted using functions such as img_convert(), and the converted video data is filled into the AVPicture data structure through functions such as avpicture_fill() and fill_yuv_image() for use by the encoder.

(4) ffmpeg video encoding: use functions such as avcodec_encode_video() and av_rescale_q() to call the ffmpeg encoding library for video encoding.

(5) File storage operation: Use functions such as av_write_header(), av_write_frame(), and av_write_trailer() to write the compressed video data into the specified file.

(6) End of acquisition: after acquisition is complete, use functions such as avcodec_close() and av_free() to release memory resources and exit the program.
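Putting the six steps together, a minimal sketch of the flow is given below. It is written against the legacy ffmpeg API in use around the time of this design (guess_format(), av_set_parameters(), avcodec_encode_video(), etc.); function names and signatures differ in later ffmpeg releases, the file name, resolution, and frame rate are illustrative, and the conversion of the camera frame into the encoder's picture buffer (img_convert()) is only indicated by a comment. Error checking is omitted for brevity.

/* Hedged sketch of the capture-compress-store flow using the legacy ffmpeg
 * API; names and signatures vary between ffmpeg versions, so treat this as
 * an illustration of steps (1)-(6), not a drop-in implementation. */
#include <stdio.h>
#include <stdint.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

#define WIDTH       320          /* illustrative capture resolution */
#define HEIGHT      240
#define FPS         25
#define OUTBUF_SIZE 200000

int main(void)
{
    /* (1) ffmpeg initialization */
    av_register_all();
    AVOutputFormat  *fmt = guess_format(NULL, "monitor.avi", NULL);
    AVFormatContext *oc  = av_alloc_format_context();
    oc->oformat = fmt;
    snprintf(oc->filename, sizeof(oc->filename), "monitor.avi");

    /* (2) compression parameters: codec, resolution, frame rate, bit rate */
    AVStream       *st = av_new_stream(oc, 0);
    AVCodecContext *c  = st->codec;
    c->codec_id   = fmt->video_codec;
    c->codec_type = CODEC_TYPE_VIDEO;
    c->width      = WIDTH;
    c->height     = HEIGHT;
    c->time_base  = (AVRational){1, FPS};
    c->pix_fmt    = PIX_FMT_YUV420P;
    c->bit_rate   = 400000;
    av_set_parameters(oc, NULL);
    avcodec_open(c, avcodec_find_encoder(c->codec_id));

    /* (3) picture buffer that each converted camera frame is filled into */
    AVFrame *picture = avcodec_alloc_frame();
    uint8_t *picbuf  = av_malloc(avpicture_get_size(c->pix_fmt, WIDTH, HEIGHT));
    avpicture_fill((AVPicture *)picture, picbuf, c->pix_fmt, WIDTH, HEIGHT);
    uint8_t *outbuf  = av_malloc(OUTBUF_SIZE);

    url_fopen(&oc->pb, oc->filename, URL_WRONLY);
    av_write_header(oc);                           /* (5) file header */

    for (int i = 0; i < 250; i++) {                /* e.g. 10 s at 25 fps */
        /* here the CIM-mapped camera frame would be converted into
         * 'picture' with img_convert() and a YUV420P target format */

        /* (4) encode one frame */
        int size = avcodec_encode_video(c, outbuf, OUTBUF_SIZE, picture);
        if (size > 0) {
            AVPacket pkt;
            av_init_packet(&pkt);
            pkt.pts = av_rescale_q(c->coded_frame->pts,
                                   c->time_base, st->time_base);
            pkt.stream_index = st->index;
            pkt.data = outbuf;
            pkt.size = size;
            av_write_frame(oc, &pkt);              /* (5) write compressed data */
        }
    }

    av_write_trailer(oc);                          /* (5) file trailer */

    /* (6) end of acquisition: release resources */
    avcodec_close(c);
    av_free(picture);
    av_free(picbuf);
    av_free(outbuf);
    url_fclose(oc->pb);
    av_free(oc);
    return 0;
}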

ffmpeg provides a feature-rich set of audio and video libraries, including libavcodec, libavformat, libavdevice, libavfilter, libavutil, and libswscale, which offer users many audio- and video-processing operations. The ffmpeg functions called in the system application rely mainly on the libavformat library (which supports parsers and generators for all common audio/video formats) and the libavcodec library (an efficient and highly reusable audio/video codec library).

5 Conclusion

This paper has introduced the design and implementation of Au1200-based wireless vehicle video surveillance, focusing on video data compression and storage using the ffmpeg soft compression method, which has practical application value.
