Video surveillance systems are widely used in industrial, military, and civil settings, where they play an important role in security and environmental monitoring. These systems are gradually moving from analog to digital. With the rapid development of semiconductor technology and the growing maturity of multimedia video codecs, running high-performance, complex video-stream compression algorithms on embedded systems has become practical. Most surveillance systems today combine a dedicated processor or a RISC embedded processor with a DSP; this article instead discusses an implementation that pairs an ARM processor with software compression.
Overall design of video surveillance system
First, the system needs an overall plan: divide it into functional modules and determine how each module is implemented. The entire video surveillance system adopts a client/server (C/S) architecture with two main parts. The server comprises the acquisition, compression, and transmission programs running on the S3C2410 platform; the client is the receiving, decompression, and playback program running on a PC. The video surveillance terminal captures real-time video from the on-site camera and, after compression, transmits it over Ethernet to the remote PC client.
As the system structure diagram (Figure 1) shows, video image acquisition, packaging, and sending are completed on the server side, while image reception, unpacking, and playback are completed on the client side.
System hardware design
The system adopts a modular design comprising the following modules: the main controller module, the storage circuit module, the peripheral interface circuit module, and the power supply and reset circuit, as shown in Figure 2.
S3C2410 main controller module
The main controller module is the core of the whole system. The S3C2410 is Samsung's 16/32-bit microcontroller based on the ARM920T core, with a maximum operating frequency of 203 MHz. Its low power consumption, compact size, and fully static design make it particularly suitable for cost- and power-sensitive applications. The S3C2410 provides rich on-chip resources and supports Linux, making it a suitable choice for this system. It schedules the entire system, configures the function registers of every working chip at power-on, encodes the video stream, and directs the physical-layer chip to send the stream through the Ethernet controller.
System storage circuit module
The main controller also needs peripheral storage such as NAND Flash and SDRAM. The NAND Flash holds the Linux bootloader, the system kernel, the file system, application programs, environment variables, and system configuration files. The SDRAM, with its fast read and write speeds, serves as working memory while the system runs. The design uses 64 MB of NAND Flash and 64 MB of SDRAM.
Peripheral circuit module
The peripherals used in this design include a USB interface, a network interface, an RS232 interface, and a JTAG interface.
The USB host controller module of the video surveillance terminal connects to multiple USB cameras through a dedicated USB hub. During real-time monitoring, the image data captured by each camera travels through the hub to the USB host controller module, which hands it over to the S3C2410 processor for centralized processing. The S3C2410 encodes and compresses the captured images in real time, and the encoded bitstream goes directly into the send buffer to await transmission.
This design uses a CS8900A, a 16-bit Ethernet controller from Cirrus Logic, to provide the network interface; it adapts to different application environments through the settings of its internal registers. The S3C2410 controls and communicates with the CS8900A through the address, data, and control lines and the chip-select signal line. The connection between the two chips is shown in Figure 3: the CS8900A is selected by the S3C2410's nGCS3 signal, its INTRQ0 pin generates the interrupt signal, its data lines connect to the S3C2410's 16-bit data bus, and the address lines use A[24:0].
The CS8900A transfers data through a DMA channel. First, set the parameters of the transmit-control and transmit-address registers; the controller then reads data sequentially from the specified storage area into its internal transmit buffer, and the MAC encapsulates and sends it. After a group of data has been sent, a DMA interrupt is requested and handled by the S3C2410.
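Software reaches the CS8900A's internal registers through its PacketPage pointer/data port pair. The fragment below is a minimal bare-metal sketch of that access pattern only: the base address (the nGCS3 bank, assumed here to be 0x19000000) and the direct pointer dereferences (rather than a Linux ioremap mapping) are assumptions for illustration, and the port offsets are the chip's standard 16-bit I/O-mode offsets.

```c
#include <stdint.h>

/* Assumed base: the nGCS3 bank of the S3C2410, taken here as 0x19000000. */
#define CS8900_BASE   0x19000000u
#define CS8900_PPTR   (*(volatile uint16_t *)(CS8900_BASE + 0x0A)) /* PacketPage pointer port */
#define CS8900_PDATA  (*(volatile uint16_t *)(CS8900_BASE + 0x0C)) /* PacketPage data port    */

/* Read one 16-bit internal register via the pointer/data port pair. */
static uint16_t cs8900_read_pp(uint16_t reg)
{
    CS8900_PPTR = reg;
    return CS8900_PDATA;
}

/* Write one 16-bit internal register the same way. */
static void cs8900_write_pp(uint16_t reg, uint16_t val)
{
    CS8900_PPTR = reg;
    CS8900_PDATA = val;
}
```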
The RS-232 interface connects to the PC's serial port so that the PC can display and control the embedded system's status information. The JTAG interface is used mainly for debugging the system and for burning programs into the Flash.
System software design
The software design of the video surveillance terminal covers two main tasks:
(1) Build a software platform on the hardware. Building an embedded Linux development platform involves porting U-Boot, porting the embedded Linux kernel, and developing device drivers for the embedded Linux operating system.
(2) Develop the system applications on top of that platform. Using cross-compilation tools, develop the acquisition, compression, and transmission programs that run on the video surveillance terminal.
Building a Linux platform based on S3C2410
Linux has many advantages: it is open source; its kernel is powerful and stable, supporting multiple users, multithreading, and multiple processes with good real-time performance; its size and functionality are customizable; and it supports multiple architectures.
To build an embedded Linux development platform, first set up a cross-compilation environment, as shown in Figure 4. A complete cross-compilation environment comprises a host and a target machine. In this development, the host is a PC running Red Hat's Fedora Core 2 operating system, and the target is the S3C2410-based video surveillance terminal. The cross compiler is GCC 3.3.4 for ARM, and the embedded Linux kernel source package is version 2.6.8-rc.
The 2.6.8-rc kernel source package contains all functional modules, only some of which this system uses. Therefore, before compiling the kernel, configure it and trim away the redundant modules so that the customized kernel matches the system design. The specific steps are as follows:
(1) Run make menuconfig to configure the kernel: select the YAFFS file system and support for NFS boot. Because the system uses a USB camera, enable the USB device support modules, including USB device file support and the USB host controller driver. The USB camera is also a video device, so for applications to access it, enable the Video4Linux module as well.
(2) Use the make dep command to generate dependencies between kernel programs.
(3) The make zImage command generates the kernel image file.
(4) The make modules and make modules_install commands generate system loadable modules.
These steps generate the zImage kernel image file, which is then downloaded to the target platform's Flash.
This design uses an external USB camera whose driver is configured to load as a kernel module. First, the driver must be completed: it implements the basic I/O interface functions such as open, read, write, and close, along with interrupt handling, memory mapping, and the ioctl control interface for the I/O channel, and registers them in a struct file_operations. When an application performs open, close, read, write, and similar system calls on the device file, the embedded Linux kernel reaches the driver's functions through this file_operations structure. The USB driver is then compiled as a dynamically loadable module so that the camera can work normally.
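The registration pattern looks roughly like the skeleton below. This is a 2.6.8-era sketch, not the actual camera driver: the cam_* names are illustrative placeholders and the function bodies are stubbed out.

```c
#include <linux/fs.h>
#include <linux/module.h>

/* Stub implementations; a real driver fills these in. */
static int     cam_open(struct inode *inode, struct file *filp)    { return 0; }
static int     cam_release(struct inode *inode, struct file *filp) { return 0; }
static ssize_t cam_read(struct file *filp, char __user *buf,
                        size_t count, loff_t *ppos)                { return 0; }
static int     cam_ioctl(struct inode *inode, struct file *filp,
                         unsigned int cmd, unsigned long arg)      { return 0; }

/* The kernel dispatches the application's open/read/close/ioctl
 * system calls on the device file through this table. */
static struct file_operations cam_fops = {
    .owner   = THIS_MODULE,
    .open    = cam_open,
    .release = cam_release,
    .read    = cam_read,
    .ioctl   = cam_ioctl,   /* the 2.6.8-era ioctl entry point */
};
```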
Design of the video surveillance terminal software
The video surveillance terminal software divides into three functional parts: video acquisition, compression, and transmission. Its development builds on the embedded kernel configured above.
(1) Video acquisition part
The acquisition part uses the Video4Linux interface functions to access the USB camera devices and capture the real-time video stream. First, define the v4l_struct data structure covering basic device information, image attributes, and the attributes of the various signal sources. The acquisition module collects images from the USB cameras through the USB hub and starts multiple acquisition threads listening on different ports. When a connection request arrives, the corresponding acquisition thread immediately reads video stream data from the device buffer and places it in the video processing buffer for the next stage.
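A minimal single-frame capture through the Video4Linux (v4l1) interface of the 2.6.8 kernel might look like the sketch below; the device node /dev/video0, the 320x240 size, and the assumption that read() returns an RGB24 frame are illustrative, and most error handling is omitted.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev.h>   /* Video4Linux (v4l1) interface of the 2.6.8 era */

int main(void)
{
    struct video_capability cap;
    struct video_window     win;
    unsigned char frame[320 * 240 * 3];   /* one frame, assuming RGB24 output */

    int fd = open("/dev/video0", O_RDONLY);   /* assumed device node */
    if (fd < 0) { perror("open"); return 1; }

    ioctl(fd, VIDIOCGCAP, &cap);          /* query basic device information */
    printf("device: %s\n", cap.name);

    ioctl(fd, VIDIOCGWIN, &win);          /* read, adjust, write image size */
    win.width  = 320;
    win.height = 240;
    ioctl(fd, VIDIOCSWIN, &win);

    read(fd, frame, sizeof(frame));       /* pull one frame into the buffer */

    close(fd);
    return 0;
}
```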
(2) Compression of video data
In the video surveillance system, a large amount of data must travel over the network. To guarantee transmission quality and real-time delivery, the data is encoded and compressed before transmission to reduce its volume. This design uses the MPEG-4 standard, with the open-source xvidcore library, downloadable from the Internet, as the core compression algorithm. Xvidcore is an efficient and highly portable multimedia encoder; cross-compile it on the PC and copy the generated files to the target system.
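Against the xvidcore 1.x API, per-frame compression follows a create-then-encode pattern roughly like the sketch below; the helper names, the fixed frame rate, and the buffer handling are illustrative assumptions, and error checking is omitted.

```c
#include <string.h>
#include <xvid.h>

/* Hypothetical helper: set up one XviD encoder instance (xvidcore 1.x API). */
static void *encoder_create(int width, int height, int fps)
{
    xvid_gbl_init_t gbl;
    xvid_enc_create_t create;

    memset(&gbl, 0, sizeof(gbl));
    gbl.version = XVID_VERSION;
    xvid_global(NULL, XVID_GBL_INIT, &gbl, NULL);   /* one-time library init */

    memset(&create, 0, sizeof(create));
    create.version = XVID_VERSION;
    create.width   = width;
    create.height  = height;
    create.fincr   = 1;        /* frame rate = fbase / fincr */
    create.fbase   = fps;
    xvid_encore(NULL, XVID_ENC_CREATE, &create, NULL);
    return create.handle;      /* instance handle used per frame */
}

/* Encode one planar YUV 4:2:0 frame into `out`; returns the bitstream length. */
static int encode_frame(void *handle, unsigned char *yuv,
                        int width, unsigned char *out, int out_size)
{
    xvid_enc_frame_t frame;

    memset(&frame, 0, sizeof(frame));
    frame.version         = XVID_VERSION;
    frame.bitstream       = out;
    frame.length          = out_size;
    frame.input.csp       = XVID_CSP_I420;   /* contiguous I420 input buffer */
    frame.input.plane[0]  = yuv;
    frame.input.stride[0] = width;
    frame.type            = XVID_TYPE_AUTO;  /* codec chooses I/P frame type */
    frame.quant           = 0;               /* 0 = codec-chosen quantiser   */

    return xvid_encore(handle, XVID_ENC_ENCODE, &frame, NULL);
}
```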
(3) Video data transmission part
The transmission module delivers the compressed video stream to the remote PC client. Video stream transmission is based on the TCP/IP protocol suite and uses the standard RTP transport protocol, the established way to carry streaming media in real time. Real-time streaming programs on Linux typically use an open-source RTP library such as LIBRTP or JRTPLIB. This design defines a relatively simple handshake protocol: the receiving program on the PC side repeatedly sends request packets to the acquisition terminal, and the terminal packages the captured images and returns them to the host. Each RTP packet is encapsulated in a UDP segment, which is in turn encapsulated in an IP packet and sent out. The receiver reassembles the received frames and restores them to video data.
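Instead of pulling in an RTP library, the fixed 12-byte RTP header (RFC 3550) can also be built by hand over a plain UDP socket, as in the sketch below; the payload type 96, the SSRC value, and the assumption that one compressed frame fits in a single Ethernet-MTU datagram are all illustrative.

```c
#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

/* Fixed 12-byte RTP header per RFC 3550. */
struct rtp_header {
    uint8_t  vpxcc;      /* version=2, padding, extension, CSRC count */
    uint8_t  mpt;        /* marker bit + payload type                 */
    uint16_t seq;        /* sequence number, network byte order       */
    uint32_t timestamp;
    uint32_t ssrc;       /* stream source identifier                  */
};

/* Wrap one compressed frame in an RTP header and send it as one UDP datagram. */
static int send_rtp_frame(int sock, const struct sockaddr_in *dst,
                          const uint8_t *frame, size_t len,
                          uint16_t seq, uint32_t ts)
{
    uint8_t pkt[1500];                    /* assumed: frame fits one MTU */
    struct rtp_header hdr;

    if (len > sizeof(pkt) - sizeof(hdr))
        return -1;                        /* larger frames need fragmenting */

    hdr.vpxcc     = 0x80;                 /* version 2, no padding/extension */
    hdr.mpt       = 96;                   /* dynamic payload type, assumed   */
    hdr.seq       = htons(seq);
    hdr.timestamp = htonl(ts);
    hdr.ssrc      = htonl(0x12345678);    /* arbitrary illustrative SSRC     */

    memcpy(pkt, &hdr, sizeof(hdr));
    memcpy(pkt + sizeof(hdr), frame, len);
    return sendto(sock, pkt, sizeof(hdr) + len, 0,
                  (const struct sockaddr *)dst, sizeof(*dst));
}
```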