PC-based network video surveillance systems developed rapidly in the late 1990s and remain the mainstream of video surveillance. However, they suffer from poor stability, high power consumption, and poor software openness. With the widespread application of embedded systems, embedded network video surveillance systems have emerged; combining multimedia technology, image processing, embedded operating systems, and networking, they bring video surveillance to a new stage. Such embedded surveillance systems are finding ever wider application thanks to their small size, low power consumption, low cost, high stability, simple operation, and good software openness.
2 Overall Architecture of Embedded Video Surveillance System
This system consists of two parts, a video monitoring terminal and a monitoring control center, as shown in Figure 1. The video monitoring terminal consists of a camera and an embedded system running the video monitoring software. The camera captures video, which is compressed in software using H.263 encoding and transmitted to the monitoring center over the IP network. The terminal also receives control commands from the monitoring center and adjusts parameters such as the resolution and frame rate of the monitoring image. The monitoring center is generally a computer running the monitoring center software; it receives the compressed video stream from the remote terminal, decodes it, displays the monitoring picture through its display module, and stores the compressed stream.
3 Introduction to Traditional Embedded Video Surveillance Terminal Solutions
Embedded systems generally use a single ARM-core chip as the central processor. ARM uses a RISC instruction set, which is well suited to control code but not to real-time digital signal processing tasks such as voice processing and video encoding/decoding. A network monitoring terminal, however, involves not only complex control code but also computation-intensive video encoding, so a single ARM-core embedded system struggles to handle all of the terminal's tasks. Limited by the ARM core's arithmetic throughput, a terminal built this way achieves a very low video encoding frame rate, which cannot meet the human eye's demand for smooth monitoring video. A DSP is a chip designed specifically for digital signal processing and provides the real-time processing capability that voice and video applications require. If the respective strengths of ARM and DSP are exploited and the terminal's tasks are allocated sensibly between the two cores, overall system performance improves greatly. The software block diagram of the video monitoring terminal is shown in Figure 2.
4 Improved design based on OMAP5912
OMAP5912 is an ARM+DSP dual-core processor from TI. It integrates a high-efficiency TMS320C55x digital signal processor (DSP) and a high-performance ARM9 RISC microprocessor, so it can provide both the arithmetic throughput required for video compression encoding and the general-purpose performance required for system-level operations. Through a shared-memory architecture and the DSP/BIOS Bridge API provided by TI, the ARM side can conveniently delegate computation-heavy functions to the DSP and execute them asynchronously without occupying ARM processor resources. For OMAP-based development, software developers can use TI's DSP/BIOS Bridge to complete the program development of the entire system quickly, without writing separate programs for the two processors or working in the more difficult DSP programming environment.
According to the tasks the embedded video surveillance terminal must perform, the video acquisition module, network transmission module, interface control module, and operating system are handed to the ARM, while the video encoding module is handed to the DSP alone. The ARM controls the execution of video encoding tasks on the DSP through the application programming interface provided by DSP/BIOS Bridge, and exchanges task results and status information with the DSP. In this system, the video encoding part of the program is driven through the standard multimedia application programming interface (MM API) and the multimedia engine; the related DSP tasks are dispatched by DSP/BIOS Bridge through the DSP API; finally, DSP/BIOS Bridge coordinates data, I/O streams, and DSP task control. The improved video surveillance terminal software system is shown in Figure 3.
During the specific implementation process, special consideration must be given to code optimization of the video encoding algorithm on the DSP side.
First, allocate the on-chip memory (fast but small) sensibly and keep frequently used data, such as the coding quantization tables and IDCT coefficients, on chip. The raw video data volume is large: one YUV420 QCIF frame occupies about 37 KB, so an entire frame cannot be held in on-chip memory. Instead, each frame can be read from off-chip memory into on-chip memory in several pieces via DMA and processed piece by piece.
Second, use TI's image processing function library, IMGLIB, wherever possible. IMGLIB is a library developed specifically for image and video processing that TI has deeply optimized; using it not only simplifies development but also maximizes the efficiency of the video encoding algorithm.
Third, use the special operation instructions built into the DSP (mainly simple arithmetic operations implemented in optimized assembly) to improve code execution efficiency. Finally, apply general code optimization techniques: use more parallel operations, reduce conditional branches, and structure loops sensibly. For computational convenience, floating-point numbers can be converted to fixed point, and shift, addition, and subtraction operations can replace multiplication and division.
This system realizes the coordinated operation of a dual-core architecture, overcoming the shortcomings of the traditional single-core schemes: the insufficient arithmetic capability of a single ARM core, and the complex control code and poor usability of a single DSP core. In actual use, the monitoring image quality and frame rate are noticeably better than those of the single-ARM-core system, and the addition of the DSP core does not reduce the system's usability. Experimental data comparing the two schemes is given below.
5 Test Results
The test hardware platforms are a single-ARM9-core Samsung S3C2410 development board and a dual-core (ARM9+DSP) TI OMAP5912 OSK development board, both running embedded Linux. The test sequences are foreman and news in QCIF (176×144) format, encoded with H.263 at the same bit rate of 128 kbit/s. The test results are listed in Table 1.
As can be seen from Table 1, the improved embedded video surveillance scheme proposed in this paper achieves markedly higher video compression efficiency than the traditional scheme. It greatly raises the encoding frame rate of the surveillance video without increasing bandwidth requirements, and essentially meets the human eye's requirement for video fluency. The running system is shown in Figure 4. The embedded video surveillance terminal is controlled through the Telnet client Tera Term, and the monitoring picture is viewed through the IE browser included with Windows on the PC.
6 Conclusion
The OMAP platform has a distinctive dual-core structure. This article exploits the characteristics of the OMAP dual core to improve the embedded network monitoring terminal, thereby enhancing its practicality. It also briefly describes the software optimization methods for OMAP development, in the hope of serving as a reference for developers using OMAP.