An Improved Embedded Network Video Surveillance System

     1 Introduction

      PC-based network video surveillance systems developed rapidly in the late 1990s and remain the mainstream of video surveillance. However, such systems suffer from poor stability, high power consumption, and poor software openness. With the widespread adoption of embedded systems, embedded network video surveillance systems have emerged; they combine multimedia technology, image processing, embedded operating systems, and networking, bringing video surveillance technology to a new stage. With their small size, low power consumption, low cost, high stability, simple operation, and good software openness, embedded video surveillance systems are finding increasingly wide application.

      2 Overall Architecture of Embedded Video Surveillance System

      This system consists of two parts, the video monitoring terminal and the monitoring control center, as shown in Figure 1. The video monitoring terminal consists of an embedded system running video monitoring software and a camera. The camera captures video, which the terminal compresses in software using H.263 encoding and transmits to the monitoring center over the IP network; the terminal also receives control commands from the monitoring center and adjusts parameters of the monitoring image such as resolution and frame rate. The monitoring center is generally a computer running the monitoring center software; it receives the compressed video stream from the remote monitoring terminal, decodes it, displays the monitoring picture through its display module, and stores the compressed video stream.
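
      To make the terminal's role concrete, the following is a minimal sketch of its capture-encode-send loop, assuming a plain TCP connection to the monitoring center. capture_frame(), h263_encode(), the port number, and the center's address are hypothetical placeholders stubbed so the sketch compiles; this is not the article's actual code.

#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

typedef struct { int width, height, fps; } encoder_params_t;

/* Stubs standing in for the camera driver and the H.263 software encoder. */
static int capture_frame(uint8_t *yuv, int w, int h)
{
    memset(yuv, 0x80, (size_t)(w * h * 3 / 2));   /* grey frame as a placeholder */
    return 0;
}
static int h263_encode(const uint8_t *yuv, uint8_t *bs, const encoder_params_t *p)
{
    (void)yuv; (void)p;
    memset(bs, 0, 256);                           /* pretend 256 bytes were produced */
    return 256;
}

int main(void)
{
    encoder_params_t params = { 176, 144, 15 };   /* QCIF at 15 fps (illustrative) */
    static uint8_t yuv[176 * 144 * 3 / 2];        /* one YUV420 frame, ~37 KB */
    static uint8_t bitstream[64 * 1024];

    int sock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in center = { 0 };
    center.sin_family = AF_INET;
    center.sin_port   = htons(8000);              /* hypothetical center port */
    inet_pton(AF_INET, "192.168.0.10", &center.sin_addr);   /* hypothetical center address */
    if (connect(sock, (struct sockaddr *)&center, sizeof(center)) < 0)
        return 1;

    for (;;) {
        capture_frame(yuv, params.width, params.height);
        int len = h263_encode(yuv, bitstream, &params);
        send(sock, bitstream, (size_t)len, 0);    /* push compressed stream to the center */
        /* A real terminal would also poll the socket for control commands that
         * change params (resolution, frame rate) as described above. */
    }
}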

      3 Introduction to Traditional Embedded Video Surveillance Terminal Solutions

      Embedded systems generally use a single ARM-core chip as the central processor. ARM is a RISC architecture, well suited to control code but not to real-time digital signal processing such as voice processing and video encoding and decoding. A network monitoring terminal, however, involves not only complex control code but also heavy, continuous video encoding, and a single-ARM-core system struggles to handle all of its tasks. Limited by the ARM core's numerical computing capability, a terminal built this way achieves only a very low encoding frame rate, which cannot satisfy the human eye's demand for smooth surveillance video. A DSP is a chip designed specifically for digital signal processing and provides the real-time processing power that voice and video applications require. If the respective strengths of ARM and DSP are exploited and the terminal's tasks are allocated sensibly across the two cores, the performance of the whole system improves greatly. The software block diagram of the video monitoring terminal is shown in Figure 2.

      4 Improved design based on OMAP5912

      The OMAP5912 is an ARM+DSP dual-core processor from TI that integrates a high-efficiency TMS320C55x digital signal processor (DSP) with a high-performance ARM9 RISC microprocessor. It can therefore supply the arithmetic processing power needed for video compression encoding while also providing the general-purpose performance needed for system-level operations. Through a shared-memory architecture and the DSP/BIOS Bridge API provided by TI, the ARM side can conveniently delegate computation-heavy functions to the DSP and have them execute asynchronously without occupying the ARM processor's resources. For OMAP-based development, software developers can use TI's DSP/BIOS Bridge to complete the program development of the whole system quickly, without writing programs for the two processors separately or working in the more difficult DSP programming environment.

      According to the tasks the embedded video surveillance terminal must perform, the video acquisition module, network transmission module, interface control module, and operating system can be assigned to the ARM, while the video encoding module is handed to the DSP alone. The ARM controls the execution of the video encoding task on the DSP through the application programming interface provided by DSP/BIOS Bridge and exchanges task results and status information with the DSP. In this system, the video encoding part of the program is reached through the standard multimedia application programming interface (MM API) and the multimedia engine; the related DSP tasks are handled by DSP/BIOS Bridge through the DSP API; and DSP/BIOS Bridge coordinates data, I/O streams, and DSP task control. The improved video surveillance terminal software system is shown in Figure 3.
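
      The ARM-side control flow can be pictured roughly as follows. This is only a hedged sketch: the dsp_* calls are hypothetical placeholders standing in for the DSP/BIOS Bridge node API (attach to the DSP, create an encoder node, exchange messages through shared memory), not the real Bridge function names or signatures, and they are stubbed here only so the sketch compiles.

#include <stdint.h>
#include <stddef.h>

typedef struct dsp_node dsp_node_t;   /* opaque handle to a DSP-side task */

/* Hypothetical stand-ins for the Bridge calls; a real implementation would
 * attach to the DSP, allocate/create/run an encoder node, and use the
 * Bridge's message and stream interfaces. Stubbed so this file builds. */
static dsp_node_t *dsp_encoder_node_create(const char *task_name)
{ (void)task_name; static int dummy; return (dsp_node_t *)&dummy; }
static int dsp_node_send_frame(dsp_node_t *n, const uint8_t *yuv_shared, int len)
{ (void)n; (void)yuv_shared; (void)len; return 0; }
static int dsp_node_wait_result(dsp_node_t *n, uint8_t *bitstream, int max_len)
{ (void)n; (void)bitstream; (void)max_len; return 0; }

/* Called from the ARM-side capture loop: hand one frame to the DSP encoder
 * task and collect the compressed bitstream when the DSP signals completion. */
int encode_frame_on_dsp(const uint8_t *yuv_shared, int yuv_len,
                        uint8_t *bitstream, int max_len)
{
    static dsp_node_t *enc = NULL;
    if (enc == NULL)
        enc = dsp_encoder_node_create("h263_encode");   /* load the DSP task once */
    if (enc == NULL)
        return -1;

    /* The frame sits in memory visible to both cores, so only a descriptor
     * crosses the Bridge; the ARM is free to run control code until the
     * result message arrives. */
    if (dsp_node_send_frame(enc, yuv_shared, yuv_len) < 0)
        return -1;
    return dsp_node_wait_result(enc, bitstream, max_len);  /* encoded byte count */
}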

      During the specific implementation process, special consideration must be given to code optimization of the video encoding algorithm on the DSP side.

      First, allocate the on-chip memory (fast but small) sensibly and keep frequently used data, such as the coding quantization tables and IDCT coefficients, on chip. The raw video data are too large to fit: one YUV420 QCIF frame occupies about 37 Kbytes, so a whole frame cannot be held in on-chip memory. Instead, each frame is moved from off-chip memory into on-chip memory in several pieces by DMA and processed piece by piece, as sketched below.
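
      A minimal sketch of this strip-wise staging, assuming the luma plane is processed one 16-line macroblock row at a time; memcpy stands in for the DMA transfer a real C55x implementation would issue, and the on-chip placement would normally be arranged through the linker command file.

#include <stdint.h>
#include <string.h>

#define WIDTH   176
#define HEIGHT  144
#define STRIP   16                       /* one macroblock row of luma */

/* In the real system this buffer lives in on-chip memory (placed via the
 * linker command file); here it is simply a static array. */
static uint8_t onchip_strip[WIDTH * STRIP];

static void process_strip(uint8_t *strip, int w, int h)
{
    (void)strip; (void)w; (void)h;       /* placeholder for DCT/quantization etc. */
}

void encode_luma_plane(uint8_t *frame_y_external)
{
    for (int row = 0; row < HEIGHT; row += STRIP) {
        /* "DMA in": external SDRAM -> on-chip buffer (memcpy as a stand-in). */
        memcpy(onchip_strip, frame_y_external + (size_t)row * WIDTH, WIDTH * STRIP);

        process_strip(onchip_strip, WIDTH, STRIP);

        /* "DMA out": write the processed strip back to external memory. */
        memcpy(frame_y_external + (size_t)row * WIDTH, onchip_strip, WIDTH * STRIP);
    }
}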

      Second, use TI's image processing function library, IMGLIB, wherever possible. IMGLIB is a library developed specifically for image and video processing and has been heavily optimized by TI; using it not only simplifies development but also maximizes the efficiency of the video encoding algorithm.
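
      For illustration only, here is a reference 8x8 SAD (sum of absolute differences) kernel of the kind used in motion estimation; it is exactly this sort of inner loop that an optimized IMGLIB routine would replace. The IMGLIB function name mentioned in the comment is quoted from memory as an assumption and should be checked against the library headers.

#include <stdint.h>
#include <stdlib.h>

/* Plain-C reference. In the optimized build this whole function would be
 * swapped for the corresponding IMGLIB kernel (an 8x8 SAD routine such as
 * IMG_sad_8x8 -- name and signature to be verified against the IMGLIB headers). */
uint32_t sad_8x8(const uint8_t *src, const uint8_t *ref, int pitch)
{
    uint32_t sad = 0;
    for (int y = 0; y < 8; y++) {
        for (int x = 0; x < 8; x++)
            sad += (uint32_t)abs((int)src[x] - (int)ref[x]);
        src += pitch;                    /* advance one image row */
        ref += pitch;
    }
    return sad;
}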

      Third, the special operation instructions built into the DSP (mainly simple arithmetic operations implemented in optimized assembly) can be used to raise code-execution efficiency. Finally, general code-optimization techniques help make execution more efficient: use parallel operations where possible, reduce conditional branches, and organize nested loops sensibly. For ease of computation, floating-point numbers can be converted to fixed point, and multiplications and divisions can be replaced with shifts, additions, and subtractions, for example:
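
      A small, self-contained sketch of these fixed-point conversions (all values are illustrative): a floating-point scale factor becomes a Q15 integer multiply, and division by a power of two becomes a right shift with rounding, shown here for non-negative inputs.

#include <stdint.h>
#include <stdio.h>

#define Q15(x)  ((int16_t)((x) * 32768.0 + 0.5))    /* float constant -> Q15 */

/* Floating-point form: coefficient scaled by 0.1767767 (illustrative factor). */
static int32_t scale_float(int32_t coeff)
{
    return (int32_t)(coeff * 0.1767767);
}

/* Fixed-point form: multiply by the Q15 constant, then shift back down.
 * Valid while coeff * 5793 stays within 32 bits. */
static int32_t scale_q15(int32_t coeff)
{
    const int16_t k = Q15(0.1767767);                /* = 5793 */
    return (coeff * k) >> 15;
}

/* Division by 8 replaced by a right shift with rounding (non-negative x). */
static int32_t div8(int32_t x)
{
    return (x + 4) >> 3;
}

int main(void)
{
    printf("float: %d  q15: %d  div8(100): %d\n",
           (int)scale_float(1000), (int)scale_q15(1000), (int)div8(100));
    return 0;
}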

      This system realizes the coordinated operation of a dual-core architecture, overcoming both the insufficient numerical computing capability of the traditional single-ARM-core scheme and the complex control code and poor usability of a single-DSP-core scheme. In actual use, the monitoring image quality and frame rate are noticeably better than with a single-ARM-core system, and the addition of the DSP core does not reduce the system's usability. Experimental data are given below to show the gap between the two schemes, so that readers can see the advantages of the improved scheme more directly.

      5 Test Results

      The test hardware platforms are a single-ARM9-core Samsung S3C2410 development board and a DSP+ARM9 dual-core TI OMAP5912 OSK development board, both running embedded Linux. The test sequences are foreman and news in QCIF (176×144) format, the encoding algorithm is H.263, and both platforms are tested at the same bit rate of 128 kbit/s. The test results are listed in Table 1.

      As Table 1 shows, the improved embedded video surveillance scheme proposed in this paper achieves considerably higher video encoding efficiency than the traditional scheme: it greatly raises the encoding frame rate of the surveillance video without increasing bandwidth requirements and basically meets the human eye's requirement for video fluency. An overall view of the running system is shown in Figure 4. The embedded video surveillance terminal is controlled and operated through the Telnet client Tera Term, and the monitoring picture is viewed through the IE browser built into Windows on the PC.

      6 Conclusion

      The OMAP platform has a distinctive dual-core structure. This article makes full use of the OMAP dual cores to improve the embedded network monitoring terminal and thereby raise its practicality. It also briefly describes software optimization methods for OMAP development, in the hope that they will serve as a reference for developers using OMAP.
