Design of embedded video monitoring and transmission system based on SAA6752HS

Publisher: skyshoucang    Latest update: 2012-04-06    Source: Ship Electronic Engineering (舰船电子工程)

Introduction

About 70% of the information humans receive is visual. Compared with voice and text, video is more intuitive and carries far more information, and its processing and transmission technologies are correspondingly more complex. As an application of video technology, video surveillance plays an important role in military, security, and other fields. Current video surveillance systems use two main technologies: analog and digital. Digital video technology not only reduces the distortion introduced by transmission, but also allows effective information to be analyzed, identified, and extracted from the video. Therefore, as digital technology advances, digital video surveillance will be the direction of future development.

A core technology of digital video surveillance is video compression. Video signals contain a huge amount of data; compressing the data before storage and transmission not only reduces storage space but also improves the transmission efficiency of the communication trunk, and it allows computers to process audio and video in real time so that high-quality programs can be played back. The MPEG-2 standard was formally announced by the Moving Picture Experts Group in 1995. Its purpose is to turn motion video and audio into a data form that computers can process, that can be stored on various media, sent and received over existing or future networks, and transmitted over existing or future broadcast channels. This article discusses a digital video surveillance system based on the MPEG-2 compression standard.

MPEG-2 video data compression principle

MPEG-2 image compression exploits two characteristics of video: spatial correlation and temporal correlation. Discrete cosine transform (DCT) coding reduces the spatial correlation of the data, while motion estimation and prediction reduce the temporal correlation between pixels of adjacent frames, achieving inter-frame compression. The basic MPEG-2 codec model is shown in Figure 1; the part above the dotted line performs video encoding and the part below it performs video decoding.

The preprocessor in the encoder filters noise from the original image and divides the image into macroblocks. For intra-frame coding, macroblocks are DCT-transformed, quantized, and variable-length coded before the compressed video data is output. For inter-frame coding, motion estimation and compensation reduce the temporal redundancy of the image: the image predictor predicts the current frame from a reference frame, and the difference between the two is DCT-transformed, quantized, and variable-length coded. Since this difference is small, the number of bits required for coding is greatly reduced. Inverse quantization and the IDCT reproduce the reference frame inside the encoder. In the decoder, variable-length decoding, inverse quantization, and the IDCT reconstruct intra-coded images, while the image predictor reconstructs inter-coded images from the reference frame and the motion vectors.



Figure 1 MPEG-2 codec model diagram
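
To make the intra-frame coding path concrete, the following C sketch (purely illustrative and not part of the original design; the SAA6752HS performs these steps in hardware) computes the 2-D DCT of one 8x8 block and applies a uniform quantizer. Variable-length coding of the resulting coefficients is omitted, and the quantizer step is a single value rather than the per-coefficient matrix MPEG-2 actually uses.

#include <math.h>

#define N  8
#define PI 3.14159265358979323846

/* Naive 8x8 forward DCT-II followed by uniform quantization.
 * Illustrative only: real encoders use fast DCTs and quantization matrices. */
void dct8x8_quantize(const unsigned char block[N][N], int coeff[N][N], int qstep)
{
    int u, v, x, y;
    for (u = 0; u < N; u++) {
        for (v = 0; v < N; v++) {
            double sum = 0.0;
            for (x = 0; x < N; x++)
                for (y = 0; y < N; y++)
                    sum += (block[x][y] - 128) *                 /* level shift */
                           cos((2 * x + 1) * u * PI / (2.0 * N)) *
                           cos((2 * y + 1) * v * PI / (2.0 * N));
            {
                double cu = (u == 0) ? 1.0 / sqrt(2.0) : 1.0;
                double cv = (v == 0) ? 1.0 / sqrt(2.0) : 1.0;
                double F  = 0.25 * cu * cv * sum;                /* DCT coefficient   */
                coeff[u][v] = (int)floor(F / qstep + 0.5);       /* uniform quantizer */
            }
        }
    }
}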


System Design

System hardware design

System overall structure

The system consists of two main parts: an analog video decoding and digital video encoding subsystem, and a video stream Ethernet transmission subsystem. There are also a power supply subsystem, a JTAG debugging subsystem, and so on, which are not described in detail here. The basic structure of the system is shown in Figure 2.



Figure 2 System Schematic Diagram


The system first converts the external scene into a PAL composite video signal through an analog camera; the analog video decoding and digital video encoding subsystem then compresses this PAL composite signal into an MPEG-2 video stream; finally, the DSP controller packetizes the stream and transmits it over the 100 Mbit/s Ethernet transmission subsystem.

Analog video decoding and digital video encoding subsystem

The task of the analog video decoding part is to sample and quantize the input PAL video signal and convert it into a standard digital video signal. We chose the Philips SAA7114, a high-performance single-chip NTSC/PAL/SECAM composite video decoder. It offers low power consumption, low cost, an excellent three-line adaptive comb filter that avoids the artifacts of conventional filters and preserves full-screen resolution, a flexible pixel port, a simple peripheral circuit, and easy programming.

In the system, the SAA7114 converts the decoded analog signal into the ITU-R BT.656 digital video format, which can be connected directly to the SAA6752HS, as shown in Figure 3. The X port of the SAA7114 serves as the data output port; its signals fall into the following categories:

a. Data signals XPD7-XPD0: output the decoded data values;

b. Clock signal XCLK: serves as the system reference clock;



Figure 3 Interface between SAA7114 and SAA6752

c. Line synchronization signal XRH and field synchronization signal XRV: valid when line and field synchronization are being output;

d. XDRI: controls whether the X port is used as an input or an output port;

e. RTS0: outputs the odd/even field flag.

The MPEG-2 video encoding part is the key part of the whole system. It compresses the ITU-R BT.656 data into the MPEG-2 transport stream (TS) defined in ISO/IEC 13818. This part uses the Philips SAA6752HS, a highly integrated, low-cost single-chip audio/video encoder that performs all of the video encoding, noise filtering, and motion estimation functions. In addition, the SAA6752HS is controlled through the I2C bus, so the main control processor only needs to send a small set of encoding parameters over I2C to start the encoding. The connection between the SAA6752HS and the TMS320VC5502 master DSP is shown in Figure 4.
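
The article does not list the actual command set; the fragment below is only a hedged sketch of what such an I2C configuration sequence might look like on the '5502 side. The slave address, the command indices, and the i2c_master_write() helper are assumptions for illustration, not the real SAA6752HS register map.

/* Hypothetical sketch of pushing encoding parameters to the SAA6752HS over I2C.
 * Address, command bytes, and the helper function are placeholders. */
#define SAA6752_I2C_ADDR  0x20            /* assumed 7-bit slave address */

/* Assumed board-support helper: writes 'len' bytes to a 7-bit I2C slave. */
extern int i2c_master_write(unsigned char addr, const unsigned char *buf, int len);

static int saa6752_set_bitrate_kbps(unsigned int kbps)
{
    unsigned char cmd[3];
    cmd[0] = 0x71;                        /* assumed "video bit rate" command index */
    cmd[1] = (unsigned char)(kbps >> 8);
    cmd[2] = (unsigned char)(kbps & 0xFF);
    return i2c_master_write(SAA6752_I2C_ADDR, cmd, 3);
}

static int saa6752_start_encoding(void)
{
    unsigned char cmd = 0x80;             /* assumed "start encoding" command */
    return i2c_master_write(SAA6752_I2C_ADDR, &cmd, 1);
}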

The functions of the output interface pins of SAA6752 are as follows:

a. PDO[7..0]: output data;
b. PDIOCLK: output reference clock, selectable as 9 MHz or 6.75 MHz;
c. PDOAV: indicates whether the output is video data or audio data;
d. PDOVAL: indicates whether the output data is valid;
e. PDOSYNC: indicates that the output byte is the first byte of a data packet.



Figure 4 Connection diagram of SAA6752HS and TMS320VC5502


It should be noted that the shift register, FIFO, counter (modulo 4), and counter 2 (modulo 47) in Figure 4 are implemented in an FPGA. The DSP controller we selected is TI's TMS320VC5502 ('5502), which has a core voltage of only 1.2 V, power consumption of only 0.05 mW/MIPS, and performance of 600 MIPS; it is particularly well suited to systems with high data rates, heavy computation, and tight power budgets. The TMS320VC5502 also integrates a rich set of peripherals, such as a 32-bit external memory interface (EMIF), which connects to the network controller in this system, and an on-chip I2C interface, which makes it easy to control the Philips video chips. In addition, it provides an external JTAG port, so the system can be debugged conveniently with a JTAG emulator and TI's DSP development environment, Code Composer Studio (CCS).

Video stream Ethernet transmission subsystem

The Ethernet interface transmits the packetized data to the remote host. We use SMSC's LAN91C111, a high-performance non-PCI 10M/100M Ethernet interface chip. The LAN91C111 uses a streaming I/O mode, which was originally designed for disk-to-processor interfaces and is more efficient and easier to use than ISA DMA. The LAN91C111 also has strong data handling capability, with a theoretical maximum throughput of 320 Mbit/s (40 MB/s). Because the LAN91C111 contains an on-chip MMU, the whole system achieves higher network performance with lower overhead.



Figure 5 LAN91C111 interface diagram


The network interface built around the LAN91C111 is shown in Figure 5. The LAN91C111 receives data from the CPU over its local bus interface and encapsulates it in the internal MAC controller; the physical layer transceiver (PHY) then drives the data through a pulse transformer, so that noise-induced errors are minimized and the data is transmitted correctly. The PHY can also receive packets from the network, and the MAC performs carrier sensing, collision detection, and CRC checking. In addition, the LAN91C111 contains a large buffer to improve overall efficiency, and its LED controller indicates the status of the network interface.

The host-side interface of the LAN91C111 is flexible: it supports asynchronous or synchronous transfers, and both non-burst and burst modes. To improve system throughput, the DSP transfers data to the network controller synchronously in burst mode through its EMIF, using direct 32-bit transfers; the data bypasses the BIU and is written directly into the network controller's FIFO. This is an effective way to achieve a high-speed connection between the TMS320VC5502 and the LAN91C111.
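
As an illustration of the transmit path just described, the sketch below outlines the usual LAN91C111 sequence: allocate a transmit buffer through the on-chip MMU, write the frame into the data FIFO over the EMIF, then enqueue it for the MAC. The wrapper functions are assumptions standing in for the board-support code and the datasheet register accesses, not a verbatim driver.

/* Outline of one transmit operation on the LAN91C111, using assumed wrappers
 * around the chip's memory-mapped registers. The wrappers stand in for the
 * datasheet sequence (MMU "allocate TX memory", write through the pointer and
 * data registers, MMU "enqueue"); the real offsets live in board support code. */
extern int  lan_alloc_tx_buffer(void);                 /* MMU ALLOCATE command  */
extern void lan_write_tx_fifo(const void *buf, int n); /* pointer + data regs   */
extern void lan_enqueue_tx(int pkt_no);                /* MMU ENQUEUE command   */

/* Hands one already-built Ethernet frame to the controller. */
static int nic_send_frame(const void *frame, int length)
{
    int pkt = lan_alloc_tx_buffer();    /* returns packet number, <0 on failure */
    if (pkt < 0)
        return -1;                      /* no buffer free; caller retries later */

    lan_write_tx_fifo(frame, length);   /* burst write over the '5502 EMIF (CE space) */
    lan_enqueue_tx(pkt);                /* MAC appends CRC and transmits the frame    */
    return 0;
}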

System software design

The software of this system consists of two parts, one is the hardware driver, and the other is the control logic design. The following is a detailed description.

System driver design

The CCS interface is simple, intuitive, and powerful, which greatly reduces development effort. The system driver consists of several modules: the TMS320VC5502 initialization program; the SAA7114, SAA6752HS, and LAN91C111 configuration programs; the digital video MPEG-2 stream reception, storage, and packetization program; the MPEG-2 stream Ethernet transmission program; and the system master program. Figure 6 shows the software block diagram.



Figure 6 Software system block diagram


The TMS320VC5502 initialization program sets up the '5502 stack, the run-status and interrupt-enable bits, and the DSP core frequency. The SAA7114 and SAA6752HS configuration programs set up the two chips through the '5502 I2C interface so that they perform analog decoding and digital encoding normally. The LAN91C111 initialization program sets the chip's registers so that it operates correctly. The MPEG-2 stream reception, storage, and packetization program uses the '5502 to assemble the data delivered by the SAA6752HS into network packets according to the RTP protocol, and the packets are then sent by the MPEG-2 stream Ethernet transmission program. DMA is used for the data transfers, which saves considerable time and improves system efficiency.
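
The article does not show the packetization routine; the following hedged sketch illustrates one common way to carry MPEG-2 TS packets in RTP (RFC 2250 assigns payload type 33 and an integral number of 188-byte TS packets per RTP payload). Here four TS packets per RTP packet are assumed, matching the four packets read per FIFO interrupt; sequence number, timestamp, and SSRC handling are simplified.

#include <string.h>

#define TS_PACKET_SIZE  188
#define TS_PER_RTP      4             /* assumed: 4 TS packets per RTP packet  */
#define RTP_HDR_SIZE    12
#define RTP_PT_MP2T     33            /* RFC 2250 payload type for MPEG-2 TS   */

/* Builds one RTP packet carrying TS_PER_RTP transport-stream packets.
 * 'out' must hold RTP_HDR_SIZE + TS_PER_RTP * TS_PACKET_SIZE bytes.
 * Returns the total packet length in bytes. */
static int rtp_pack_ts(unsigned char *out, const unsigned char *ts,
                       unsigned short seq, unsigned long timestamp,
                       unsigned long ssrc)
{
    out[0]  = 0x80;                   /* V=2, no padding, no extension, no CSRC */
    out[1]  = RTP_PT_MP2T;            /* marker bit 0, payload type 33          */
    out[2]  = (unsigned char)(seq >> 8);
    out[3]  = (unsigned char)(seq & 0xFF);
    out[4]  = (unsigned char)(timestamp >> 24);
    out[5]  = (unsigned char)(timestamp >> 16);
    out[6]  = (unsigned char)(timestamp >> 8);
    out[7]  = (unsigned char)(timestamp);
    out[8]  = (unsigned char)(ssrc >> 24);
    out[9]  = (unsigned char)(ssrc >> 16);
    out[10] = (unsigned char)(ssrc >> 8);
    out[11] = (unsigned char)(ssrc);

    memcpy(out + RTP_HDR_SIZE, ts, TS_PER_RTP * TS_PACKET_SIZE);
    return RTP_HDR_SIZE + TS_PER_RTP * TS_PACKET_SIZE;
}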

Timing is critical throughout the driver design. For example, configuring the SAA7114 uses the I2C bus, which has strict timing requirements: if data is sent immediately after the START condition without a suitable delay, the DSP may run ahead of the bus and data may be overwritten, causing program errors. In addition, since the system works in real time, interrupt handling is essential, and several issues deserve attention when writing the interrupt code:

First, if the external interrupt signal is unstable (for example, the waveform jitters or contains many glitches), the interrupt may be detected but the interrupt service routine may never be entered.

Second, single-stepping through the program can cause the emulator to miss the interrupt.

Third, before modifying the two interrupt vector pointers IVPD and IVPH, make sure that (a minimal sketch of the sequence follows this list):

a. All maskable interrupts are disabled (INTM = 1). This prevents a maskable interrupt from being taken while the interrupt vector pointer is being redirected to the new vector table.
b. Each hardware non-maskable interrupt has an interrupt vector and an interrupt service routine at both the old and the new IVPD locations. This prevents illegal instruction fetches if a hardware non-maskable interrupt occurs while IVPD is being modified.
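
A minimal sketch of this sequence is given below, assuming hypothetical helpers for the INTM bit and the IVPD write (on the C55x these steps are normally done in assembly or through the chip support library); only the ordering matters here.

/* Hedged sketch of the vector-table relocation described above. The helpers
 * are assumed to come from board-support / chip-support code; the essential
 * point is the ordering: mask interrupts first, keep non-maskable vectors
 * valid in both the old and the new table, then move IVPD. */
extern void cpu_set_intm(int on);               /* assumed: sets/clears INTM      */
extern void cpu_write_ivpd(unsigned int page);  /* assumed: writes the IVPD field */

void relocate_vector_table(unsigned int new_vector_page)
{
    cpu_set_intm(1);                 /* a. INTM = 1: no maskable interrupt may fire */

    /* b. the new table must already hold valid vectors (at least for the
     * hardware non-maskable interrupts) before IVPD is redirected to it.    */
    cpu_write_ivpd(new_vector_page);

    cpu_set_intm(0);                 /* re-enable maskable interrupts */
}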

FPGA logic design

The system uses an Altera ACEX EP1K30QC208-3Q FPGA, mainly to implement the interface between the SAA6752HS and the TMS320VC5502. As shown in Figure 4, the shift register expands the 8-bit data output by the SAA6752HS to a 32-bit width to improve data throughput; it is controlled by the clock signal PDIOCLK, the valid signal PDOVAL, and the audio/video flag PDOAV. The FIFO is a 32-bit buffer, and the TMS320VC5502 reads it using the chip-select signal CE3 and the read strobe ARE/SRE/SDRE.

When the SAA6752HS outputs a transport stream, counter 2 generates an interrupt after 4 data packets (188 bytes each) have been received, and the TMS320VC5502 then reads these packets from the FIFO, completes channel coding, and transmits them to the network. The FPGA logic is designed around this idea. To improve system stability and avoid cumulative errors, a dual-FIFO switching scheme is used. When the system starts, data is written only to FIFO1; once 4 packets have been written, an interrupt notifies the DSP to read them, and writing automatically switches to FIFO2 while the DSP reads FIFO1. Because the DSP reads much faster than the SAA6752HS writes, FIFO1 is empty well before 4 packets have been written to FIFO2, at which point another interrupt is generated; the DSP then reads FIFO2 while new data is written to FIFO1, and the two FIFOs alternate in this way.
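
A simplified DSP-side interrupt routine might look like the following sketch. The FIFO port address in the CE3 space, the interrupt wiring, and the hand-off function are placeholders; the "interrupt" keyword is the TI C55x compiler extension for interrupt service routines.

#define TS_PACKET_SIZE   188
#define PACKETS_PER_IRQ  4
#define READ_WORDS       ((PACKETS_PER_IRQ * TS_PACKET_SIZE) / 4)   /* 32-bit reads */

/* Assumed address of the FPGA FIFO data port mapped into the '5502 CE3 space. */
#define FIFO_DATA_PORT   ((volatile unsigned long *)0xC0000000)

/* Assumed hand-off to the RTP packetizer / Ethernet send task. */
extern void enqueue_for_transmission(const unsigned long *buf, unsigned int nbytes);

static unsigned long ts_buffer[READ_WORDS];

/* External interrupt service routine raised by counter 2 after every
 * 4 transport-stream packets. The active FIFO (FIFO1/FIFO2) is selected
 * inside the FPGA, so the DSP always reads from the same port address. */
interrupt void fifo_isr(void)
{
    int i;
    for (i = 0; i < READ_WORDS; i++)
        ts_buffer[i] = *FIFO_DATA_PORT;      /* 32-bit reads strobed on CE3 */

    enqueue_for_transmission(ts_buffer, PACKETS_PER_IRQ * TS_PACKET_SIZE);
}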



Figure 7 Timing diagram of output transport stream packets in DIO master mode


The SAA6752HS output uses DIO master mode; the timing of its output port is shown in Figure 7. Each data packet contains 188 bytes, which is characteristic of MPEG-2 transport stream packets. The FPGA logic was designed in MAX+PLUS II and runs correctly.

Conclusion

This paper has studied digital video compression and network transmission in some depth and made a preliminary, useful exploration of the design and implementation of an embedded network video server, yielding both experience and lessons. Due to time and other constraints, many aspects of the work still need improvement. Optimizing the performance of the non-PCI network interface, implementing the client software, and building a prototype embedded network video server are our next goals.
