A digital video surveillance system is centered on a computer or embedded system, is built on video processing technology, and complies with international standards for image data compression. It is a new type of monitoring system that integrates technologies such as image sensors, computer networks, automatic control, and artificial intelligence. Because it digitizes the video images, digital monitoring has many advantages over traditional analog monitoring: the fast processing power of the computer can be fully exploited to compress, analyze, store, and display the images; digital video processing improves image quality and monitoring efficiency, making the system easy to manage and maintain; and the modular structure is compact and easy to install, use, and maintain. Precisely because digital video surveillance offers advantages that analog technology cannot match, and because it fits the current trend toward digitization, networking, and intelligence, it is gradually replacing analog monitoring and is being applied across many industries. Embedded systems, with their small size, strong real-time performance, high cost-effectiveness, and good stability, are now widely used throughout society. The system designed by the author realizes real-time on-site monitoring with the WinCE operating system and an ARM hardware platform at its core, transmitting video images over a wireless network to a host, where they are analyzed, stored, and displayed.
1 System Design
This system mainly consists of three parts: operating system customization, video image acquisition, and wireless transmission of video images. The core chip of the system uses the S3C2410 embedded microprocessor based on the ARM920T core, and the software environment uses the Microsoft Windows CE operating system. The system first collects real-time video information from the scene through a USB camera and compresses it. Then, two wireless network cards are used to build a wireless local area network between the ARM development board and the host computer, so that the compressed video data can be transmitted to the host side, and the terminal user can view the remote video image through the streaming media player program on the host side.
The overall structure diagram of the video surveillance system is shown in Figure 1.
2 Customization of operating system
The core chip of the system hardware platform is the S3C2410 processor, with a maximum frequency of 203 MHz. The S3C2410 is a 32-bit microcontroller based on ARM's ARM920T core, manufactured by Samsung on a 0.18 μm process. Its high level of integration simplifies the hardware design of the application system and improves system reliability. The development board is also fitted with 4 MB of NOR Flash, 64 MB of NAND Flash, and 64 MB of DRAM.
The system uses the Microsoft Windows CE (abbreviated "WinCE") operating system. WinCE is a compact, efficient, and scalable 32-bit operating system suitable for a wide variety of embedded systems and products. It provides a multi-threaded, multi-tasking, deterministic, fully preemptive, priority-based real-time environment aimed specifically at hardware with limited resources. At the same time, its modular design allows system and application developers to customize it for different products: WinCE modules and components can be selected, combined, and configured to create a tailored version of the operating system.
In WinCE product development, there are two very important tasks: kernel customization and application development. Microsoft provides good development tools in both aspects, namely, the kernel customization tool Platform Builder (PB) and the application development tool Embedded Visual C++ (EVC).
In the system customization process, the relationship between the various parts is shown in Figure 2.
3 Video Image Acquisition
3.1 Camera Driver
The image acquisition module uses a Vimicro camera with a USB interface, the most widely available type on the market. The camera is inexpensive and produces good images, so it offers excellent value in this system. Before video can be captured, the video source must first be detected and configured. After startup, the WinCE operating system automatically detects whether a camera is connected. When customizing WinCE, this system modifies the operating system configuration and registry so that the camera driver ZC030x.dll is loaded automatically.
To have the system load the driver automatically, the driver is first copied to the \WINDOWS folder, and the camera driver information is then written to the registry.
Here, Prefix is the device file name prefix, Dll is the driver file name, and Order is the device load index. Once the hardware is configured and the operating system started, the driver is loaded automatically and the application can be run to acquire images.
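The registry listing itself did not survive in the original text. A typical WinCE driver registration of this kind might look like the following; the key path and Prefix value are illustrative assumptions, only the ZC030x.dll file name comes from the article:

```
[HKEY_LOCAL_MACHINE\Drivers\USB\LoadClients\Default\Default\Camera]
    "Prefix"="CAM"          ; device file name prefix (assumed)
    "Dll"="ZC030x.dll"      ; driver file name from the article
    "Order"=dword:0         ; device load index (assumed)
```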
3.2 Image acquisition program
The Vimicro camera is built around the Vimicro 301PLUS master chip, a high-performance image compression chip that outputs an MJPEG video stream. MJPEG (Motion JPEG) is a technology developed from still-image compression: it largely ignores the changes between frames in the video stream and compresses each frame independently, usually achieving a compression ratio of about 6:1. It is very robust to errors and can deliver high-definition video images, and the clarity and compressed frame rate of each video channel can be set flexibly.
This system directly obtains MJPEG video stream data from the camera driver. The image acquisition process is shown in Figure 3.
The main functions used in the image acquisition module are:
capInitCamera() is used to initialize the video device and obtain the number of currently available video devices.
capSetVideoFormat() sets the video format and resolution. The video format used in this system is RGB24, and the resolution is 320×240 pixels.
capGrabFrame() grabs one frame of image from the driver and stores it in the cache lpFrameBuffer.
capGetLastJpeg() converts the captured MJPEG image into JPEG format and sends it to the wireless transmission module.
capCloseCamera() closes the video device.
The video acquisition part also has related functions such as querying the video acquisition format, setting brightness, setting contrast, etc., which will not be described in detail.
4 Video transmission part
4.1 Configure wireless network card
The image transmission module is implemented mainly through wireless network cards with a USB interface. The card connects directly to the USB host interface integrated in the S3C2410, works in the 2.4 GHz ISM band, uses direct-sequence spread-spectrum communication, complies with the 802.11g protocol, and offers a transmission speed of up to 54 Mbps with an effective indoor range of 100 m, which meets the requirements of video transmission within a local area network. The system builds a wireless LAN between the development board and the host through these cards, achieving a point-to-point seamless connection over which users can transfer files, run video communication, and so on.
The wireless network card on the development board also needs a driver. When customizing the WinCE operating system, this system first copies the wireless network card's driver to the \WINDOWS folder and then writes the driver information to the registry. After WinCE starts, it automatically detects whether the wireless network card is connected and loads the driver, after which the card can be used from applications. For wireless transmission, be sure to place the development board and the host in the same IP network segment.
4.2 Transmitting video data
Real-time Transport Protocol (RTP) is a real-time streaming protocol that can match the bandwidth of the media signal to current network conditions and realize real-time transmission of streaming media data in one-to-one (unicast) or one-to-many (multicast) network environments. RTP usually carries multimedia data over UDP. The protocol consists of two closely related parts: the RTP data protocol and the RTCP control protocol.
To meet the system's requirements for sending and receiving data, the RTP protocol stack provided by the open-source library JRTPLIB is used. JRTPLIB is an object-oriented RTP library designed in full compliance with RFC 1889; with only a basic understanding of RTP, developers can build high-quality audio/video transmission programs. Ported to EVC with slight modifications, it runs on the WinCE-based ARM development board.
4.2.1 Establishing RTP data transmission and reception
The RTPSession class provided by JRTPLIB can implement RTP data transmission and reception. The initialization steps for sending and receiving RTP data are basically the same. First, set the session parameters such as timestamp, maximum RTP packet size, data timeout, and then set the base port (note that this port must be an even number, the default value is 5000). After setting these parameters, you can use the Create method of RTPSession to establish RTP data reception and transmission.
The data sending end uses RTPIPv4Address(intIP, PORT_DATA) to create a new address, and then uses AddDestination to add this address to the address list to be sent. Then, call the SendPacket function to send RTP data.
rtpSession.SendPacket(sendBuf, length, 26, true, 0UL);
After the data receiving end establishes an RTP session and adds the target address, it can receive data from that address. The OnRTPPacket callback is then invoked for each incoming RTP packet, and the payload data is obtained and used inside it. In OnRTPPacket, the function zc030x_OutPicture(&m_pDlg->frame) is called to decode the received data, and finally StretchDIBits() is called to display the video image in the current window.
void RTPAppSession::OnRTPPacket(RTPPacket *pack, const RTPTime &receivetime, const RTPAddress *senderaddress)
4.2.2 RTP control protocol - RTCP
RTCP (Real-time Transport Control Protocol) and RTP provide flow control and congestion control services together. During the RTP session, each participant periodically transmits RTCP packets. The RTCP packet contains statistical information such as the number of packets sent and the number of packets lost. Therefore, the server can use this information to dynamically change the transmission rate or even change the payload type. RTP and RTCP are used together to achieve the best transmission efficiency with effective feedback and minimal overhead.
JRTPLIB is a highly encapsulated RTP library. After calling the PollData() or SendPacket() method, JRTPLIB can automatically process the incoming RTCP datagrams and send RTCP datagrams when necessary, thereby ensuring the correctness of the entire RTP session process. When sending and receiving RTP data, you can also select the control information that needs to be sent for the current RTP session and set the control information by calling the methods provided by the RTPSession class.
Conclusion
This system is based on the S3C2410 platform and WinCE operating system. It collects real-time video information on site through a USB camera and compresses it. Then, two wireless network cards are used to build a wireless local area network between the development board and the host computer, and the wireless transmission of video data is realized using real-time streaming transmission. The whole system is stable and reliable, easy to install, and low-cost. It can be used in many fields such as remote monitoring, industrial control, video conferencing, and visual life.