1 Introduction
With the spread of wireless networks, the computing power of ARM processors and the capability of computer image processing have been steadily improving, and ARM-based video surveillance is now widely used in schools, communities, hotels, Internet cafes, hospitals, and other settings. Traditional video surveillance systems suffer from complex wiring, bulky equipment, limited intelligence, and poor use of hardware and software resources. ARM embedded systems, by contrast, are small, inexpensive, compact, and support wireless networking, so a wireless digital surveillance system built on the S3C6410's ARM11 core running Linux has broad application value.
2 Overall system design
2.1 Overall hardware design
This system uses Samsung's S3C6410, built around an ARM11 core, as its microprocessor. The chip measures only 48 mm × 67 mm yet integrates a rich set of peripherals: a 32-bit data bus and a 32-bit external address bus, SROM, SRAM, and NAND flash controllers, an interrupt controller with 64 interrupt sources, five 32-bit timers, four UARTs, four DMA controllers, STN and TFT LCD controllers, a watchdog timer, an IIS audio interface, an IIC-bus interface, two USB host ports and one USB device port, two SPI controllers, three SD card interfaces, a camera interface, a TV-out interface, an MFC (hardware codec) interface, and a touch-screen interface. Its main frequency reaches 800 MHz, and the expansion bus runs at up to 133 MHz. On this basis, the platform is extended as follows: a four-wire RS-232 serial port lets the development host communicate with the S3C6410 development platform; 1 GB of NAND flash stores the embedded Linux operating system, applications, and data; 128 MB of DDR memory holds running programs and the data captured by the camera; and a Wi-Fi module carries video data between the development platform and the server, enabling remote video monitoring over the wireless network.
2.2 Overall software design
The overall software structure comprises the bootloader, the operating system kernel, device drivers, and application-layer programs, as shown in Figure 1.
Figure 1 Overall software structure diagram
After the system is powered on, the bootloader runs first: it initializes the hardware devices, establishes the memory-space mapping table, and boots and loads the operating system kernel. The embedded Linux system then starts and loads the necessary drivers, such as the NAND flash, LCD, and Wi-Fi drivers.
3 Video Data Acquisition and Coding Design
3.1 Design of video data acquisition based on V4L2
In Linux, operations on video devices are performed through Video4Linux2, referred to as V4L2: applications operate on a video device through the interface functions V4L2 provides. The whole video data acquisition process is shown in Figure 2.
(1) Open the video device: int open(const char *pathname, int flags). A return value of -1 means the open failed; otherwise the return value is the file descriptor of the opened device.
(2) Obtain device information. Use the ioctl(cam_fp, VIDIOC_QUERYCAP, &cap) function to obtain the device file attribute parameters and store them in the cap structure, where cam_fp refers to the file descriptor of the opened video device.
(3) Select the video input. Set the input of the video device through the ioctl(cam_fp, VIDIOC_S_INPUT, &chan) function, where chan is the integer index of the desired input (the available inputs can be enumerated with VIDIOC_ENUMINPUT and the v4l2_input structure).
(4) Set the video frame format. Use the ioctl(cam_fp, VIDIOC_S_FMT, &fmt) function, where fmt is of type v4l2_format and specifies the width, height, pixel format, and so on of the video.
(5) Read video data. Use the read(cam_fp, g_yuv, YUV_SIZE) function to store the camera's frame data in g_yuv, where YUV_SIZE refers to the size of each frame of data.
(6) Close the video device. Use the close(cam_fp) function to close the video device.
Figure 2: Video data acquisition process flow chart.
3.2 H.264 encoding of video data
To speed up video encoding, this system adopts H.264 hardware encoding. Hardware encoding offloads the work from the CPU and computes quickly, which meets the real-time requirements of the video data.
The specific encoding process is shown in Figure 3.
(1) Create the H.264 encoding structure by calling SsbSipH264EncodeInit(width, height, frame_rate, bitrate, gop_num), where width and height give the image dimensions, frame_rate the frame rate, bitrate the bit rate, and gop_num the maximum number of non-key frames (B or P frames) between two key frames.
(2) Initialize the H.264 encoding structure by calling SsbSipH264EncodeExe(handle).
(3) Get the video input buffer address with SsbSipH264EncodeGetInBuf(handle, 0). The function returns the start address of the input buffer, which is stored in p_inbuf.
(4) Input the video data by calling memcpy(p_inbuf, yuv_buf, frame_size): p_inbuf receives the data to be encoded, yuv_buf holds the original video data, and frame_size is the data size.
(5) Encode the video data: perform H.264 encoding on the contents of p_inbuf by calling SsbSipH264EncodeExe(handle).
(6) Output the encoded data with SsbSipH264EncodeGetOutBuf(handle, size). The function returns the start address of the encoded image, and size receives the size of the encoded image.
(7) Close the hardware encoder by calling SsbSipH264EncodeDeInit(handle).
Figure 3 H264 encoding process block diagram.
4 Transmission and display of video data
4.1 Video Data Transmission Module Design
Modern wireless communication network standards mainly include 3G (third-generation mobile communication), Wi-Fi, Bluetooth, and ZigBee. See Table 1 for details.
Table 1 Basic comparison of commonly used wireless communication network standards
Because Wi-Fi offers a high transmission rate, support for many protocols, simple installation and configuration, and low cost, this system uses Wi-Fi as its wireless network standard.
4.1.1 Wi-Fi wireless network setup process
(1) Load the Wi-Fi module with the insmod command. Two files are needed here, helper_sd.bin and sd8686.bin, which can be downloaded from the Marvell official website.
(2) Search for Wi-Fi networks. First bring up the Wi-Fi network interface with the ifconfig eth1 up command, then search for networks with the iwlist eth1 scanning command.
(3) Set the IP address and subnet mask of eth1.
(4) Set ESSID. This is done through the iwconfig eth1 essid 402 command. ESSID is used to distinguish different networks.
(5) Set the password. This is done through the command iwconfig eth1 key s:your_key, where your_key is the login password.
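The five steps above can be collected into one bring-up script. This is a sketch: the firmware file locations, the 192.168.1.x addressing, and the key are placeholders, and on some board support packages the two .bin files are firmware images passed as parameters when insmod-ing the Wi-Fi driver module rather than loaded directly.

```shell
#!/bin/sh
# Wi-Fi bring-up for the SD8686 module, following steps (1)-(5).
# Paths, addresses, and the key are placeholders; adjust to the target board.

# (1) Load the Wi-Fi module firmware.
insmod helper_sd.bin
insmod sd8686.bin

# (2) Bring the interface up and scan for visible networks.
ifconfig eth1 up
iwlist eth1 scanning

# (3) Assign an IP address and subnet mask (example addressing).
ifconfig eth1 192.168.1.20 netmask 255.255.255.0

# (4) Join the access point with ESSID "402".
iwconfig eth1 essid 402

# (5) Set the login password (the s: prefix means an ASCII key).
iwconfig eth1 key s:your_key
```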
4.1.2 Video Data Transmission Based on RTP Protocol
RTP (Real-time Transport Protocol) is a network transport protocol commonly used to carry audio and video [5]. Together with RTCP it provides flow-control and congestion-control services; the two optimize transmission efficiency with effective feedback and minimal overhead, making them particularly suitable for real-time data. This system therefore uses RTP to transmit the video data.
This system uses the RTP protocol stack provided by the open-source JRTPLIB library. JRTPLIB encapsulates an implementation of RFC 3550, which simplifies transmitting the video data. Since the maximum network payload of this system is 1500 bytes, the upper limit of the RTP packet size is set to 1400 bytes; data larger than 1400 bytes is fragmented before sending. The specific transmission process is shown in Figures 4 and 5.
Figure 4: Sending end flow chart.
Figure 5: Receiver flow chart.
The main process of the sender is as follows:
(1) Create an RTP session and set the destination address. Call the Create method to get the RTP session instance, and then call the AddDestination method to set the destination IP and destination port number.
(2) To obtain data, call the Get_Data() function.
(3) Sending data is achieved through the SendPacket() method.
The main process of the receiving end is as follows:
(1) Create an RTP session. Call the Create method to create a session instance and set the port number when creating the session. The port number must be consistent with the port number of the sender.
(2) Receive RTP data. Call the PollData() method of the RTPSession class to receive data.
(3) Save the RTP datagram. A pointer array is created to hold pointers to the RTP datagrams; assigning the pointer of each newly received datagram to this array, rather than copying the payload, saves the time of a data copy.
(4) Determine whether reception is complete. If not, jump back to step (2); otherwise the receiving program exits.
4.2 Decoding and displaying video data
Since the received data is H.264-encoded, it must be decoded before it can be displayed. On the server side, FFmpeg, a free, open-source, cross-platform audio and video streaming solution, is used to decode the video data.
Decoding mainly involves FFmpeg's libavcodec, libswscale, and libavformat libraries. libavcodec contains all of FFmpeg's audio and video codecs. libswscale performs scaling and pixel-format conversion: the decoded data is in YUV420 format, while displaying it on a computer requires RGB, so this library converts YUV420 into RGB. libavformat contains parsers and generators (demuxers and muxers) for all common audio and video container formats.