Wireless Video Surveillance System Based on ARM11+Linux

Publisher: Qilin520 | Last updated: 2014-03-18 | Source: elecfans | Keywords: ARM11

  1 Introduction

  With the spread of wireless networks, the steadily increasing computing power of ARM processors, and continuing advances in computer image processing, ARM-based video surveillance is being applied more and more widely in schools, residential communities, hotels, Internet cafes, medical facilities, and other settings. Traditional video surveillance systems suffer from complex wiring, bulky equipment, limited intelligence, and poor use of hardware and software resources. An ARM embedded system, by contrast, is small, low-cost, compact, and supports wireless networking, so a wireless digital surveillance system built on the S3C6410's ARM11 core running Linux has broad application value.

  2 Overall system design

  2.1 Overall hardware design

  This system uses Samsung's S3C6410, built around an ARM11 core, as its microprocessor. The chip is compact, measuring only 48 mm x 67 mm, and integrates a rich set of peripherals: a 32-bit data bus and 32-bit external address bus, SROM, SRAM, and NAND flash controllers, an interrupt controller with 64 interrupt sources, five 32-bit timers, four UARTs, four DMA controllers, STN and TFT LCD controllers, a watchdog, an IIS audio interface, an IIC-bus interface, two USB host ports and one USB device port, two SPI controllers, three SD card interfaces, a camera interface, a TV-out interface, an MFC (multi-format codec) interface, and a touch-screen interface. Its core clock can reach 800 MHz, and the external bus runs at up to 133 MHz. On this basis, the development platform adds a four-wire RS-232 serial port for communication between the development host and the S3C6410 board; 1 GB of NAND flash to store the embedded Linux operating system, applications, and data; 128 MB of DDR memory to hold running programs and data captured by the camera; and a WI-FI module for video data transmission between the platform and the server, enabling remote video monitoring over the wireless network.

  2.2 Overall software design

  The overall software structure comprises the boot loader (Bootloader), the operating system kernel, device drivers, and application-layer programs, as shown in Figure 1.

  Figure 1: Overall software structure diagram

  After the system is powered on, the boot loader runs first. It initializes the hardware devices, sets up the memory-space mapping table, and boots and loads the operating system kernel; the embedded Linux system then starts and loads the necessary drivers, such as the NAND flash, LCD, and WI-FI drivers.

  3 Video Data Acquisition and Coding Design

  3.1 Design of video data acquisition based on V4L2

  In Linux, operations on video devices are performed through Video4Linux2 (V4L2 for short). Applications operate the video device through the interface functions V4L2 provides. The whole video data acquisition process is shown in Figure 2, and a minimal code sketch is given after the steps below.

  (1) Open the video device with int open (const char *pathname, int flags). A return value of -1 means the open failed; otherwise the return value is the file descriptor of the opened device.

  (2) Obtain device information. Use the ioctl (cam_fp, VIDIOC_QUERYCAP, &cap) function to obtain the device file attribute parameters and store them in the cap structure, where cam_fp refers to the file descriptor of the opened video device.

  (3) Select the video input mode. Set the input mode of the video device through the ioctl (cam_fp, VIDIOC_S_INPUT, &chan) function, where chan is of data structure type v4l2_input and specifies the video input mode.

  (4) Set the video frame format. Use the ioctl (cam_fp, VIDIOC_S_FMT, &fmt) function to set the video frame format, where fmt is of data structure type v4l2_format and specifies the width, height, pixel format, and so on of the video.

  (5) Read video data. Use the read(cam_fp, g_yuv, YUV_SIZE) function to store the camera's frame data in g_yuv, where YUV_SIZE refers to the size of each frame of data.

  (6) Close the video device. Use the close(cam_fp) function to close the video device.

  Figure 2: Video data acquisition process flow chart
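
  A minimal capture sketch following steps (1) to (6) is given below. The device path /dev/video0, the 320x240 resolution, and the YUV 4:2:0 pixel format are illustrative assumptions (the conclusion mentions an OV9650 camera at 320x240); the driver must also support read()-based I/O for step (5), and error handling is kept to a minimum.

#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main()
{
    const int WIDTH = 320, HEIGHT = 240;
    const int YUV_SIZE = WIDTH * HEIGHT * 3 / 2;         /* one YUV 4:2:0 frame */
    static unsigned char g_yuv[320 * 240 * 3 / 2];

    int cam_fp = open("/dev/video0", O_RDWR);            /* (1) open the device */
    if (cam_fp == -1) { perror("open"); return -1; }

    v4l2_capability cap;
    ioctl(cam_fp, VIDIOC_QUERYCAP, &cap);                 /* (2) query device capabilities */

    v4l2_input chan;
    memset(&chan, 0, sizeof(chan));
    chan.index = 0;                                       /* first video input */
    ioctl(cam_fp, VIDIOC_S_INPUT, &chan);                 /* (3) select input; the driver reads the index field */

    v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type                = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width       = WIDTH;
    fmt.fmt.pix.height      = HEIGHT;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUV420;
    ioctl(cam_fp, VIDIOC_S_FMT, &fmt);                    /* (4) set the frame format */

    read(cam_fp, g_yuv, YUV_SIZE);                        /* (5) read one frame into g_yuv */

    close(cam_fp);                                        /* (6) close the device */
    return 0;
}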

  3.2 H264 encoding of video data

  To improve the encoding speed of the video data, this system uses H264 hardware encoding. Hardware encoding does not occupy the CPU and is fast, so it meets the real-time requirements for the video data.

  The specific encoding process is shown in Figure 3; a code sketch follows the steps below.

  (1) Create the H264 encoding structure. This is done by calling the SsbSipH264EncodeInit (width, height, frame_rate, bitrate, gop_num) function, where width is the image width, height is the image height, frame_rate is the frame rate, bitrate is the target bit rate, and gop_num is the maximum number of frames (B or P frames) between two key frames.

  (2) Initialize the H264 encoding structure by calling the SsbSipH264EncodeExe (handle) function.

  (3) Get the video input address, which is implemented by the SsbSipH264EncodeGetInBuf (handle, 0) function. This function returns the start address of the video input buffer, which is stored in p_inbuf.

  (4) Input video data and call memcpy(p_inbuf, yuv_buf, frame_size) function. p_inbuf stores the data to be encoded, yuv_buf stores the original video data, and frame_size indicates the size of the data.

  (5) Encode the video data and perform H264 encoding on the content of p_inbuf. Call the SsbSipH264EncodeExe (handle) function to implement it.

  (6) Output the encoded data, SsbSipH264EncodeGetOutBuf (handle, size). This function returns the starting address of the encoded image, and size indicates the size of the encoded image.

  (7) Close the hardware encoder by calling the SsbSipH264EncodeDeInit (handle) function.

  Figure 3: H264 encoding process diagram
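
  The encoding steps map onto the Samsung MFC user-space library roughly as follows. This is only a sketch: the SsbSipH264Encode* calls are the ones named in steps (1) to (7), but their exact prototypes come from the SDK header (assumed here to be SsbSipH264Encode.h) and may differ between SDK releases; the 512 kbps bit rate and GOP length of 25 are illustrative values.

extern "C" {
#include "SsbSipH264Encode.h"   /* Samsung MFC H.264 encoder API from the S3C6410 SDK (assumed header name) */
}
#include <cstring>

int main()
{
    const int width = 320, height = 240;                  /* matches the OV9650 capture size */
    const int frame_rate = 25, bitrate = 512, gop_num = 25;  /* bitrate in kbps, illustrative */
    static unsigned char yuv_buf[320 * 240 * 3 / 2];      /* one YUV420 frame from the V4L2 capture code */

    /* (1) create the encoder instance */
    void *handle = SsbSipH264EncodeInit(width, height, frame_rate, bitrate, gop_num);

    /* (2) the first EncodeExe call initializes the encoder (stream headers) */
    SsbSipH264EncodeExe(handle);

    /* (3) get the address of the encoder's input buffer */
    void *p_inbuf = SsbSipH264EncodeGetInBuf(handle, 0);

    /* (4) copy one captured frame into the input buffer */
    memcpy(p_inbuf, yuv_buf, sizeof(yuv_buf));

    /* (5) run the hardware encoder on the contents of p_inbuf */
    SsbSipH264EncodeExe(handle);

    /* (6) fetch the encoded bitstream; in the SDK version assumed here the size is returned through the second argument */
    long size = 0;
    unsigned char *out = (unsigned char *)SsbSipH264EncodeGetOutBuf(handle, &size);
    (void)out;                                            /* out/size would be handed to the RTP sending thread */

    /* (7) release the hardware encoder */
    SsbSipH264EncodeDeInit(handle);
    return 0;
}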

  4 Transmission and display of video data

  4.1 Video Data Transmission Module Design

  Modern wireless communication network standards mainly include 3G (third generation mobile communication), WI-FI, Bluetooth, Zigbee, etc. See Table 1 for details.

  Table 1 Basic comparison of commonly used wireless communication network standards

  

  Since WI-FI has the advantages of high transmission rate, multiple supported protocols, simple installation and setting, and low cost, the wireless network standard used in this system is WI-FI.

  4.1.1 WI-FI wireless network construction process

  (1) Load the WI-FI module. Use the insmod command to load it. Two files are needed here: helper_sd.bin and sd8686.bin, which can be downloaded from the Marvell website.

  (2) Search for WI-FI networks. First bring up the WI-FI network interface with the ifconfig eth1 up command, then search for networks with the iwlist eth1 scanning command.

  (3) Set the IP address and subnet mask of eth1.

  (4) Set ESSID. This is done through the iwconfig eth1 essid 402 command. ESSID is used to distinguish different networks.

  (5) Set the password. This is done through the command iwconfig eth1 key s:your_key, where your_key is the login password.

  4.1.2 Video Data Transmission Based on RTP Protocol

  RTP (Real-time Transport Protocol) is a network transport protocol commonly used for carrying audio and video [5]. RTCP works together with RTP to provide flow-control and congestion-control services; with effective feedback and minimal overhead they optimize transmission efficiency, which makes RTP particularly suitable for transmitting real-time data. This protocol is therefore used to transmit the video data.

  This system uses the RTP protocol stack provided by the open-source library JRTPLIB. Because JRTPLIB encapsulates an implementation of RFC 3550, it simplifies the transmission of the video data. Since the maximum network payload of this system is 1500 bytes, the upper limit of the RTP packet size is set to 1400 bytes; if the data to be sent is larger than 1400 bytes, it is split into multiple packets before sending. The specific transmission process is shown in Figures 4 and 5.

  Figure 4: Sending-end flow chart

  Figure 5: Receiving-end flow chart

  The main process of the sending end is as follows (a code sketch follows the steps):

  (1) Create an RTP session and set the destination address. Call the Create method to obtain the RTP session instance, then call the AddDestination method to set the destination IP address and port number.

  (2) To obtain data, call the Get_Data() function.

  (3) Sending data is achieved through the SendPacket() method.
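
  A minimal sending-end sketch based on the JRTPLIB 3.x API (headers installed under jrtplib3/) is shown below; class and method names differ between library versions. The destination address 192.168.1.100, the ports 9000/8000, the payload type 96, and the timestamp increment are illustrative assumptions, not values from the original text; the dummy frame buffer stands in for the Get_Data() step.

#include <jrtplib3/rtpsession.h>
#include <jrtplib3/rtpsessionparams.h>
#include <jrtplib3/rtpudpv4transmitter.h>
#include <jrtplib3/rtpipv4address.h>
#include <arpa/inet.h>
#include <cstddef>
#include <cstdint>

using namespace jrtplib;   // recent JRTPLIB versions place their classes in this namespace

int main()
{
    RTPSession sess;

    /* (1) create the session and set the destination address and port */
    RTPSessionParams sessparams;
    sessparams.SetOwnTimestampUnit(1.0 / 90000.0);        /* 90 kHz video clock */
    RTPUDPv4TransmissionParams transparams;
    transparams.SetPortbase(9000);
    if (sess.Create(sessparams, &transparams) < 0)
        return -1;
    uint32_t destip = ntohl(inet_addr("192.168.1.100"));  /* server address (illustrative) */
    sess.AddDestination(RTPIPv4Address(destip, 8000));

    /* (2) obtain one encoded frame; a dummy buffer stands in for Get_Data() */
    unsigned char frame[16 * 1024] = {0};
    size_t frame_len = sizeof(frame);

    /* (3) send it, split into chunks of at most 1400 bytes */
    const size_t MAX_RTP_PAYLOAD = 1400;
    for (size_t off = 0; off < frame_len; off += MAX_RTP_PAYLOAD) {
        size_t len = frame_len - off;
        if (len > MAX_RTP_PAYLOAD)
            len = MAX_RTP_PAYLOAD;
        bool last = (off + len == frame_len);              /* marker bit on the last chunk */
        sess.SendPacket(frame + off, len, 96, last,        /* dynamic payload type 96 */
                        last ? 3600 : 0);                  /* advance the timestamp once per frame */
    }

    sess.Destroy();
    return 0;
}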

  The main process of the receiving end is as follows (a code sketch follows the steps):

  (1) Create an RTP session. Call the Create method to create a session instance and set the port number when creating the session. The port number should be consistent with the port number of the sender.

  (2) Receive RTP data. Call the PollData() method of the RTPSession class to receive data.

  (3) Save the RTP datagrams. A pointer array is created to hold pointers to the received RTP datagrams; each newly received datagram's pointer is simply placed in this array, which saves the time of copying the data.

  (4) Determine whether reception is complete. If not, return to step (2); otherwise, the receiving program exits.
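
  A matching receiving-end sketch is given below. The text names a PollData() method, which belongs to older JRTPLIB releases; the 3.x API used in this sketch polls with Poll()/BeginDataAccess()/GetNextPacket() instead. The port number and the stop condition are illustrative assumptions.

#include <jrtplib3/rtpsession.h>
#include <jrtplib3/rtpsessionparams.h>
#include <jrtplib3/rtpudpv4transmitter.h>
#include <jrtplib3/rtppacket.h>
#include <jrtplib3/rtptimeutilities.h>
#include <vector>

using namespace jrtplib;

int main()
{
    RTPSession sess;

    /* (1) create the session on the port number agreed with the sender */
    RTPSessionParams sessparams;
    sessparams.SetOwnTimestampUnit(1.0 / 90000.0);
    RTPUDPv4TransmissionParams transparams;
    transparams.SetPortbase(8000);
    if (sess.Create(sessparams, &transparams) < 0)
        return -1;

    /* (3) pointer array: store packet pointers instead of copying payloads */
    std::vector<RTPPacket *> packets;

    bool done = false;
    while (!done) {                                        /* (4) loop until reception ends */
        sess.Poll();                                       /* (2) fetch newly arrived RTP data
                                                              (needed when JRTPLIB's background
                                                              poll thread is not used) */
        sess.BeginDataAccess();
        if (sess.GotoFirstSourceWithData()) {
            do {
                RTPPacket *pack;
                while ((pack = sess.GetNextPacket()) != 0) {
                    packets.push_back(pack);               /* handed to the decoding thread */
                    if (pack->HasMarker())                 /* last packet of a frame */
                        done = true;                       /* stop condition is illustrative */
                }
            } while (sess.GotoNextSourceWithData());
        }
        sess.EndDataAccess();
        RTPTime::Wait(RTPTime(0, 10000));                  /* avoid busy-waiting */
    }

    for (size_t i = 0; i < packets.size(); i++)
        sess.DeletePacket(packets[i]);                     /* release packets after decoding */
    sess.Destroy();
    return 0;
}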

  4.2 Decoding and displaying video data

  Since the received data is H264-encoded, it must be decoded before it can be displayed. On the server side, FFmpeg is used to decode the video data. FFmpeg is a free, open-source, cross-platform audio and video streaming solution.

  Decoding mainly involves FFmpeg's libavcodec, libswscale, and libavformat libraries. libavcodec contains all of FFmpeg's audio and video codecs. libswscale is a format-conversion library: the decoded data is in YUV420 format, while RGB format is required to display it on a computer, so this library converts YUV420 into RGB. libavformat contains parsers and generators for all common audio and video formats.

  4.2.1 Initialize the decoding thread

  (1) Register all file formats and codecs and call av_register_all() to complete the registration.

  (2) Set up the AVFormatContext structure. This is FFmpeg's main structure for implementing input and output and for storing related data during format conversion. It is filled in by the av_open_input_file function.

  (3) Retrieve the video stream information by calling the av_find_stream_info(pFormatCtx) function, which fills pFormatCtx->streams with the correct video stream information. pFormatCtx is of type AVFormatContext.

  (4) Get the codec context: pCodecCtx = pFormatCtx->streams[videoStream]->codec. The pCodecCtx pointer points to all the information about the codec used by the stream.

  (5) Open the decoder. First find the corresponding decoder through the avcodec_find_decoder function, and then call the avcodec_open function to open the decoder.

  (6) Allocate memory to store the decoded data by calling the avcodec_alloc_frame function. Since the decoded data is in YUV420 format and must also be converted into RGB format, avcodec_alloc_frame is called again to allocate a frame for the RGB data.

  (7) Allocate memory to store the picture data. When H264 is decoded, a P frame needs to reference the preceding key frame or P frame, and a B frame needs to reference both the preceding and following frames, so the picture data must be kept in memory. First use avpicture_get_size to obtain the required size, then call the av_malloc function to allocate the memory.

  (8) Combine the frame and the newly allocated memory by calling the avpicture_fill function.

  (9) Create a format conversion context: img_convert_ctx = sws_getContext (src_w, src_h, src_pix_fmt, dst_w, dst_h, PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL). Here src_w is the width of the source image, src_h the height of the source image, src_pix_fmt the format of the source image, dst_w the width of the destination image, dst_h the height of the destination image, and PIX_FMT_RGB24 the format of the destination image.
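
  The initialization steps above can be pulled together as in the sketch below. It uses the legacy FFmpeg API named in the text (av_register_all, avcodec_alloc_frame, avpicture_*, PIX_FMT_*), which has been deprecated or removed in newer FFmpeg releases. Because the video data in this system arrives over RTP rather than from a container file, the sketch opens the H.264 decoder directly with avcodec_find_decoder; this is a simplification relative to steps (2) to (4), which obtain the same codec context from a demuxed stream.

extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
#include <libavutil/mem.h>
}

/* Allocates the codec context, the YUV and RGB frames, the RGB pixel buffer
   and the YUV420-to-RGB24 conversion context (steps (1) and (5) to (9)). */
bool init_h264_decoder(int width, int height,
                       AVCodecContext *&pCodecCtx,
                       AVFrame *&pFrame, AVFrame *&pFrameRGB,
                       SwsContext *&img_convert_ctx)
{
    av_register_all();                                        /* (1) register formats and codecs */

    AVCodec *pCodec = avcodec_find_decoder(CODEC_ID_H264);    /* (5) find the H.264 decoder */
    pCodecCtx = avcodec_alloc_context();
    if (!pCodec || avcodec_open(pCodecCtx, pCodec) < 0)       /* (5) open the decoder */
        return false;

    pFrame    = avcodec_alloc_frame();                        /* (6) holds the decoded YUV420 picture */
    pFrameRGB = avcodec_alloc_frame();                        /* (6) holds the converted RGB picture */

    int num_bytes = avpicture_get_size(PIX_FMT_RGB24,         /* (7) size of one RGB picture */
                                       width, height);
    uint8_t *rgb_buffer = (uint8_t *)av_malloc(num_bytes);

    avpicture_fill((AVPicture *)pFrameRGB, rgb_buffer,        /* (8) bind the RGB frame to the memory */
                   PIX_FMT_RGB24, width, height);

    img_convert_ctx = sws_getContext(width, height, PIX_FMT_YUV420P,   /* (9) YUV420 -> RGB24 */
                                     width, height, PIX_FMT_RGB24,
                                     SWS_BICUBIC, NULL, NULL, NULL);
    return img_convert_ctx != NULL;
}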

  4.2.2 Decode the data in H264 format

  (1) Obtain one frame of data to be decoded. Since the receiving thread has already stored the received data in a pointer array, the decoding thread only needs to fetch the data from that pointer array.

  (2) Decode the data. Call the decoding function avcodec_decode_video (pCodecCtx, pFrame, &finished, encodedData, size) to decode the video data. The parameter pCodecCtx is the pointer to the video stream's codec context obtained earlier; pFrame points to where the decoded picture is stored; finished indicates whether a complete frame has been decoded; encodedData is the input buffer pointer, pointing to the raw data to be decoded; and size is the size of the input buffer.

  (3) Convert the decoded video data from YUV420 format into RGB format by calling the sws_scale() function.
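
  A matching per-frame decoding routine, again a sketch based on the legacy calls named above (avcodec_decode_video was later replaced by avcodec_decode_video2 and then by the send/receive API), might look like this; the pointers are the ones set up by the initialization sketch in 4.2.1.

extern "C" {
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
}

/* Decodes one received H.264 access unit and converts it to RGB24.
   encodedData and size come from the pointer array filled by the receiving
   thread (step (1)); the other pointers were set up during initialization. */
bool decode_frame(AVCodecContext *pCodecCtx, AVFrame *pFrame, AVFrame *pFrameRGB,
                  SwsContext *img_convert_ctx, int height,
                  uint8_t *encodedData, int size)
{
    int finished = 0;

    /* (2) decode the compressed data into pFrame (YUV420) */
    avcodec_decode_video(pCodecCtx, pFrame, &finished, encodedData, size);
    if (!finished)
        return false;                        /* no complete picture decoded yet */

    /* (3) convert the decoded picture from YUV420 to RGB24 */
    sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize,
              0, height, pFrameRGB->data, pFrameRGB->linesize);
    return true;
}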

  4.2.3 Video Data Display

  This system uses QImage under Qt to display the video data. Because QImage gives access to individual pixels, the image is saved when the previous frame is displayed; when the next frame is displayed, any pixel whose value is the same as in the previous frame does not need to be modified, which saves a great deal of time. In other words, only what has changed is updated. The specific steps of the display process are as follows (a code sketch follows the steps):

  (1) Obtain the decoded video data in RGB format.

  (2) Loop to obtain the R component, G component, and B component of the video data.

  (3) Determine whether the pixel value at this point is the same as the pixel value at the corresponding position in the previous frame. If it is, return to step (2); otherwise, save the pixel value.

  (4) For each pixel, construct its color value from the obtained R, G, and B components by calling the qRgb (R, G, B) function.

  (5) Set the pixel value of the corresponding point: first create a QImage object, then call its setPixel(x, y, rgb) method, where x is the x coordinate in the image, y is the y coordinate, and rgb is the color value of that point.

  (6) Display the image by calling the update() method, which triggers a paint event. The image-display code is therefore written in the paint-event handler, which draws the newly generated QImage object by calling the drawImage() method.
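
  A sketch of steps (1) to (6) as a small QWidget subclass is shown below; the class name VideoWidget, the member names, and the packed RGB24 frame layout are illustrative assumptions. Note that Qt's pixel-value constructor is spelled qRgb().

#include <QWidget>
#include <QImage>
#include <QPainter>
#include <QPaintEvent>
#include <vector>

class VideoWidget : public QWidget {
public:
    VideoWidget(int w, int h, QWidget *parent = 0)
        : QWidget(parent), width_(w), height_(h),
          image_(w, h, QImage::Format_RGB32),
          prev_(w * h * 3, 0) {}

    /* (1) rgb points to one decoded RGB24 frame (width * height * 3 bytes) */
    void showFrame(const unsigned char *rgb)
    {
        for (int y = 0; y < height_; ++y) {
            for (int x = 0; x < width_; ++x) {                 /* (2) loop over the R, G, B components */
                int i = (y * width_ + x) * 3;
                unsigned char r = rgb[i], g = rgb[i + 1], b = rgb[i + 2];
                if (r == prev_[i] && g == prev_[i + 1] && b == prev_[i + 2])
                    continue;                                  /* (3) pixel unchanged since last frame */
                prev_[i] = r; prev_[i + 1] = g; prev_[i + 2] = b;
                image_.setPixel(x, y, qRgb(r, g, b));          /* (4)(5) write the new pixel value */
            }
        }
        update();                                              /* (6) trigger a paint event */
    }

protected:
    void paintEvent(QPaintEvent *)
    {
        QPainter painter(this);
        painter.drawImage(0, 0, image_);                       /* (6) draw the image */
    }

private:
    int width_, height_;
    QImage image_;
    std::vector<unsigned char> prev_;
};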

  5 Conclusion

  To reduce the amount of data, this system captures video images in the YUV420 sampling format. The video data is encoded with the H264 hardware encoder, which greatly increases the encoding speed. For transmission over the wireless network, the encoded data is split into packets before sending to address the packet-loss problem, which reduces the packet loss rate. In tests, the system captured 320x240 images from the OV9650 camera; after H264 hardware encoding, each encoded frame is about 5 KB, which reduces the data transmission volume, and the hardware encoder can encode 25 frames per second, meeting the requirement for real-time video encoding. Since the WI-FI transmission rate is generally around 11-54 Mbps, the wireless network can satisfy the needs of real-time video transmission. The system thus provides a digital wireless video surveillance platform with good real-time performance, low cost, and low power consumption. Various applications can be built on this platform, such as real-time road-condition monitoring, face recognition, and warehouse alarms, so the system has practical value.
