Introduction
Video surveillance systems are now widely used in many fields, and digitization and networking have become their main direction of development. This design uses the OMAP5912 processor [1] to design and implement a remote video surveillance system based on the B/S (browser/server) mode, which both overcomes the shortcomings of traditional analog video surveillance systems and compensates for the limitations of single-core processors in video encoding.
OMAP5912 is a dual-core processor composed of an ARM926EJ-S MPU core and a TMS320C55x DSP core. The ARM926 core meets the processing needs of control and interfacing and supports a wide range of operating systems, while the C55x DSP provides real-time multimedia processing for low-power applications. When OMAP5912 is used in a video surveillance system, the ARM core can therefore handle the human-machine interface, control and communication, while the DSP core performs video encoding, forming a high-speed, clear, low-power video surveillance system with good human-machine interaction.
Overall system structure
The overall structure of the system is shown in Figure 1. The hardware consists of the OMAP5912 circuit board and a camera; the software comprises the MontaVista Linux operating system, the camera driver, the H.264 encoder and the network communication program; a PC running the IE browser serves as the monitoring end. On the server side, the ARM core of OMAP5912 starts the camera for video acquisition through the driver and passes the acquired video to the DSP core via the DSP/BIOS Bridge. The DSP core encodes the video with the optimized H.264 encoder and returns the encoded stream to the ARM core, which exchanges data with the monitoring end through the network communication program. The monitoring end decodes and plays the video, and the user can control the camera and set parameters through the IE browser.
System hardware design
The system hardware design centers on the OMAP5912 circuit board. In this design, the power management chip is the TPS65010; the DDR memory is a K4X56163PE chip; the NOR flash consists of two MT28F128J3FS-12 chips; the audio CODEC is a TLV320AIC23; and the Ethernet interface chip is a LAN91C96. In addition, a USB interface, a UART interface, audio input and output interfaces, a JTAG/Multi-ICE debugging interface and four expansion interfaces are provided. The block diagram of the OMAP5912 circuit board is shown in Figure 2.
Figure 2 OMAP5912 circuit board schematic
System software design
The function of the system is to collect video and transmit it remotely. The software design mainly includes the construction of the software platform, the implementation of the camera driver, the implementation of video collection and video encoding, the construction of the embedded WEB server, and the implementation of video network transmission.
1) Establishment of development platform
Before developing an application, a software platform must first be built on the OMAP5912 circuit board. The main steps are as follows:
(1) Install MontaVista Linux embedded operating system on PC.
(2) Port U-Boot to the target board.
(3) Configure Linux kernel and add the following two modules:
①Multimedia devices→<*>Video For Linux→[*]V4L information in proc filesystem;
②USB Support→USB Multimedia devices→<*>USB OV511 Camera support;
Modify a few parameters, the most important being the cross-compilation settings in the top-level Makefile:
ARCH := arm
CROSS_COMPILE = arm_v4t_le-
Finally, execute the following command to generate kernel image file:
#make dep #Create kernel dependency
#make clean #Clear intermediate files
#make uImage #Create kernel image file
(4) Download the kernel image file using TFTP
(5) Mount the root file system
During application development, the root file system on the Linux host is usually mounted over the network via NFS, so that the file system image does not have to be re-burned every time something changes. This requires configuring the host accordingly and starting the NFS service. Add the following line to the /etc/exports file of the Linux host:
/home/luowei/montavista/filesys *(rw,no_root_squash,no_all_squash,sync)
and run the following command to make the settings effective:
#exportfs -a
#service nfs restart
Note: /home/luowei/montavista/filesys is the root file system on the host, which can be modified according to actual conditions.
(6) System test
Create a hello.c file in /home/luowei/montavista/filesys/home and compile it into a target board executable file hello using the following command:
/opt/montavista/previewkit/arm/v4t_le/bin/arm_v4t_le-gcc -o hello hello.c
Enter the same directory on the target board and execute ./hello. If it runs correctly, it means that the system is successfully built.
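For reference, any trivial program will do for this test; a minimal hello.c might look like the following sketch (the printed message is arbitrary):
#include <stdio.h>

int main(void)
{
    /* Any output proves that the cross toolchain and the NFS root file system work. */
    printf("hello from the OMAP5912 target\n");
    return 0;
}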
2) Video acquisition and encoding
(1) Camera driver design
The role of the driver is to map the device to a special device file, which user programs can then operate on like any other file [2]. The system's camera driver includes the camera open module Camera_Open(), the camera control modules (interrupt request, camera initialization and startup, camera register setting, DMA request and startup) and the camera close module Camera_Release(). These entry points are then registered in a struct file_operations so that the kernel's Video4Linux layer can call them. Since Linux already ships with an OV511 driver, the design uses an OV3000 webcam built around the OV511 chip.
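As an illustration of how such entry points are registered, a minimal sketch is given below; it assumes a 2.6-era kernel, and every identifier except Camera_Open() and Camera_Release() is an illustrative placeholder rather than the actual driver source.
#include <linux/fs.h>        /* struct file_operations */
#include <linux/module.h>

static struct file_operations camera_fops = {
    .owner   = THIS_MODULE,
    .open    = Camera_Open,      /* open module: IRQ request, camera initialization and startup, DMA setup */
    .release = Camera_Release,   /* close module: stop capture and release resources */
    .read    = camera_read,      /* placeholder: deliver captured frames to user space */
    .ioctl   = camera_ioctl,     /* placeholder: handle Video4Linux commands (VIDIOCGCAP, VIDIOCSPICT, ...) */
    .mmap    = camera_mmap,      /* placeholder: map the capture buffer for mmap()-based acquisition */
};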
(2) Video acquisition
The design uses the API functions provided by the Video4Linux module [5] for video acquisition. The main functions include:
① dev = open(Camera_Open, O_RDWR); open the video capture device.
② ioctl(dev, VIDIOCGCAP, &vid_caps); obtain the capabilities of the video device.
③ ioctl(dev, VIDIOCGCHAN, &vid_chnl); obtain the parameters of the camera channel.
④ ioctl(dev, VIDIOCGFBUF, &vid_buf); obtain the properties of the frame buffer.
⑤ ioctl(dev, VIDIOCGPICT, &vid_pic); obtain the current image acquisition settings.
⑥ ioctl(dev, VIDIOCSPICT, &vid_pic); set the image acquisition parameters, including color depth, palette type, brightness, contrast, etc.
⑦ ioctl(dev, VIDIOCSWIN, &vid_win); set the capture window parameters.
⑧ fwrite(m_buf, 1, 230400, p); store the acquired data.
⑨ ioctl(dev, VIDIOCMCAPTURE, &vid_mmap); start capturing a frame.
The design captures video frames by memory mapping with mmap() [3]: first the ioctl() function is used to obtain the frame information of the camera's capture buffer, then the settings in the video_mmap structure are adjusted, and finally mmap() maps the device file corresponding to the camera into the process's memory to complete the acquisition.
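A minimal sketch of this mmap()-based acquisition flow against the V4L1 API is shown below; the device node /dev/video0, the YUV420P palette and the omitted error handling are assumptions that would have to match the actual driver and the encoder's input format.
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/videodev.h>   /* V4L1 header used by the OV511 driver */

int main(void)
{
    struct video_mbuf vid_mbuf;
    struct video_mmap vid_mmap;
    unsigned char *m_buf;
    int dev, frame = 0;

    dev = open("/dev/video0", O_RDWR);            /* assumed device node for the camera */
    if (dev < 0) { perror("open"); return 1; }

    ioctl(dev, VIDIOCGMBUF, &vid_mbuf);           /* size and offsets of the driver's capture buffer */
    m_buf = mmap(0, vid_mbuf.size, PROT_READ | PROT_WRITE, MAP_SHARED, dev, 0);

    vid_mmap.frame  = frame;                      /* which buffer slot to fill */
    vid_mmap.width  = 320;                        /* capture resolution used in this system */
    vid_mmap.height = 240;
    vid_mmap.format = VIDEO_PALETTE_YUV420P;      /* assumed palette, chosen to match the encoder input */

    ioctl(dev, VIDIOCMCAPTURE, &vid_mmap);        /* start capturing one frame */
    ioctl(dev, VIDIOCSYNC, &frame);               /* wait until the frame is ready */
    /* the frame now lies at m_buf + vid_mbuf.offsets[frame] and can be handed to the encoder */

    munmap(m_buf, vid_mbuf.size);
    close(dev);
    return 0;
}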
(3) Video encoding
The design uses the DSP core of OMAP5912 for video encoding, which can give full play to the dual-core advantages of OMAP5912. In selecting the encoder, considering that H.264 offers a great improvement in compression performance over previous video coding standards (such as H.263 and MPEG-4), this design chooses the x264-20060612 version of the H.264 encoder, which is suitable for embedded systems. Considering the characteristics of the surveillance video scene, the following encoding scheme is selected:
①H.264 baseline, without B frame encoding and CABAC;
②Select 16 as the search range;
③Select 32 as the quantization parameter;
④1/2 pixel interpolation;
⑤Only use 1 reference frame;
⑥When encoding P frame macroblocks, only 16×16, 16×8, 8×16, 8×8, Intra16×16 are used.
After a series of optimizations, the H.264 encoder can be used in this system; its workflow is shown in Figure 3, and a sketch of the corresponding encoder configuration is given below.
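As a reference for how this scheme maps onto encoder settings, a hedged sketch against the libx264 API follows; the field names are taken from a current libx264 header and may differ in the x264-20060612 snapshot actually used, and the width/height arguments are placeholders.
#include <x264.h>

/* Configure libx264 to approximate the surveillance encoding scheme listed above. */
x264_t *open_surveillance_encoder(int width, int height)
{
    x264_param_t param;

    x264_param_default(&param);                    /* start from library defaults */
    param.i_width  = width;                        /* e.g. 320 */
    param.i_height = height;                       /* e.g. 240 */

    param.i_bframe = 0;                            /* (1) baseline-style stream: no B frames ... */
    param.b_cabac  = 0;                            /*     ... and no CABAC (CAVLC only) */
    param.analyse.i_me_range = 16;                 /* (2) motion search range of 16 */
    param.rc.i_rc_method   = X264_RC_CQP;          /* (3) constant quantizer ... */
    param.rc.i_qp_constant = 32;                   /*     ... QP = 32 */
    param.analyse.i_subpel_refine = 1;             /* (4) coarsest sub-pixel refinement; limiting it to
                                                          half-pel only would need a source-level change */
    param.i_frame_reference = 1;                   /* (5) a single reference frame */
    param.analyse.inter = X264_ANALYSE_PSUB16x16;  /* (6) P partitions 16x16/16x8/8x16/8x8 only */
    param.analyse.intra = 0;                       /*     intra macroblocks limited to Intra16x16 */

    return x264_encoder_open(&param);
}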
3) Video network transmission
Considering the advantages of the B/S mode, such as good scalability, easy maintenance and upgrading, and high security, the system adopts the B/S mode. Users only need to enter the server's IP address in the browser's address bar on the remote client to view live video in real time.
The design of the system's network communication program covers both the server and the monitoring end. The monitoring end uses an ordinary IE browser, so the main work lies on the server side: building the Web server [4] (mainly porting and configuring the BOA Web server and creating CGI scripts), implementing the CGI (Common Gateway Interface) programs in C, implementing the embedded database and creating simple web pages. The CGI programs form the interface between the Web server and the application, for example setting parameters of the remote device; the embedded database mSQL stores important system information such as user accounts, passwords and camera parameters. With the B/S mode, the communication program structure of the server and the monitoring end is shown in Figure 4.
Figure 4 Communication program structure diagram between server and monitoring end
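A CGI program of this kind is essentially a C program that reads the request (for example from the QUERY_STRING environment variable set by BOA) and writes an HTTP response to standard output. A minimal sketch for a parameter-setting request is shown below; the "brightness" parameter name and the set_camera_brightness() helper are illustrative assumptions, not the system's actual interface.
#include <stdio.h>
#include <stdlib.h>

/* Illustrative helper: in the real system this would update the camera through the driver. */
static void set_camera_brightness(int value)
{
    (void)value;   /* placeholder */
}

int main(void)
{
    const char *query = getenv("QUERY_STRING");   /* e.g. "brightness=128" passed by the Web server */
    int brightness = 0;

    /* The Web server expects a complete HTTP response on stdout. */
    printf("Content-Type: text/html\r\n\r\n");

    if (query && sscanf(query, "brightness=%d", &brightness) == 1) {
        set_camera_brightness(brightness);
        printf("<html><body>Brightness set to %d</body></html>\n", brightness);
    } else {
        printf("<html><body>Invalid request</body></html>\n");
    }
    return 0;
}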
The embedded WEB server program is as follows.
//Create a TCP socket to connect to the TCP network
if((sock_fd=socket(AF_INET,SOCK_STREAM,0))==-1)
{
perror("sock_fd error");
exit(1);
}
setsockopt(sock_fd,SOL_SOCKET,SO_REUSEADDR,&on,sizeof(on));
//Assign HTTP protocol address to the socket
my_addr.sin_family=AF_INET;
my_addr.sin_port=htons(80);
my_addr.sin_addr.s_addr=htonl(INADDR_ANY);
if(bind(sock_fd,(struct sockaddr*)&my_addr,sizeof(my_addr))==-1)
{
perror("bind errorn");
exit(1);
}
……
if(!fork())
{
recv(new_fd,http_rec,2048,0);//Receive user control command
//Capture image
image=videograb(320,240,brightness,contrast,colour,hue);
...
}
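For context, the elided portion marked "……" would typically contain the listen()/accept() sequence that produces new_fd before the fork(). A hedged sketch of such a serving loop is given below, reusing the fragment's variable names; the backlog value and the cleanup details are assumptions.
listen(sock_fd, 5);                                   /* assumed backlog of pending connections */
while (1) {
    socklen_t sin_size = sizeof(their_addr);
    new_fd = accept(sock_fd, (struct sockaddr *)&their_addr, &sin_size);  /* one socket per browser request */
    if (new_fd == -1)
        continue;
    if (!fork()) {                                    /* child process serves this connection */
        recv(new_fd, http_rec, 2048, 0);              /* receive the user's HTTP request / control command */
        /* ... capture, encode and send the response, as in the fragment above ... */
        close(new_fd);
        exit(0);
    }
    close(new_fd);                                    /* parent closes its copy and keeps accepting */
}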
The system gives full play to the dual-core advantage of OMAP5912 and realizes real-time video acquisition, storage, encoding and network transmission. The video surveillance user interface is shown in Figure 5.
Conclusion
This paper has designed and implemented a remote video surveillance system based on OMAP5912. The system gives full play to the dual-core advantage of OMAP5912 and realizes real-time video acquisition, storage, encoding and network transmission on the server side. The PC at the monitoring end can view the video of the monitoring point through the IE browser and can also set parameters such as video resolution, brightness and contrast. Actual operation shows that the system runs stably and the video is smooth, meeting the requirements of remote video surveillance.