Introduction
Embedded systems are small and portable, a major advantage over PCs, and with the development of computer technology many PC-based applications can now also be implemented on embedded systems. USB cameras are inexpensive, perform well, are plug-and-play, and, because the Video4Linux standard supports programming for them under Linux, are easy to integrate into embedded systems. Embedded video acquisition devices therefore usually use USB cameras.
1 Chip Introduction
The S3C2440 processor uses the ARM920T core with 0.13 um CMOS standard macrocells and memory cells. It supports high-speed and asynchronous bus modes, has a 1 GB addressing space, supports external wait signals to extend the bus cycle, supports SDRAM self-refresh in power-down mode, and can boot from NAND flash memory using a 4 KB internal boot buffer. The cache updates main memory with write-through or write-back operations; the write buffer can store 16 words of data and 4 addresses [1].
The OV511 is a high-performance single-chip camera-to-USB interface controller that greatly simplifies the connection between a single-chip CMOS image sensor and USB. With the addition of 256 KB of DRAM and a USB transceiver, it readily forms a USB-based video subsystem. The OV511 is designed for maximum video throughput, enabling the system to obtain large amounts of video information in near real time [2].
The OV7650 is a highly integrated, high-resolution CMOS image sensor that places all camera and matrix-processing functions on chip. Its image array is 640×480 pixels (300,000 pixels), and it supports four resolutions: VGA, QVGA, CIF, and QCIF, all selectable by programming [3].
2 Camera Hardware
The video acquisition part consists of the OV511 and OV7650. Both are initialized over the SCCB bus: the OV7650 is set to CIF capture with a YUV422 output stream, and the OV511 is configured to accept YUV422 input and output a YUV420 stream. The OV511 provides the control signals required by the OV7650 and receives the corresponding synchronization signals from it.
The output signal is sent over the USB bus, through the built-in USB controller and the external USB transceiver, to the ARM processor, where it is compressed, encoded, and transmitted. The hardware block diagram is shown in Figure 1.
3 USB Camera Driver
A device driver can be seen as the interface between the Linux kernel and external devices. It shields the application from the details of the hardware implementation, allowing the application to operate an external device as if it were an ordinary file: the same standard system calls used for file operations open, close, read, write, and perform I/O control on the hardware device. The main task of the driver is to implement these system-call functions [4].
The normal operation of Linux video capture devices depends on support for the Video4Linux standard. A Video4Linux driver must provide the basic I/O interface functions open, read, and write; implement interrupt handling and memory mapping; implement the I/O-channel control interface function ioctl; and register all of these in a struct video_device. The driver therefore first declares a video_device structure, points its fops field at the file-operation function table, and registers it with the system. When the application issues a file-operation command, the Linux kernel calls the corresponding function through these pointers, passing the structure as a parameter, which realizes the communication between the driver and the kernel.
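As a minimal sketch of this registration pattern, assuming a 2.4-series kernel in which struct video_device carries a fops pointer, the driver skeleton might look as follows; the camera_* handlers are hypothetical placeholders implemented elsewhere in the driver:

/* Minimal registration sketch for a V4L (kernel 2.4) camera driver. */
#include <linux/module.h>
#include <linux/videodev.h>

/* hypothetical handlers, implemented elsewhere in the driver */
extern int camera_open(struct inode *inode, struct file *file);
extern int camera_close(struct inode *inode, struct file *file);
extern ssize_t camera_read(struct file *file, char *buf, size_t count, loff_t *ppos);
extern int camera_mmap(struct file *file, struct vm_area_struct *vma);
extern int camera_ioctl(struct inode *inode, struct file *file,
                        unsigned int cmd, unsigned long arg);

static struct file_operations camera_fops = {
    owner:   THIS_MODULE,
    open:    camera_open,     /* called on open("/dev/video0", ...) */
    release: camera_close,
    read:    camera_read,
    mmap:    camera_mmap,
    ioctl:   camera_ioctl,    /* VIDIOCGCAP, VIDIOCMCAPTURE, ... */
};

static struct video_device camera_vdev = {
    name:     "OV511 USB Camera",
    type:     VID_TYPE_CAPTURE,
    hardware: VID_HARDWARE_OV511,
    fops:     &camera_fops,
};

static int __init camera_init(void)
{
    /* registers a /dev/video* node under major 81; -1 = first free minor */
    return video_register_device(&camera_vdev, VFL_TYPE_GRABBER, -1);
}

static void __exit camera_exit(void)
{
    video_unregister_device(&camera_vdev);
}

module_init(camera_init);
module_exit(camera_exit);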
The Linux kernel operates on device files by device number. The device file corresponding to the camera is /dev/video, with major number 81; the minor number depends on the number of cameras. This system uses only one camera, so the minor number is 0, and if the node does not already exist it can be created with mknod /dev/video0 c 81 0. The driver principle is shown in Figure 2.
The Linux system implements USB transfers through URBs (USB Request Blocks). To increase the effective data rate, the URB buffer can be enlarged, reducing the proportion of handshake overhead in each USB transaction. Each USB transfer requires the operating system to create, submit, recover, and sort the data of a URB. Two URBs can therefore be established: while one URB is waiting to be recovered, that is, while the sensor is still filling it with image data, the other URB is processed and re-initialized, and it is submitted again immediately after recovery. Using the two URBs alternately greatly improves the time efficiency of system processing.
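A minimal sketch of this ping-pong scheme follows, assuming the 2.4-series USB API (usb_submit_urb() without GFP flags); the per-device struct camera and the process_urb_data() helper are hypothetical:

/* Alternating double-URB scheme: while one URB is in flight, the other
 * has already been recovered, re-initialized, and resubmitted. */
#include <linux/usb.h>

struct camera {
    struct usb_device *udev;
    struct urb *urb[2];                 /* two URBs used in rotation */
};

/* hypothetical helper: copy and sort the payload of a completed URB */
extern void process_urb_data(struct camera *cam, struct urb *urb);

static void camera_isoc_complete(struct urb *urb)
{
    struct camera *cam = urb->context;

    process_urb_data(cam, urb);         /* handle the recovered URB's data */

    /* re-initialize the recovered URB and reissue it immediately; the
     * other URB is still being filled as the sensor captures the image */
    urb->dev    = cam->udev;
    urb->status = 0;
    usb_submit_urb(urb);
}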
In the build step, change the compiler line in the camera driver's Makefile under Linux to CC=/opt/host/armv4l/bin/armv4l-unknown-linux-gcc, and modify the Config.in file so that the driver name appears when configuring the kernel. The following processor-related parts must also be modified to port the USB driver to the S3C2440.
(1) PCI interface processing
Since the USB host controller of the S3C2440 does not include a PCI interface, the PCI-related code in usb-ohci.c must be removed.
(2) Register address setting
In usb-ohci.c, initialize ohci->regs with the starting address (0x49000000) of the S3C2440 USB host controller registers (see the sketch after this list).
(3) Host controller interrupt setting
In usb-ohci.c, initialize ohci->irq with the interrupt number of the S3C2440 USB host controller.
(4) Setting the number of root HUB ports
In usb-ohci.c, define the number of downstream ports of the root hub as 2 (#define MAX_ROOT_PORTS 2); the default value of MAX_ROOT_PORTS is 15.
(5) Modify Makefile and Config.in files
After these modifications are completed, run the make command to generate the required driver file with the .o suffix.
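A hedged sketch of changes (2) through (4) is given below; S3C2440_UHC_BASE and IRQ_USBH are assumed names for the fixed register base and interrupt number, and struct s3c_ohci stands in for the driver's own ohci structure:

/* S3C2440-specific initialization sketch for usb-ohci.c. */
#include <asm/io.h>                   /* ioremap() */

#define MAX_ROOT_PORTS   2            /* S3C2440 root hub: 2 downstream ports */
#define S3C2440_UHC_BASE 0x49000000UL /* USB host controller register base */
#define IRQ_USBH         26           /* assumed S3C2440 USB host IRQ number */

struct s3c_ohci {                     /* stand-in for the driver's ohci type */
    void *regs;                       /* mapped OHCI operational registers */
    int   irq;                        /* host controller interrupt */
};

static int s3c2440_ohci_setup(struct s3c_ohci *ohci)
{
    /* no PCI probing on the S3C2440: map the fixed register window and
     * record the fixed interrupt number instead */
    ohci->regs = ioremap(S3C2440_UHC_BASE, 4096);
    if (!ohci->regs)
        return -1;
    ohci->irq = IRQ_USBH;
    return 0;
}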
After the driver is designed and compiles successfully, it is added to the kernel by dynamic loading: cross-compile the driver module on the host machine, download it to the development board over the serial port, and mount it with the insmod command. The currently loaded drivers can be viewed with the lsmod command.
4 Video Capture
The system software is developed on top of V4L (Video4Linux); the basic process is shown in Figure 3. The most critical step is the acquisition of video data, which is generally implemented in one of two ways: direct reading or memory mapping.
1) Define the data structure
Several data structures need to be defined in the program: video_capability, which contains basic information about the camera; video_picture, which contains the properties of the device's image acquisition; video_mmap, which is used for memory mapping; video_mbuf, which holds the frame information of the camera's memory buffer as mapped by mmap; and video_window, which contains the parameters of the device's capture window.
In Linux, devices are treated as device files. In user space, device files can be operated on through the standard I/O system calls to communicate and interact with the device; the ioctl function is used to control I/O channels.
2) Collection program implementation process
1. Open the video device
In Linux, the device file corresponding to the video device is /dev/video0, and the open function is used to open the video device.
2. Get device information and video information and set them up
After opening the device file, the device and image information are obtained by calling the camera_get_capability() and camera_get_picture() functions. Both obtain their information through the ioctl() function, placing the results into the video_capability and video_picture structures respectively. To change the image settings, first reassign the desired fields of the video_picture structure and then write it back with ioctl VIDIOCSPICT; the current properties of the captured image are read with ioctl VIDIOCGPICT.
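A hedged user-space sketch of these query functions and the corresponding set operation, using the V4L1 ioctls named above (the descriptor fd comes from opening /dev/video0):

/* Query and set device/image information through V4L1 ioctls. */
#include <sys/ioctl.h>
#include <linux/videodev.h>

static struct video_capability camera_cap;  /* basic device information */
static struct video_picture    camera_pic;  /* image acquisition properties */

int camera_get_capability(int fd)
{
    return ioctl(fd, VIDIOCGCAP, &camera_cap);
}

int camera_get_picture(int fd)
{
    return ioctl(fd, VIDIOCGPICT, &camera_pic);
}

/* reassign the desired fields, then write them back with VIDIOCSPICT */
int camera_set_brightness(int fd, int value)
{
    camera_pic.brightness = value;
    return ioctl(fd, VIDIOCSPICT, &camera_pic);
}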
3. Set the window height and width
The encoder input is a YUV420 stream in CIF format, so the capture window height is set to 288 and the width to 352.
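As a short hedged sketch, the window can be set through the V4L1 VIDIOCSWIN ioctl (fd as above):

/* Set the capture window to CIF (352x288) for the encoder input. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev.h>

int camera_set_window(int fd)
{
    struct video_window win;

    memset(&win, 0, sizeof(win));
    win.width  = 352;                 /* CIF width */
    win.height = 288;                 /* CIF height */
    return ioctl(fd, VIDIOCSWIN, &win);
}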
4. Get video frames
Use mmap() (memory mapping) to capture video. The mmap() system call enables processes to share memory by mapping the same ordinary file. [5]
The main parts are introduced as follows:
a. Initialization and setup
Use ioctl(camera_fd, VIDIOCGMBUF, &camera_mbuf) to initialize video_mbuf and obtain the frame information of the camera's storage buffer, then set up video_mmap and the frame status.
b. Implement the mapping of camera device files to memory areas
Call mmap(), whose prototype is void *mmap(void *addr, size_t len, int prot, int flags, int fd, off_t offset), to map the contents of the device file into a memory area; the returned pointer buf addresses the mapping.
c. Data collection
Call ioctl(fd, VIDIOCMCAPTURE, &camera_buf) to capture an image; it returns -1 on failure. On success it starts capturing one frame of image data, and the current frame number is incremented by 1 modulo the total number of frames in the buffer to prepare for the next capture. Then call ioctl(fd, VIDIOCSYNC, &frame); a successful return means the frame has been fully captured and the next acquisition can begin. The image capture function v4l_frame_grab() is a concrete implementation of capturing video data with mmap memory mapping (a sketch follows); each call grabs one frame of raw image data in YUV420P format. With double-buffered rotating acquisition, frames are captured continuously into each buffer in turn, the number of captures into the camera frame buffer being controlled by an outer loop, which improves efficiency [6].
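The following is a hedged sketch of a v4l_frame_grab()-style mmap capture of one YUV420P frame; error handling is trimmed for brevity:

/* Grab one YUV420P CIF frame via mmap and the V4L1 capture ioctls. */
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);
    struct video_mbuf  mbuf;
    struct video_mmap  vmap;
    unsigned char     *data;
    int frame = 0;

    ioctl(fd, VIDIOCGMBUF, &mbuf);                 /* buffer/frame info */
    data = mmap(0, mbuf.size, PROT_READ | PROT_WRITE,
                MAP_SHARED, fd, 0);                /* map the driver buffer */

    vmap.width  = 352;                             /* CIF geometry */
    vmap.height = 288;
    vmap.format = VIDEO_PALETTE_YUV420P;           /* raw YUV420P frames */
    vmap.frame  = frame;

    ioctl(fd, VIDIOCMCAPTURE, &vmap);              /* start capturing frame 0 */
    ioctl(fd, VIDIOCSYNC, &frame);                 /* wait until it completes */
    /* the frame data now starts at data + mbuf.offsets[frame] */

    frame = (frame + 1) % mbuf.frames;             /* advance for next grab */

    munmap(data, mbuf.size);
    close(fd);
    return 0;
}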
On this basis, continuous frame acquisition can also be realized; Video4Linux supports up to 32 frames at a time. First set the number of frames to acquire in camera_buf.frame, and locate the start of each frame in memory as data + camera_mbuf.offsets[frame]; the camera_mbuf information itself is obtained with ioctl(fd, VIDIOCGMBUF, &camera_mbuf). The size of the data buffer must also be set. Then use the ioctl VIDIOCMCAPTURE operation to acquire data continuously until the remaining buffer space cannot hold a complete frame. When no space is available, the application calls ioctl VIDIOCSYNC to check whether acquisition of a frame has completed; if so, the frame's address is passed on so that the data in the buffer can be safely used by other processing (a sketch of this loop follows).
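A hedged sketch of the continuous round-robin capture, reusing fd, data, mbuf, and vmap from the single-frame example above; consume_frame() is a hypothetical consumer:

/* Continuous capture: pre-queue every buffer reported by VIDIOCGMBUF
 * (V4L1 allows at most 32), then sync, consume, and requeue in turn. */
int frame;

for (frame = 0; frame < mbuf.frames; frame++) {
    vmap.frame = frame;
    ioctl(fd, VIDIOCMCAPTURE, &vmap);          /* queue this buffer */
}
frame = 0;
for (;;) {
    ioctl(fd, VIDIOCSYNC, &frame);             /* wait for this buffer */
    consume_frame(data + mbuf.offsets[frame]); /* hypothetical consumer */
    vmap.frame = frame;
    ioctl(fd, VIDIOCMCAPTURE, &vmap);          /* requeue it immediately */
    frame = (frame + 1) % mbuf.frames;         /* round-robin */
}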
5. Close the video device
After acquisition is completed, the device must be closed and system resources reclaimed. If the memory-mapping method was used for video acquisition, the munmap function must be called to release the mapped memory once the task is finished; the close function then closes the video device file.
5 Multithreaded Design of Video Capture System
In the acquisition and processing module, two threads, image acquisition and image processing, are created, and two buffers are opened and used alternately in order to synchronize the video acquisition module with the encoding module. After the acquisition thread fills buffer 1, it changes the wait condition, releasing the blocked image-processing thread to encode and output the buffered data, and moves on to buffer 2. If the processing thread has already finished with buffer 2, the newly acquired frame overwrites it; otherwise the acquisition thread blocks. The two buffers are used in turn without dropping any frames, so acquisition and processing proceed in parallel, improving efficiency.
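A hedged sketch of this two-thread, two-buffer synchronization using POSIX threads follows; grab_frame() and encode_frame() are hypothetical stand-ins for the capture and encoding code:

/* Double-buffered producer/consumer: the capture thread fills the two
 * buffers alternately; the processing thread encodes them in the same
 * order. Neither side overwrites or re-reads a buffer out of turn. */
#include <pthread.h>

#define FRAME_SIZE (352 * 288 * 3 / 2)   /* one CIF YUV420 frame */

extern void grab_frame(unsigned char *buf);    /* hypothetical capture */
extern void encode_frame(unsigned char *buf);  /* hypothetical encode+send */

static unsigned char buffers[2][FRAME_SIZE];
static int full[2];                      /* 1 = holds an unencoded frame */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

static void *capture_thread(void *arg)
{
    int i = 0;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (full[i])                  /* block until buffer i is drained */
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);

        grab_frame(buffers[i]);          /* fill buffer i with a new frame */

        pthread_mutex_lock(&lock);
        full[i] = 1;
        pthread_cond_signal(&cond);      /* release the processing thread */
        pthread_mutex_unlock(&lock);
        i ^= 1;                          /* move to the other buffer */
    }
    return 0;
}

static void *process_thread(void *arg)
{
    int i = 0;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (!full[i])                 /* wait for a filled buffer */
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);

        encode_frame(buffers[i]);        /* encode and output buffer i */

        pthread_mutex_lock(&lock);
        full[i] = 0;
        pthread_cond_signal(&cond);      /* let capture reuse the buffer */
        pthread_mutex_unlock(&lock);
        i ^= 1;
    }
    return 0;
}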
6 Conclusion
This paper has analyzed video acquisition technology for a remote monitoring system and presented experimental results. The S3C2440's USB host controller is compatible with the USB 1.1 standard and supports low-speed (1.5 Mbps) and full-speed (12 Mbps) devices. Experiments show that the video acquisition program is most efficient for CIF and QVGA image capture, with acquisition rates of 9 fps and 12 fps respectively, close to the limit of full-speed mode. The acquisition efficiency for QCIF is lower, far from the theoretical value of USB 1.1 full-speed transfer, which is related to the hardware characteristics of the camera (including the image sensor and the DSP bridge chip's handling of the image format) and to the driver implementation. Judged by frame rate alone, however, the CIF rate of 9 fps and the QCIF rate of 24 fps already meet the requirements of typical embedded real-time applications.