Design of camera driver based on Video4Linux

Publisher: BlissfulAura | Last updated: 2024-07-19 | Source: eepw

0 Introduction
With the rapid development of multimedia and network technology and the arrival of the post-PC era, it has become practical to use embedded systems for applications such as remote video monitoring, videophones, and video conferencing. Real-time video data acquisition is a key link in all of these applications. Accordingly, this paper implements a camera driver on an embedded platform using Video4Linux (V4L), and describes in detail both the V4L framework and the design of the camera's Linux driver on the Blackfin536 DSP platform.


1 Video4Linux
V4L is the foundation of Linux video-streaming and embedded video systems. Applying Linux to television and multimedia is an active research area, and the key enabling technology is V4L: a set of APIs in the Linux kernel that support video devices. With a suitable camera and camera driver, V4L enables video capture, AM/FM radio reception, video encoding/decoding, channel switching, and more. Its most important uses today are in video-streaming and embedded video systems, and its range of applications is wide: distance-learning systems, remote-diagnosis systems, video conferencing, and so on.


The V4L interface has since evolved into V4L2. V4L is simpler than V4L2, but it has two shortcomings: a V4L driver cannot open multiple devices at the same time, and the V4L API does not support devices with hardware encoding capability well. To keep the study of camera driver development manageable, this article nevertheless uses V4L.
The main structure of V4L is as follows:
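The original listing was lost in extraction. As a stand-in, here is a minimal sketch of the kind of operations table such a V4L driver registers; the function names follow the article's description, while the types are illustrative placeholders for the kernel types (`struct file`, `struct video_device`, ...) a real driver would use:

```c
#include <assert.h>
#include <stddef.h>

/* Placeholder for the kernel's struct file; illustrative only. */
struct file;
typedef long (*ioctl_fn)(struct file *, unsigned int cmd, unsigned long arg);

/* Operations table in the spirit of the article's description:
 * open/close the device, read a frame, control everything else via ioctl. */
struct camera_fops {
    int  (*open)(struct file *);
    int  (*close)(struct file *);
    long (*read)(struct file *, char *buf, unsigned long count);
    ioctl_fn ioctl;
};

static int  camera_open(struct file *f)  { (void)f; return 0; }
static int  camera_close(struct file *f) { (void)f; return 0; }
static long camera_read(struct file *f, char *buf, unsigned long count)
{ (void)f; (void)buf; return (long)count; }   /* would copy one frame */
static long camera_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
{ (void)f; (void)cmd; (void)arg; return 0; }  /* dispatch on cmd */

static const struct camera_fops bf536_camera_fops = {
    .open  = camera_open,
    .close = camera_close,
    .read  = camera_read,
    .ioctl = camera_ioctl,
};
```

In the real driver this table would be wired into the V4L `video_device` registration rather than a free-standing struct.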

camera_open and camera_close open and close the video capture device; camera_read reads a video image; and the driver's main control interface is implemented through ioctl: image format, brightness, chroma, and similar attributes are queried and set via ioctl calls. Part of the ioctl command handling is as follows:
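The ioctl listing did not survive extraction. The sketch below shows the general shape of such a dispatcher for the picture-attribute commands the text mentions. The command names (VIDIOCGPICT, VIDIOCSPICT, ...) are real V4L1 ioctls, but their numeric values here are placeholders — in a driver they come from `<linux/videodev.h>` — and the struct mirrors only the spirit of V4L1's `struct video_picture`:

```c
#include <assert.h>
#include <string.h>

/* Illustrative command codes; real values come from <linux/videodev.h>. */
enum { VIDIOCGPICT = 1, VIDIOCSPICT, VIDIOCGMBUF, VIDIOCMCAPTURE, VIDIOCSYNC };

/* Simplified image-attribute block, after V4L1's struct video_picture. */
struct video_picture {
    unsigned short brightness, hue, colour, contrast;
    unsigned short depth;   /* bits per pixel */
    unsigned short palette; /* pixel format   */
};

static struct video_picture cam_pict = { 32768, 32768, 32768, 32768, 16, 0 };

/* Dispatch on the command code: get or set picture attributes. */
static int camera_do_ioctl(unsigned int cmd, void *arg)
{
    switch (cmd) {
    case VIDIOCGPICT:                      /* copy attributes out   */
        memcpy(arg, &cam_pict, sizeof cam_pict);
        return 0;
    case VIDIOCSPICT:                      /* copy attributes in    */
        memcpy(&cam_pict, arg, sizeof cam_pict);
        return 0;
    default:
        return -1;                         /* unsupported command   */
    }
}
```

A full driver would also handle the capture commands (VIDIOCGMBUF, VIDIOCMCAPTURE, VIDIOCSYNC) discussed below.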

V4L supports two ways to capture images: mmap (memory mapping) and read (direct reading). This system uses mmap. The image size and depth are set in advance; the VIDIOCGMBUF command then returns the size of the buffer to be mapped and the offset of each frame within it. The mapping function in the driver is static int bf536_v4l1_mmap(struct file *filp, struct vm_area_struct *vma); after a frame is captured, the application reads the image data through the memory mapping.
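The structure VIDIOCGMBUF returns — total mapping size plus per-frame offsets — drives the application's frame addressing. A small sketch, assuming a two-buffer layout with 720×576 UYVY frames (2 bytes per pixel); the struct mirrors V4L1's `struct video_mbuf`:

```c
#include <assert.h>

#define VIDEO_MAX_FRAME 32                 /* as in V4L1             */
#define FRAME_SIZE (720 * 576 * 2)         /* UYVY: 2 bytes/pixel    */

/* Mirrors V4L1's struct video_mbuf, filled in for VIDIOCGMBUF. */
struct video_mbuf {
    int size;                    /* total bytes to mmap            */
    int frames;                  /* number of frame buffers        */
    int offsets[VIDEO_MAX_FRAME];/* offset of each frame in mapping*/
};

/* What the driver would report for nframes contiguous frame buffers. */
static void fill_mbuf(struct video_mbuf *m, int nframes)
{
    m->frames = nframes;
    m->size   = nframes * FRAME_SIZE;
    for (int i = 0; i < nframes; i++)
        m->offsets[i] = i * FRAME_SIZE;
}

/* After mmap()ing m->size bytes at base, frame i lives here: */
static unsigned char *frame_addr(unsigned char *base,
                                 const struct video_mbuf *m, int i)
{
    return base + m->offsets[i];
}
```

On the application side, the usual sequence is VIDIOCGMBUF, then one `mmap()` of `m.size` bytes, then VIDIOCMCAPTURE/VIDIOCSYNC per frame.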


2 Hardware Platform
This system uses a CMOS analog sensor whose analog signal is converted into an ITU-R BT.656 video stream by TI's TVP5150A video decoder chip and then sent to ADI's Blackfin536 DSP processor for image processing, as shown in Figure 1. The TVP5150A provides the 27 MHz sampling clock CLK to the DSP, and eight data lines carry ITU-R BT.656 data with embedded synchronization codes. The DSP configures the video decoder over the I2C interface, and the video data is moved into SDRAM by DMA.
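The decoder bring-up over I2C can be sketched as a short sequence of register writes. Note the register addresses and values below are placeholders for illustration, not taken from the TVP5150A datasheet, and the write function stands in for the kernel I2C API (e.g. `i2c_smbus_write_byte_data`):

```c
#include <assert.h>

#define TVP5150_I2C_ADDR 0x5D  /* typical 7-bit address; check the datasheet */

/* Logged transfers, so the sketch is checkable without hardware. */
struct xfer { unsigned char reg, val; };
static struct xfer xfers[8];
static int nxfers;

/* Stand-in for an I2C byte write to the decoder: records the transfer. */
static int tvp5150_write(unsigned char reg, unsigned char val)
{
    xfers[nxfers].reg = reg;
    xfers[nxfers].val = val;
    nxfers++;
    return 0;
}

/* Bring-up sequence sketch: select the input, enable the BT.656 output.
 * Register numbers/values are hypothetical placeholders. */
static int tvp5150_init(void)
{
    int err = 0;
    err |= tvp5150_write(0x00, 0x00);  /* input select (placeholder)   */
    err |= tvp5150_write(0x03, 0x09);  /* output enable (placeholder)  */
    return err;
}
```

In the real driver these writes would go through the BF536's TWI/I2C controller to address TVP5150_I2C_ADDR.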

3 Driver Design
3.1 Introduction to the Driver Core Structure
The software platform of this system is embedded uClinux, so the camera is loaded into the uClinux kernel as a device driver. Generally, each device driver has a core structure; the core structure of this camera driver, struct camera_device, is designed as follows:

This structure holds almost all information related to the camera's video image. The members videoDev and videoV4l1 tie into V4L, ppiDev relates to the hardware configuration of the ADSP-BF537 processor, and frame[CAMERA_NUMFRAMES] holds the video frame data during acquisition. In addition, struct camera_device defines a frame_field member representing the odd or even field, a grabbing member representing the capture status of the current frame, and so on.
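Since the original struct listing was lost in extraction, the following sketch reconstructs its likely shape from the members the text names. The nested types are illustrative placeholders for the real V4L and PPI structures:

```c
#include <assert.h>

#define CAMERA_NUMFRAMES 2   /* assumed buffer count; see Section 4 */

/* Placeholders for the kernel/V4L structures the text refers to. */
struct video_device     { int minor; };
struct video_capability { int channels; };
struct ppi_device       { int control; };

/* Per-frame state during acquisition. */
struct camera_frame {
    unsigned char *data;  /* frame buffer in SDRAM  */
    int grabbed;          /* capture complete?      */
};

/* Core structure, following the members the article describes. */
struct camera_device {
    struct video_device     videoDev;   /* V4L registration       */
    struct video_capability videoV4l1;  /* V4L capability info    */
    struct ppi_device       ppiDev;     /* BF537 PPI hardware     */
    struct camera_frame     frame[CAMERA_NUMFRAMES];
    int frame_field;  /* odd/even field currently being received  */
    int grabbing;     /* capture status of the current frame      */
};
```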
3.2 Hardware Configuration
The main difficulty in developing this system lies in the hardware configuration. Correctly configuring the TVP5150, PPI, and DMA requires understanding how the whole camera path works as well as the basics of the various image formats. This article presents several key configuration options for the PPI and DMA.
3.2.1 PPI Configuration
The TVP5150A video decoder chip converts the analog signal into an ITU-R BT.656 video stream. ITU-R BT.656 is a digital studio standard defining a 4:2:2 parallel interface. For the PAL system (NTSC is similar), one frame comprises two fields (odd and even), and each field consists of four parts: active video data, horizontal blanking, vertical blanking, and control words. In ITU-656 input mode the PPI supports three transfer types; selecting active-video-only transfer gives a seamless connection between the PPI and the TVP5150A decoder. The active video is in UYVY 4:2:2 format, so the PPI control register is configured for ITU-656 input with active-video transfer. Each field of active video carries 288 lines, and each line carries 1,440 sample words: 720 luma (Y) samples, 360 blue color-difference (Cb) samples, and 360 red color-difference (Cr) samples, arranged in the order Cb, Y, Cr, Y. In UYVY 4:2:2, every pixel carries a luma value while Cb and Cr are each sampled once per two pixels, alternating between the two; the image is therefore 720×576 pixels. Accordingly, ppi_frame = 576 (the number of lines in the whole image), and ppi_count need not be configured because the ITU-R BT.656 stream carries embedded H and V synchronization.
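The geometry behind these register values can be made explicit. A small arithmetic sketch of the BT.656/PAL active-video dimensions the paragraph derives:

```c
#include <assert.h>

/* BT.656/PAL active-video geometry used to derive the PPI settings. */
enum {
    ACTIVE_PIXELS    = 720,  /* luma samples per active line      */
    LINES_PER_FIELD  = 288,  /* active lines in one PAL field     */
    FIELDS_PER_FRAME = 2,    /* odd + even field                  */
};

/* Sample words per active line: 720 Y + 360 Cb + 360 Cr = 1440,
 * since Cb and Cr are each sampled once per two pixels (4:2:2). */
static int samples_per_line(void)
{
    return ACTIVE_PIXELS + ACTIVE_PIXELS / 2 + ACTIVE_PIXELS / 2;
}

/* ppi_frame counts the lines in the whole (two-field) image. */
static int ppi_frame_lines(void)
{
    return LINES_PER_FIELD * FIELDS_PER_FRAME;
}
```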
3.2.2 DMA configuration
This system uses two-dimensional DMA to increase the data transfer rate, with 16-bit transfers and an interrupt generated after each field of video data has been moved. dma_x_count = 720: each line holds 720 pixels at 2 bytes per pixel, i.e. 720×2 bytes per line, and since the DMA performs 16-bit transfers, x_count = 720. dma_x_modify = 2: the address offset between two consecutive transfers, in bytes, which is 2 for 16-bit transfers.
Since PAL video is interlaced, each frame is split into odd and even fields. The two fields are separate in time but must be recombined into one frame for processing, so to reduce CPU time the DMA itself can perform the field merge. Within one field, after the DMA transfers one line it skips a line of storage and writes the next line two rows down; the following field then fills the rows left empty by the first, also interleaved, so the two fields combine into one frame. Hence dma_y_modify = 1442: skipping one stored line of 720×2 = 1,440 bytes plus the normal 2-byte element step gives 1,440 + 2 = 1,442 bytes. Correspondingly, the start addresses of the two fields' storage are offset from each other by one line, i.e. 1,440 bytes.
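The interleaving effect of this y_modify choice can be verified with a scaled-down simulation of the DMA write pattern (an 8-pixel-wide, 4-line frame standing in for 720×576; the address arithmetic is the same):

```c
#include <assert.h>
#include <string.h>

enum { W = 8, H = 4 };                 /* scaled-down frame geometry */
#define LINE_BYTES (W * 2)             /* 2 bytes per UYVY pixel     */
#define Y_MODIFY   (LINE_BYTES + 2)    /* skip one line + element step */

/* Simulate the 2D DMA write pattern: within a line the address
 * advances by x_modify (2); after the line's last 16-bit element it
 * advances by y_modify instead, landing two stored lines down. */
static void dma_field(unsigned char *base, const unsigned char *field,
                      int lines)
{
    unsigned char *dst = base;
    const unsigned char *src = field;
    for (int line = 0; line < lines; line++) {
        for (int x = 0; x < W; x++) {
            dst[0] = src[0];           /* one 16-bit element         */
            dst[1] = src[1];
            src += 2;
            dst += (x == W - 1) ? Y_MODIFY : 2;
        }
    }
}

/* Merge two fields into one frame: the second field's start address
 * is offset by one line, filling the rows the first field skipped. */
static void deinterlace_frame(unsigned char *frame,
                              const unsigned char *even_field,
                              const unsigned char *odd_field)
{
    dma_field(frame, even_field, H / 2);              /* rows 0, 2, ... */
    dma_field(frame + LINE_BYTES, odd_field, H / 2);  /* rows 1, 3, ... */
}
```

Running this with distinct fill values in each field shows the frame rows alternating even/odd, which is exactly the field merge the hardware DMA performs for free.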
3.3 Interrupt service subroutine
The interrupt service routine of this system runs after each field of data has been captured and handles the data according to whether it is an odd or even field. Its flow is shown in Figure 2.
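The figure is not reproduced here, but the per-field logic it describes can be sketched as a small state machine: an even-field completion leaves the frame half done, and the following odd-field completion marks the full frame ready. Names are illustrative, not the driver's actual identifiers:

```c
#include <assert.h>

enum field { EVEN_FIELD = 0, ODD_FIELD = 1 };

struct capture_state {
    enum field next_field;  /* field the hardware delivers next     */
    int frame_ready;        /* set once both fields are in SDRAM    */
};

/* Called from the DMA completion interrupt after each field. */
static void field_done_isr(struct capture_state *s)
{
    if (s->next_field == EVEN_FIELD) {
        s->next_field = ODD_FIELD;   /* first half stored; wait     */
    } else {
        s->next_field = EVEN_FIELD;  /* both fields interleaved in  */
        s->frame_ready = 1;          /* frame complete: wake reader */
    }
}
```

In the driver, setting frame_ready would correspond to marking the current frame[] entry grabbed and waking any process blocked in camera_read or VIDIOCSYNC.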

4 Conclusion
This paper has introduced the architecture and implementation of a camera driver on the Blackfin DSP and Linux platform. The driver has been exercised by a test program and works correctly. One remaining shortcoming is that the driver's frame-capture path does not use a ping-pong scheme; instead, two frame buffers are accessed directly, so ping-pong buffering must be completed in the upper-level application.


