Infrared Video Network Transmission System Based on SOPC

Publisher: gamma13 | Last updated: 2015-03-14 | Keywords: SOPC
Summary

About 70% of the information humans receive comes from vision, so image acquisition and processing equipment plays an important role in daily life. With the popularization of the Internet and growing bandwidth, network transmission of images has become practical, and its range of applications keeps widening, bringing great convenience to everyday life. Meanwhile, with the development of uncooled infrared technology, infrared thermal imaging systems have come into wide use in both military and civilian fields. According to the 2006 Infrared Market Report released by Maxtech International, an authoritative research organization on the US infrared market, global civilian infrared thermal imager sales grew at an average rate of 17% from 2003 to 2006, and market demand continues to broaden.

Clearly, an easy-to-use infrared video network transmission system has good application prospects, being well suited to forest fire prevention, surveillance systems, power equipment, aerospace, petrochemicals, construction, metallurgy, transportation, and border and coastal defense.

The embedded infrared network video transmission system consists of hardware and software. The hardware comprises a video acquisition module, a video codec module, an image processing module, a data compression module, a network transmission module, and so on; the software is designed around an embedded operating system.

Infrared network video transmission has good application prospects and commercial value. The core of this project lies in infrared processing: although infrared processing algorithms are relatively mature, they still have many drawbacks, so this project aims to propose improvements over the existing algorithms.

This project plans to use Virtex-II or Spartan-3E chips and to integrate the infrared processing algorithm modules into the FPGA, including filtering, non-uniformity correction, grayscale stretching, pseudo-color enhancement, and other stages. Each algorithm module is designed on the ISE development platform, and the user IP is integrated into the hardware system through XPS. The processed image or video is transmitted to a PC client over Ethernet using the system's built-in MAC and IP addresses.

Project Information

1. Project Name: Infrared Video Network Transmission System Based on SOPC

2. Application areas: forest fire prevention, monitoring systems, power equipment, aerospace, petrochemicals, construction, metallurgy, transportation, border and coastal defense

3. System platform overview and resource analysis:

The hardware system diagram is as follows:

 

Figure 1 Infrared video network transmission system based on SOPC

This framework adopts an FPGA+ARM solution and divides into three parts: a front-end image acquisition module (composed of a CPLD, A/D converter, CCD, etc.), an FPGA module (using a Spartan-3A DSP XA3SD1800A), and an ARM module. The image acquisition module captures infrared images and sends the gray-level data to the FPGA module for processing; the processed data is then sent to the ARM module for image control and display. In the preliminary design stage, the ARM module can be omitted and the display driven directly by the FPGA (the ARM module interface is not refined in this framework). The overall framework is shown in Figure 1, in which the FPGA's internal image processing algorithm modules are detailed; the data processing flow is analyzed below.

The data flow in the FPGA module is as follows. The MicroBlaze soft-core CPU interacts with the front-end acquisition module (CPLD, A/D, CCD) and collects data into DDR through dual-port RAM0 (used as a cache). MicroBlaze loads the acquired image over the PLB bus and sends it to the algorithm processing module.

The algorithm processing module first performs boundary extension (this step can also be omitted). Boundary extension mirrors the border data of a frame; that is, it simply stores copies of the boundary pixels and requires no arithmetic. After extension, the data is cached in RAM0 and mean filtering can be performed: denoising each pixel with the mean filter requires 8 additions and 1 multiplication, and the 384×288 pixels can be processed in parallel. The filtered data is sent to RAM1 for the next step, non-uniformity correction.

For non-uniformity correction, the correction gain and correction offset are generated by high- and low-temperature calibration before measurement. The two correction factors can be computed in parallel and are stored in RAM1 so that they can be loaded directly during correction. In the correction itself, the 384×288 pixels can again be processed in parallel, each requiring 1 multiplication and 1 addition. The corrected data is sent to RAM2 for the next steps, temperature calibration and grayscale stretching, which can run in parallel. Since both require the entire image frame, the capacity of RAM2 can be reduced by storing the frame in DDR and reading it out when needed.
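As a software reference for the two stages just described — a minimal sketch for clarity, not the parallel FPGA implementation — the 3×3 mean filter and the per-pixel non-uniformity correction can be written as:

```python
# Software reference model of two pipeline stages. On the FPGA, the
# per-pixel operations run in parallel; here they run sequentially.

def mean_filter_3x3(img):
    """3x3 mean filter: per pixel, 8 additions plus one multiply (by 1/9).
    Assumes the frame was already boundary-extended by mirroring, so the
    output is 2 rows and 2 columns smaller than the input."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            s = sum(img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            row.append(s * (1.0 / 9.0))
        out.append(row)
    return out

def nuc_correct(img, gain, offset):
    """Non-uniformity correction: 1 multiplication and 1 addition per
    pixel, using per-pixel gain/offset from high/low-temperature
    calibration (see the calibration step in the text)."""
    return [[gain[y][x] * img[y][x] + offset[y][x]
             for x in range(len(img[0]))] for y in range(len(img))]
```

This makes the stated operation counts concrete: the filter's 8 additions and 1 multiplication per pixel, and the correction's single multiply-add per pixel.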
For grayscale stretching, a histogram is first computed: the number of pixels at each gray value is counted to find the effective gray range of the image, i.e., the minimum min and the maximum max. Substituting these into the slope formulas of the three segments yields the stretch slopes, after which stretching can be performed; each pixel requires 2 comparisons, 1 subtraction, 1 multiplication, and 1 addition, and the 384×288 pixels can be processed in parallel. The stretched data is sent to RAM3 for pseudo-color processing, which converts the gray value of each pixel into corresponding R, G, and B components; the 384×288 pixels can be processed in parallel, and the R, G, and B conversions of each pixel can also be computed in parallel. The converted data is written to dual-port RAM1, then read out and stored on the SD card for display control; during early development of the FPGA module it can also be sent directly to the LCD.

During image processing, since the FPGA supports massive parallelism, we can exploit not only parallel processing of pixels within each algorithm module but also parallelism between modules: for example, while one frame is being filtered, the previous frame can be corrected at the same time.

Each algorithm module is implemented as follows:

 

Figure 2 Correction coefficient

Figure 3 Correction offset

Figure 4 Non-uniformity correction

Figure 5 Grayscale stretching

Figure 6 Mean filtering

4. Innovations and key technologies:

Accurate temperature calibration technology: Develop high-performance embedded software to calibrate the temperature of each pixel of the imager. The temperature calibration is stable, reliable and highly accurate.

Real-time non-uniformity correction algorithm for the infrared focal plane array and its implementation: solving the non-uniformity problem of the focal plane array is particularly important. Alongside in-depth research on traditional non-uniformity correction, the project has developed a variety of adaptive correction algorithms to improve the correction accuracy of infrared imaging systems;

Image noise preprocessing technology: noise mixed into the image during acquisition and transmission will, if not removed in advance, degrade subsequent processing and display. The image is therefore filtered as a preprocessing step to remove this noise, laying the foundation for later image processing and display;

Embedded system development: the project uses a dual-core architecture based on FPGA+ARM, which improves data transfer efficiency and system stability and ensures the real-time performance of the entire infrared imaging system;

Infrared image enhancement technology: building on the correction algorithms, the project conducts image enhancement research to improve contrast and visual effect. Pseudo-color processing exploits the human eye's greater sensitivity to color than to gray levels, marking targets of different gray ranges with different colors to highlight target details and features, further improving target discrimination and display quality.
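For reference, the classic two-point calibration on which non-uniformity correction is based computes per-pixel gain and offset from responses to low- and high-temperature uniform blackbody sources. This is a sketch of the standard textbook method, not the project's adaptive algorithms:

```python
def two_point_calibration(v_low, v_high):
    """Classic two-point NUC calibration.
    v_low, v_high: per-pixel responses to uniform low- and
    high-temperature blackbody sources. Each pixel is mapped onto the
    array-mean response:
        gain   = (mean_high - mean_low) / (v_high - v_low)
        offset = mean_low - gain * v_low
    so that gain * v + offset equalizes the response across the array."""
    n = sum(len(row) for row in v_low)
    mean_low = sum(map(sum, v_low)) / n
    mean_high = sum(map(sum, v_high)) / n
    gain, offset = [], []
    for row_l, row_h in zip(v_low, v_high):
        g_row, o_row = [], []
        for vl, vh in zip(row_l, row_h):
            g = (mean_high - mean_low) / (vh - vl)
            g_row.append(g)
            o_row.append(mean_low - g * vl)
        gain.append(g_row)
        offset.append(o_row)
    return gain, offset
```

The resulting gain and offset arrays are exactly the two correction factors stored in RAM1 in the data-flow description above, applied later as one multiply-add per pixel.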
