About 70% of the information humans receive comes from vision, so image acquisition and processing equipment plays an important role in daily life. With the spread of the Internet and growing bandwidth, transmitting images over networks has become practical, and the range of applications keeps widening, bringing great convenience to daily life. At the same time, with the development of uncooled infrared technology, infrared thermal imaging systems have been widely adopted in both military and civilian fields. According to the 2006 Infrared Market Report released by Maxtech International, an authoritative research organization covering the US infrared market, the global civilian infrared thermal imager market grew at an average rate of 17% from 2003 to 2006, and demand continues to broaden.
An easy-to-use infrared video network transmission system therefore has good application prospects, with uses in forest fire prevention, surveillance systems, power equipment monitoring, aerospace, petrochemicals, construction, metallurgy, transportation, and border and coastal defense.
The embedded infrared network video transmission system consists of hardware and software. The hardware comprises a video acquisition module, a video codec module, an image processing module, a data compression module, and a network transmission module; the software is built on an embedded operating system.
Infrared network video transmission has good application prospects and commercial value. The core of this project is infrared image processing: although existing infrared processing algorithms are relatively mature, they still have notable shortcomings, so the project aims to improve on them.
The project plans to use a Virtex-II or Spartan-3E device and to integrate the infrared processing algorithm modules into the FPGA, including filtering, non-uniformity correction, grayscale stretching, and pseudo-color enhancement. Each algorithm module is designed in the ISE development environment, and the resulting user IP is integrated into the hardware system with XPS. The processed image or video data is then transmitted to the PC client over Ethernet using the built-in MAC and IP.
Project Information
1. Project Name: Infrared Video Network Transmission System Based on SOPC
2. Application areas: forest fire prevention, monitoring systems, power equipment, aerospace, petrochemicals, construction, metallurgy, transportation, border and coastal defense
3. System platform overview and resource analysis:
The hardware system diagram is as follows:
Figure 1 Infrared video network transmission system based on SOPC
This framework adopts an FPGA+ARM solution and can be divided into three parts: a front-end image acquisition module (CPLD, A/D, CCD, etc.), an FPGA module (using a Spartan-3A DSP XA3SD1800A), and an ARM module. The acquisition module captures infrared images and sends the grayscale data to the FPGA module for processing; the processed data is then sent to the ARM module for image control and display. In the preliminary design stage the ARM module can be omitted and the display driven directly by the FPGA (the ARM module interface is not refined in this framework). The overall framework is shown in Figure 1, in which the FPGA's internal image processing algorithm modules are detailed; the data processing flow is analyzed below.
The data flow inside the FPGA module is as follows. The MicroBlaze soft-core CPU interacts with the front-end acquisition module (CPLD, A/D, CCD) and moves the captured data into DDR through dual-port RAM0, which acts as a cache. Under MicroBlaze control, the collected image is loaded over the PLB bus and sent to the algorithm processing modules.
The algorithm chain first performs boundary expansion (this step can also be omitted). Boundary expansion mirrors the border data of a frame, i.e. the border pixels are simply stored again, so no arithmetic is required. After expansion the data is cached in RAM0 and mean filtering can begin. Mean filtering requires 8 additions and 1 multiplication per pixel for noise reduction, and the 384×288 pixels can be processed in parallel. The filtered data is written to RAM1 for the next step, non-uniformity correction.
For non-uniformity correction, the correction gain and correction offset are generated by high- and low-temperature calibration before temperature measurement; the two correction factors can be computed in parallel and are stored in RAM1 so that they can be loaded directly during correction. Non-uniformity correction also processes the 384×288 pixels in parallel, with 1 multiplication and 1 addition per pixel. The corrected data is written to RAM2 for temperature calibration and grayscale stretching.
Temperature calibration and grayscale stretching can run in parallel. Because both need the whole frame, RAM2 can be kept small by storing the frame in DDR and reading it back when needed. Grayscale stretching first gathers histogram statistics, counting the number of pixels at each gray level to find the effective gray range of the image, i.e. the minimum min and the maximum max. These values are substituted into the slope formulas of the three segments; once the slopes are known, the stretch itself takes 2 comparisons, 1 subtraction, 1 multiplication and 1 addition per pixel, and the 384×288 pixels can again be processed in parallel. The stretched data is written to RAM3 for pseudo-color processing.
Pseudo-color processing converts the gray value of each pixel into its corresponding R, G and B components, so the 384×288 pixels can be processed in parallel, and the R, G and B conversions of each pixel can also be computed in parallel. The converted data is written to dual-port RAM1, then read out by the ARM module and stored on the SD card for display control; during early development of the FPGA module it can also be sent directly to the LCD.
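As a point of reference for the filtering and correction stages described above, the following is a minimal behavioral sketch in C. It is not the FPGA implementation: the 3×3 neighbourhood, the data widths, and all array and function names are assumptions made for illustration, and the floating-point gain/offset would typically be fixed-point in hardware.

```c
/* Behavioral C sketch of the mean-filter and non-uniformity correction
 * (NUC) stages.  Names, types and the 3x3 neighbourhood are assumptions. */
#include <stdint.h>

#define IMG_W 384
#define IMG_H 288

/* 3x3 mean filter: the 9-pixel neighbourhood sum (8 additions once the
 * first value seeds the sum) followed by one multiply/divide by 1/9.
 * The input frame is assumed to have been boundary-expanded by
 * mirroring, so in[y+dy][x+dx] is always valid. */
static void mean_filter_3x3(const uint16_t in[IMG_H + 2][IMG_W + 2],
                            uint16_t out[IMG_H][IMG_W])
{
    for (int y = 0; y < IMG_H; y++) {
        for (int x = 0; x < IMG_W; x++) {
            uint32_t sum = 0;
            for (int dy = 0; dy <= 2; dy++)
                for (int dx = 0; dx <= 2; dx++)
                    sum += in[y + dy][x + dx];
            out[y][x] = (uint16_t)(sum / 9);
        }
    }
}

/* Non-uniformity correction: one multiplication and one addition per
 * pixel, y = gain * x + offset.  gain[] and offset[] are the two
 * correction factors produced by the high/low temperature calibration
 * and pre-loaded into RAM1 in the FPGA design. */
static void nuc_apply(const uint16_t in[IMG_H][IMG_W],
                      const float gain[IMG_H][IMG_W],
                      const float offset[IMG_H][IMG_W],
                      uint16_t out[IMG_H][IMG_W])
{
    for (int y = 0; y < IMG_H; y++)
        for (int x = 0; x < IMG_W; x++)
            out[y][x] = (uint16_t)(gain[y][x] * in[y][x] + offset[y][x]);
}
```

In the FPGA these per-pixel operations are unrolled across the frame rather than executed in a sequential loop, which is what makes the parallel processing of the 384×288 pixels possible.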
Because the FPGA supports extensive parallel computation, the image processing can exploit not only parallelism among pixels within a single algorithm module but also parallelism between algorithm modules: for example, while one frame is being filtered, the previous frame can be undergoing correction at the same time.
Each algorithm module is implemented as follows:
Figure 2 Correction coefficient
Figure 3 Correction offset
Figure 4 Non-uniformity correction
Figure 5 Grayscale stretching
Figure 6 Mean filtering
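To complement the module diagrams above, here is a behavioral C sketch of the grayscale-stretching and pseudo-color stages described in the data-flow analysis. It assumes a simple three-segment mapping that clips below min and above max and stretches linearly in between, and an arbitrary illustrative color lookup table; the actual slope formulas and color table used in the project may differ.

```c
/* Behavioral C sketch of grayscale stretching and pseudo-color mapping.
 * The three-segment mapping and the LUT-based colouring are assumptions. */
#include <stdint.h>

#define IMG_W 384
#define IMG_H 288
#define GRAY_LEVELS 256

/* Histogram statistics: count each gray level, then take the effective
 * gray range [min, max] of the frame. */
static void histogram_min_max(const uint8_t img[IMG_H][IMG_W],
                              uint8_t *min_out, uint8_t *max_out)
{
    uint32_t hist[GRAY_LEVELS] = {0};
    for (int y = 0; y < IMG_H; y++)
        for (int x = 0; x < IMG_W; x++)
            hist[img[y][x]]++;

    uint8_t lo = 0, hi = GRAY_LEVELS - 1;
    while (lo < hi && hist[lo] == 0) lo++;
    while (hi > lo && hist[hi] == 0) hi--;
    *min_out = lo;
    *max_out = hi;
}

/* Grayscale stretching: per pixel, 2 comparisons, 1 subtraction,
 * 1 multiplication and 1 addition. */
static void gray_stretch(uint8_t img[IMG_H][IMG_W],
                         uint8_t gmin, uint8_t gmax)
{
    float slope = (gmax > gmin) ? 255.0f / (float)(gmax - gmin) : 1.0f;

    for (int y = 0; y < IMG_H; y++) {
        for (int x = 0; x < IMG_W; x++) {
            uint8_t g = img[y][x];
            if (g <= gmin)      img[y][x] = 0;    /* first segment  */
            else if (g >= gmax) img[y][x] = 255;  /* third segment  */
            else                img[y][x] =
                (uint8_t)(slope * (float)(g - gmin)); /* middle segment */
        }
    }
}

/* Pseudo-color: each gray value becomes an (R, G, B) triple.  A lookup
 * table keeps the per-pixel work to three independent reads, matching
 * the remark that R, G and B can be computed in parallel. */
static void pseudo_color(const uint8_t gray[IMG_H][IMG_W],
                         uint8_t rgb[IMG_H][IMG_W][3],
                         const uint8_t lut[GRAY_LEVELS][3])
{
    for (int y = 0; y < IMG_H; y++)
        for (int x = 0; x < IMG_W; x++)
            for (int c = 0; c < 3; c++)
                rgb[y][x][c] = lut[gray[y][x]][c];
}
```

In the FPGA, the color lookup table would naturally sit in block RAM, so the three component reads for a pixel cost no arithmetic at all.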
4. Innovations and key technologies:
Accurate temperature calibration technology: high-performance embedded software is developed to calibrate the temperature of each pixel of the imager; the calibration is stable, reliable, and highly accurate.
Real-time non-uniformity correction algorithm for the infrared focal plane array and its implementation: solving the non-uniformity of the focal plane array is particularly important. Alongside in-depth study of traditional correction methods, the project develops several adaptive correction algorithms to improve the correction accuracy of the infrared imaging system (a sketch of the classical two-point calibration appears after this list);
Image noise preprocessing technology: noise introduced during acquisition and transmission degrades later processing and display if it is not removed first, so the image is filtered as a preprocessing step, laying the foundation for subsequent image processing and display;
Embedded system development: the project uses a dual-core FPGA+ARM architecture, which improves data transfer efficiency and system stability and gives the whole infrared imaging system real-time performance;
Infrared image enhancement technology: building on the correction algorithm, the project studies image enhancement to raise contrast and further improve the visual quality of the image. Pseudo-color processing exploits the eye's greater sensitivity to color than to gray levels, marking targets of different gray levels with different colors to highlight target details and features, further improving target discrimination and display quality.
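For the two-point (high/low temperature) calibration referred to above, one common formulation is sketched below in C: each pixel's gain and offset are chosen so that its responses at the two blackbody temperatures map onto the corresponding frame averages. The use of frame means as calibration targets and all names here are assumptions for illustration; the project's adaptive algorithms go beyond this baseline.

```c
/* One common form of two-point NUC calibration, sketched in C.
 * Targets (frame means) and names are illustrative assumptions. */
#include <stdint.h>

#define IMG_W 384
#define IMG_H 288
#define NPIX  (IMG_W * IMG_H)

static void two_point_calibration(const uint16_t x_low[NPIX],   /* response at low-temperature blackbody  */
                                  const uint16_t x_high[NPIX],  /* response at high-temperature blackbody */
                                  float gain[NPIX],
                                  float offset[NPIX])
{
    /* Calibration targets: the frame-average responses at the two
     * reference temperatures. */
    double sum_low = 0.0, sum_high = 0.0;
    for (int i = 0; i < NPIX; i++) {
        sum_low  += x_low[i];
        sum_high += x_high[i];
    }
    float y_low  = (float)(sum_low  / NPIX);
    float y_high = (float)(sum_high / NPIX);

    /* Per-pixel gain and offset so that gain*x + offset hits the
     * targets at both calibration points. */
    for (int i = 0; i < NPIX; i++) {
        float dx = (float)x_high[i] - (float)x_low[i];
        gain[i]   = (dx != 0.0f) ? (y_high - y_low) / dx : 1.0f;
        offset[i] = y_low - gain[i] * (float)x_low[i];
    }
}
```

The resulting gain and offset arrays correspond to the two correction factors that the data-flow analysis stores in RAM1 for direct loading during non-uniformity correction.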