A new method for video enhancement


A new method for video fog penetration

Fog is a visible aggregation of large numbers of fine water droplets (or ice crystals) suspended in the atmosphere close to the ground, reducing visibility to less than 1 km.

" Fog " is the biggest enemy of video surveillance systems based on the principle of optical imaging. Fog greatly shortens the effective video surveillance distance, making the image blurry. In severe cases, the image is completely white, making the video surveillance system useless.

In recent years, with global warming, accelerating urbanization, and rising population density, fog in cities has become increasingly severe. Urban fog seriously degrades the video environment for security, traffic, and environmental monitoring, and in extreme cases can paralyze a city's security system. Because of the respiration of green plants, forests and other large vegetated areas are prone to widespread fog in the early morning and evening; this fog greatly hampers forest safety and fire-prevention work and robs the usual telephoto camera systems of their effectiveness. Because of water-vapor evaporation, coasts, ports, and rivers stay humid year round, producing persistent water fog that cannot be eliminated and makes effective harbor monitoring difficult. For many years, optical, electronic, and software experts have worked on removing fog from video images, but experience has shown that no single approach can solve the problem on its own.

The new fog-penetrating solution takes a highly integrated design approach: dedicated high-precision optical conversion devices and dedicated imaging devices are built into the camera and combined with dedicated video enhancement equipment developed with the latest technology, forming an integrated optical-electronic, expert-level video fog-penetration monitoring solution that attacks the problem from the optical, electronic, and software sides.

The following introduces the electronic part of the defogging solution:

1. Background of Video Enhancement

Visual information is the main source of information for humans: roughly 70% of what we perceive comes through the eyes. With the rapid development of multimedia technology, video images are widely used in fields ranging from broadcasting and medicine to security monitoring, parking management, the military, and the life sciences. Advances in video acquisition and display technology have raised expectations for image quality, yet every stage of an imaging system (capture, copying, scanning, transmission, display, and so on) degrades the image to some degree.

For example, some outdoor monitoring systems work properly only in clear weather. In fog, dust, or low light, image contrast drops sharply and useful information can no longer be extracted. Moreover, prolonged viewing of low-quality video strains the eyes and can cause visual fatigue or even dizziness. In severe weather such as heavy fog, heavy rain, or dust storms, the contrast and color of outdoor scenes are altered or degraded and many image features are obscured or blurred; the resulting degraded images create great difficulties for all kinds of monitoring. To make full use of surveillance video, the images therefore need to be enhanced.

In military reconnaissance and surveillance, correct command decisions and battlefield success demand broad, timely, and accurate reconnaissance supported by advanced technology, so the quality of reconnaissance video is especially important: degraded video leads to errors in recognition and interpretation, with potentially serious consequences. Video enhancement technology arose to meet these needs.

2. Basic Principles of Video Enhancement Algorithm (Retinex Algorithm)

Introduction to Retinex Algorithm

Retinex (a contraction of "retina" and "cortex") is an image enhancement theory modeled on the human visual system and grounded in scientific experiment and analysis. Its basic model was first proposed by Edwin Land in 1971 as a theory of color vision, and it provides the theoretical basis for image enhancement methods built on color constancy. The core idea of Retinex theory is that the color of an object is determined by its ability to reflect long-wave (red), medium-wave (green), and short-wave (blue) light, not by the absolute intensity of the reflected light; an object's perceived color remains consistent even under non-uniform illumination. In other words, Retinex theory rests on color constancy.

Unlike traditional enhancement algorithms such as linear or nonlinear transformations and image sharpening, which each enhance only one type of image feature (for example compressing the dynamic range or emphasizing edges), Retinex balances dynamic range compression, edge enhancement, and color constancy, so it can adaptively enhance many kinds of images. Because of these properties, Retinex-based algorithms are widely used.

Among the many algorithms based on Retinex, the Single-Scale Retinex (SSR) algorithm and the Multi-Scale Retinex (MSR) algorithm are the most representative and mature algorithms.

Principle of Single-Scale Retinex (SSR) Algorithm

According to Land's theory, a given image S(x,y) can be decomposed into two component images: the reflectance image R(x,y) and the illumination (incident light) image L(x,y). The principle is illustrated below:

Figure 1: Retinex principle diagram

For each point (x, y) in the observed image S, the formula can be expressed as:

S(x,y) = R(x,y) · L(x,y)    (1)

According to Retinex theory, an object's color is determined by its ability to reflect light, which is an intrinsic property of the object and independent of the absolute intensity of the light source. (For two neighboring pixels under the same illumination, S1/S2 = R1/R2, so their ratio depends only on reflectance and not on the light source.) Therefore, by computing the relative brightness relationships between pixels, each pixel in the image can be corrected so that its color is determined by reflectance rather than illumination.

The single-scale Retinex (SSR) algorithm is expressed in the logarithmic domain as:
r(x,y) = log R(x,y) = log S(x,y) − log L(x,y)    (2)

According to formula (2), the key to Retinex-based image enhancement is estimating the illumination image L(x,y) from the information available in the original image. Computing the illumination image directly from the original image is, however, a mathematically ill-posed (singular) problem, so the illumination can only be approximated. Over the history of the Retinex algorithm, inverse-square, exponential, and Gaussian surround functions have all been used, but for single-scale Retinex Jobson showed that a Gaussian convolution function gives more localized and accurate processing of the source image and therefore better enhancement. It can be expressed as:
F(x,y) = λ · exp(−(x² + y²)/c²)    (3)
where λ is a normalization constant, c is the filter (surround) radius, and F satisfies:
∬ F(x,y) dx dy = 1    (4)

The smaller c is, the more the grayscale dynamic range is compressed; the larger c is, the more the image is sharpened. The illumination (brightness) image can then be estimated as:
L(x,y) = F(x,y) * S(x,y)    (5)
where * denotes convolution. The single-scale Retinex (SSR) output can thus be expressed as:
r(x,y) = log S(x,y) − log [F(x,y) * S(x,y)]    (6)
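To make the procedure in equations (2) to (6) concrete, the following is a minimal Python sketch of single-scale Retinex. It is an illustration only, not the implementation used in any particular product: it assumes OpenCV and NumPy are available, the Gaussian blur's sigma stands in for the surround scale c, and the file name in the usage comment is hypothetical.

```python
import cv2
import numpy as np

def single_scale_retinex(image_bgr, sigma=80.0):
    """Single-scale Retinex: r(x,y) = log S(x,y) - log[F(x,y) * S(x,y)], per channel."""
    img = image_bgr.astype(np.float64) + 1.0          # offset avoids log(0)
    # Estimate the illumination image L(x,y) with a Gaussian surround (eq. 5);
    # sigma plays the role of the surround scale c in eqs. (3)-(4).
    illumination = cv2.GaussianBlur(img, (0, 0), sigma)
    # Log-domain reflectance estimate (eq. 6)
    retinex = np.log(img) - np.log(illumination)
    # Linearly stretch each channel back to 0-255 for display
    out = np.empty_like(retinex)
    for ch in range(retinex.shape[2]):
        band = retinex[:, :, ch]
        out[:, :, ch] = (band - band.min()) / (band.max() - band.min() + 1e-12) * 255.0
    return out.astype(np.uint8)

# Hypothetical usage:
# enhanced = single_scale_retinex(cv2.imread("foggy_frame.png"), sigma=80.0)
```

The choice of the surround scale governs the trade-off described above, which is what motivates the multi-scale variant discussed later.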

The current status of video fog-penetration (dehazing) technology and its application in special environments


1. Overview

On ships, boats, and aircraft, the sighting and observation system plays a vital role in perceiving the surrounding situation. Such systems generally consist of a CCD camera and an infrared imaging system. Harsh marine weather such as fog, humidity, rain, and snow severely degrades the image quality of both the CCD and the infrared imager: image contrast drops, and distant targets become blurred and hard to distinguish, impairing situational awareness.

Improving image contrast through image processing algorithms is also known as video fog-penetration (dehazing) technology. It has been widely used abroad, especially in observation and aiming systems in the United States. A before-and-after comparison of the image processing is shown below:


From this comparison it is clear that dehazing the video greatly improves contrast: the originally blurred ship becomes clearly visible, extending the observation distance of the sighting system and improving its situational awareness. Video fog-penetration technology therefore has good application prospects in the sighting systems of ships and aircraft. Because of limitations in algorithms and hardware implementation, however, its application in China has only just begun, and mature commercial products are rare.

The ocean environment is extremely harsh, with fog, rain, and water vapor common, while the sighting system must detect distant, small, fast-moving targets in time; failing to spot a target promptly can put one at a serious disadvantage against an adversary. Equipping such video dehazing devices to improve the observation capability of the sighting system is therefore highly necessary.

2. Introduction to Fog-Penetration (Dehazing) Technology

Video fog-penetration (dehazing) technology generally refers to making images that are hazy because of fog, water vapor, or dust clear again: emphasizing features of interest, suppressing those that are not, and improving image quality and information content. The enhanced image provides good conditions for subsequent processing. Broadly speaking, such enhancement falls into spatial-domain and frequency-domain methods, but both have shortcomings in adapting to different kinds of images. In the 1970s, the American physicist Land and colleagues proposed the Retinex image enhancement method, a processing model based on human visual perception. It can compress the dynamic range of an image and reveal details that would otherwise be lost, but the algorithm is complex and hard to implement in engineering terms; in particular, its heavy computation long made real-time video enhancement impractical. With improvements in hardware performance, this broadly applicable enhancement algorithm can finally be turned into a practical engineering product. This is the industry's first hardware implementation of the Retinex algorithm.

Below, actual before-and-after pictures demonstrate the effect of the Retinex algorithm and its broad adaptability to different scenes.

Figure 1 Foggy weather enhancement comparison 1

Figure 2 Foggy weather enhancement comparison 2

Figure 3 Underwater

Figure 4 Underwater

Figure 5 Dust weather

Figure 6 Haze weather

Figure 7 Dark weather

These comparisons show clearly that the enhanced images are sharper, details that were originally hard to make out become distinct, and the algorithm generalizes very well across different meteorological environments.

The Retinex algorithm models how the human visual system perceives and adjusts the color and brightness of objects. It explains why perceived color does not correspond strictly to the wavelength and intensity of the light reaching the eye, a phenomenon that general color theory cannot account for. Through extensive experiments, Land showed that the perceived surface color of an object does not change with lighting conditions, i.e. color constancy. Simply put, color constancy means that whether in the midday sun, under incandescent light, or in dim lighting, humans perceive the same object as having the same color. For this reason, image processing should remove incidental, non-essential influences such as light intensity and uneven illumination, retaining only the object's essential reflective properties, such as reflectance. Images processed on this basis achieve good results in edge sharpening, dynamic range compression, and color constancy.

The basic idea of Retinex theory is to treat the original image as the product of an illumination image and the object's reflectance. The illumination image determines the dynamic range the pixels can reach, while the reflectance determines the intrinsic properties of the image. Retinex-based enhancement therefore removes or reduces the influence of the illumination image in order to retain the essential reflectance. Compared with other enhancement methods, the Retinex algorithm offers sharpening, color constancy, strong dynamic range compression, and high color fidelity.

Most current video enhancers use only a global Retinex algorithm: they compute the ratio between the grayscale values of neighboring pixels in the logarithmic domain to obtain their relative brightness relationship, correct the original pixel values using this relationship, and finally stretch the corrected values linearly to produce the enhanced image, so the contrast of the result is not high. The CASEVision VE9901 video enhancer uses an advanced multi-scale Retinex enhancement algorithm with strong general applicability, and additionally provides optimized logarithmic histogram equalization and several noise-filtering algorithms. Built on an embedded DSP hardware architecture, it is compact, low-power, and high-performance; it processes images in real time, automatically adapts to PAL and NTSC video, and has extremely low latency of no more than one frame (40 ms for PAL, 33 ms for NTSC). It also supports both full-screen enhancement and local-window enhancement, with the size and position of the local window dynamically adjustable.
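To illustrate the multi-scale idea only (not the VE9901's actual algorithm or parameters), here is a minimal Python sketch of multi-scale Retinex that averages single-scale Retinex outputs computed at several Gaussian scales; the scale values used are common defaults from the literature and are assumptions here.

```python
import cv2
import numpy as np

def multi_scale_retinex(image_bgr, sigmas=(15.0, 80.0, 250.0)):
    """Multi-scale Retinex: equally weighted sum of single-scale Retinex outputs."""
    img = image_bgr.astype(np.float64) + 1.0          # offset avoids log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        # Each scale estimates the illumination with a different Gaussian surround
        illumination = cv2.GaussianBlur(img, (0, 0), sigma)
        msr += (np.log(img) - np.log(illumination)) / len(sigmas)
    # Linearly stretch each channel back to 0-255 for display
    out = np.empty_like(msr)
    for ch in range(msr.shape[2]):
        band = msr[:, :, ch]
        out[:, :, ch] = (band - band.min()) / (band.max() - band.min() + 1e-12) * 255.0
    return out.astype(np.uint8)
```

Combining a small, medium, and large scale lets the result inherit the strong dynamic range compression of the small scale while moderating its artifacts with the larger scales.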

