Brief analysis: How to use part of the light spectrum to achieve fog-penetrating monitoring

Publisher: 梦幻之光 | Last updated: 2011-12-14 | Source: 中国安防展览网

In recent years, as end users' requirements for image quality have risen, the camera market has gradually moved away from homogeneous, low-end price competition toward differentiated, specialized products. Market segments have seen the emergence of professional products such as ultra-low-illumination cameras, ultra-wide-dynamic-range cameras, license plate capture cameras, and fog-penetrating cameras to meet the needs of end users in different fields. Among them, fog-penetrating cameras carry the highest technical content and the largest profit margin. With extreme weather becoming more frequent in recent years and prolonged fog occurring around the world, the use of fog-penetrating cameras has expanded considerably.

Development prospects of fog penetration technology

On ships, boats, and aircraft, the sighting and observation system plays a very important role in perceiving the surrounding situation. Such a system generally consists of a CCD camera and an infrared imaging system. Harsh marine weather such as fog, humidity, rain, and snow seriously degrades the image quality of both the CCD and the infrared imaging system, mainly in the form of reduced image contrast and distant targets becoming blurred and hard to distinguish, which in turn weakens the ability to perceive the surrounding situation.

Image contrast can be improved through image processing algorithms, an approach also known as video dehazing technology. It has been widely applied abroad, especially in observation and aiming systems in the United States. A before-and-after comparison of the processed image makes the effect clear.

In such a comparison, it is obvious that dehazing processing greatly improves image contrast: the originally blurred ship becomes clearly visible, which increases the observation distance of the sighting system and improves its ability to perceive the surrounding situation. Video dehazing technology therefore has good application prospects in the sighting systems of ships and aircraft. However, limited by algorithms and hardware implementation technology, the application of video dehazing has only just begun in China, and mature commercial products are rare.

The ocean environment is extremely harsh, with fog, rain, and water vapor common, and the sighting system must be able to detect distant, small, fast-moving targets in a timely manner. A target that is not discovered in time puts the observer at a disadvantage against an adversary. Equipping the sighting system with such video dehazing equipment to improve its observation capability is therefore very necessary.

Lens defogging technology

In recent years, video surveillance has become an essential security measure in every industry. However, traditional video surveillance equipment shares a common weakness: its performance is very poor at night and in fog, which are precisely the conditions under which most incidents occur. For targets at even slightly longer distances, almost no usable information is captured.

The principle of optical fog penetration is this: within the invisible (near-infrared) part of the spectrum there is a band of light that can pass through fog, but because its wavelength differs from that of visible light, the lens must be adjusted so that this band can be brought into focus, and the camera must be redesigned to image it. Since this invisible light carries no corresponding visible color information, the picture presented on the monitor is black and white. Shooting through clouds, fog, or water vapor is in effect shooting through two lenses (the water droplets and the actual lens); apart from the red (R) component, which can still be focused onto the CCD imaging surface, the green and blue (G and B) components of the RGB light cannot be projected onto the CCD correctly, so a lens operating in normal mode cannot obtain a clear image in cloud, fog, or water vapor.
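To make the refocusing requirement concrete, the short sketch below is a toy calculation (not from the original article) of the chromatic focal shift of a single thin lens, using the Cauchy dispersion model n(λ) = A + B/λ² with illustrative coefficients roughly typical of a common crown glass. Real fog-penetration lenses are multi-element, IR-corrected designs, but the same effect is what forces them to refocus for near-infrared light.

# Toy model only: chromatic focal shift of a single thin lens.
# The Cauchy coefficients (A, B) are illustrative assumptions, not from the article.

def focal_length_mm(wavelength_um, A=1.5046, B=0.0042,
                    f_design_mm=750.0, design_wavelength_um=0.55):
    """Focal length at a given wavelength for a thin lens whose nominal
    focal length is f_design_mm at the design wavelength (green light)."""
    n_design = A + B / design_wavelength_um ** 2
    n = A + B / wavelength_um ** 2
    # Lensmaker's equation: 1/f = (n - 1) * K, where K depends only on the
    # surface curvatures, so f scales with (n_design - 1) / (n - 1).
    return f_design_mm * (n_design - 1.0) / (n - 1.0)

for name, lam in [("blue", 0.45), ("green", 0.55), ("red", 0.65), ("near-IR", 0.85)]:
    f = focal_length_mm(lam)
    print(f"{name:8s} {lam:.2f} um: f = {f:7.2f} mm, shift = {f - 750.0:+6.2f} mm")

For an uncorrected 750 mm singlet with these assumed coefficients, the near-infrared focal plane lands roughly a centimeter behind the green focal plane, which is why a fog-penetration mode must either refocus or rely on IR-corrected optics before the near-infrared image becomes sharp.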

In the past, when CCTV lenses were limited to focal lengths below 300 mm, the observation distance was generally within 1 km, an application with relatively low demands on visibility. Today, however, focal lengths have reached 750 mm, and the effect of fog on the monitored image can no longer be ignored. This is especially true for long-range monitoring of highways, forest fire prevention, oil fields, and seaside ports, environments that are particularly prone to fog and that pose new challenges for 24-hour uninterrupted monitoring.

In response, a few manufacturers with genuine design and R&D capability have developed lenses with a defogging function and successfully brought finished products to market. This technology has greatly broadened the scope of video surveillance applications and is another classic example of human ingenuity overcoming the natural environment. However, some manufacturers that cannot produce defogging lenses sell ordinary products labeled as defogging lenses and claim they have a defogging function, which is extremely irresponsible. Such products will of course fail in actual testing and will eventually be eliminated from the market, but they create obstacles and waste a great deal of time for users who need this function when selecting products.

Video dehazing technology

Video dehazing technology generally refers to making images that are hazy due to fog, water vapor, or dust clear again: emphasizing features of interest in the image, suppressing those that are not of interest, improving image quality, and enriching the information content. The enhanced image provides good conditions for subsequent use. Broadly speaking, enhancement methods fall into two categories, spatial-domain and frequency-domain methods, but both still have shortcomings in how well they adapt to different images. In the 1970s, the American physicist Edwin Land and colleagues proposed the Retinex image enhancement method, an image processing model based on human visual perception. It can compress the dynamic range of an image and reveal details that would otherwise be lost. However, the algorithm is complex and difficult to implement in engineering terms; in particular, its heavy computational load long made real-time enhancement of live video impractical. With improvements in hardware performance, this broadly applicable image enhancement algorithm can finally be turned into a practical engineering product, the industry's first hardware implementation of the Retinex algorithm.
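As a point of comparison with the Retinex approach, the sketch below shows one of the classic spatial-domain methods mentioned above, global histogram equalization of the luminance channel. It is a generic illustration using OpenCV, not the algorithm of any product discussed in this article, and the input file name is hypothetical.

import cv2

def equalize_luminance(bgr_frame):
    """Equalize the Y (luma) channel of a BGR frame and return the result."""
    ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    y_eq = cv2.equalizeHist(y)  # stretch the global brightness histogram
    return cv2.cvtColor(cv2.merge([y_eq, cr, cb]), cv2.COLOR_YCrCb2BGR)

if __name__ == "__main__":
    frame = cv2.imread("hazy_frame.jpg")  # hypothetical input image
    if frame is not None:
        cv2.imwrite("hazy_frame_equalized.jpg", equalize_luminance(frame))

Because the stretch is computed from one global histogram, a method like this tends to over- or under-enhance scenes whose haze density varies across the frame, which is the kind of adaptability problem that motivates the Retinex family of algorithms.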

The Retinex algorithm is a model of how the human visual system perceives and adjusts for the color and brightness of objects. The model explains phenomena, which general color theory cannot, in which the wavelength and intensity of the light do not correspond directly to the colors the human eye perceives. Land demonstrated through a large number of experiments that perceived surface color does not change with lighting conditions, a property known as color constancy. Simply put, color constancy means that whether under the midday sun, under incandescent light, or in dim lighting, humans perceive the same object as having the same color. For this reason, when processing an image, uncertain and non-essential influences such as light intensity and uneven illumination should be removed, and only the object's essential reflective properties, such as reflectance, should be retained. Images processed in this way can achieve good results in edge sharpening, dynamic range compression, and color constancy.

The basic idea of Retinex theory is to regard the original image as the product of an illumination image and the object's reflectance properties. The illumination image directly determines the dynamic range the pixels of an image can reach, while the reflectance determines the image's intrinsic properties. Retinex theory therefore seeks to remove or reduce the influence of the illumination image in the original image so that the essential reflectance properties are retained. Compared with other image enhancement methods, the Retinex algorithm offers sharpening, color constancy, strong dynamic range compression, and high color fidelity.
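The decomposition just described, original image I(x, y) = illumination L(x, y) × reflectance R(x, y), leads directly to the common single-scale Retinex formulation in which the illumination is estimated with a Gaussian blur and removed in the logarithmic domain. The sketch below is a minimal illustrative implementation using NumPy and OpenCV, not the algorithm of any particular commercial enhancer.

import cv2
import numpy as np

def single_scale_retinex(image, sigma=80.0):
    """Minimal single-scale Retinex: log(I) - log(Gaussian-blurred I)."""
    img = image.astype(np.float64) + 1.0                  # avoid log(0)
    illumination = cv2.GaussianBlur(img, (0, 0), sigma)   # estimated L(x, y)
    reflectance = np.log(img) - np.log(illumination)      # remove illumination
    # Linear stretch of the log-reflectance back to a displayable 0..255 range.
    lo, hi = reflectance.min(), reflectance.max()
    return np.uint8(255.0 * (reflectance - lo) / (hi - lo + 1e-12))

The Gaussian scale sigma controls how much of the scene is treated as illumination: small values emphasize local detail, while large values preserve the overall tonal balance of the frame.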

At present, most video enhancers use only the global Retinex image enhancement algorithm: they compute the ratio between the grayscale values of adjacent pixels in the logarithmic domain to obtain the relative brightness relationship between neighboring pixels, correct the original pixel grayscale values using that relationship, and finally apply a linear stretch to the corrected values to obtain the enhanced image. As a result, the contrast of the enhanced image is not high. The CASEVision VE9901 video enhancer adopts a more advanced multi-scale Retinex image enhancement algorithm with strong general applicability, and additionally provides optimized logarithmic histogram equalization and a variety of noise filtering algorithms. Built on an embedded DSP hardware architecture, it offers small size, low power consumption, and high performance. It processes images in real time, automatically adapts to PAL and NTSC video, and has extremely low latency of no more than one frame, i.e., 40 ms for PAL video and 33 ms for NTSC video. It also supports both full-screen enhancement and local window enhancement, and the size and position of the local enhancement window can be adjusted dynamically.
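Multi-scale Retinex, the approach attributed to the VE9901 above, averages single-scale outputs computed at several Gaussian scales so that local detail and overall tone are balanced. The sketch below is a software-only illustration of that idea, together with a local enhancement window of the kind described above; the product's actual DSP implementation is not public, so none of this should be read as its real algorithm.

import cv2
import numpy as np

def multi_scale_retinex(image, sigmas=(15.0, 80.0, 250.0)):
    """Average log-domain Retinex outputs over several Gaussian scales."""
    img = image.astype(np.float64) + 1.0
    acc = np.zeros_like(img)
    for sigma in sigmas:
        blurred = cv2.GaussianBlur(img, (0, 0), sigma)  # illumination estimate at this scale
        acc += np.log(img) - np.log(blurred)
    acc /= len(sigmas)
    lo, hi = acc.min(), acc.max()
    return np.uint8(255.0 * (acc - lo) / (hi - lo + 1e-12))

def enhance_window(frame, x, y, w, h):
    """Enhance only a user-selected rectangle, leaving the rest of the frame untouched."""
    out = frame.copy()
    out[y:y + h, x:x + w] = multi_scale_retinex(frame[y:y + h, x:x + w])
    return out

A hardware enhancer additionally has to keep the whole pipeline inside a one-frame budget (40 ms for PAL, 33 ms for NTSC), which is why such products are built on embedded DSP hardware rather than run as general-purpose software.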
