Low illumination image enhancement method based on multi-exposure image generation


Low-light images reduce the robustness of many computer vision tasks, such as image recognition and target tracking. To obtain enhanced images with richer detail and a larger dynamic range, a low-light image enhancement method based on multi-exposure image generation is proposed. By analyzing multi-exposure images taken in real scenes, the method finds that the pixel values of images with different exposure times are linearly related, so the idea of orthogonal decomposition can be applied to multi-exposure image generation. The generated multi-exposure images follow the physical imaging mechanism and are therefore close to real images. After decomposing the original image into an illumination invariant and an illumination component, different illumination components are generated by a designed adaptive algorithm and then recombined with the illumination invariant to obtain multi-exposure images. Finally, a multi-exposure image fusion method produces an enhanced image with a larger dynamic range. The fusion result stays consistent with the input image, so the final enhanced image effectively preserves the color of the original image with high naturalness. Experiments on a public dataset of real low-light images show that, compared with existing state-of-the-art algorithms, the structural similarity between the enhanced image and the reference image is improved by 2.1% and the feature similarity by 4.6%; the enhanced image is closer to the reference image and more natural.

Images captured under low-light conditions or with insufficient camera exposure time are called low-light images. They usually exhibit low brightness, low contrast, and blurred structural information, which hinders many robot vision tasks such as face recognition, target tracking [1], autonomous driving [2], and feature extraction [3]. Low-light image enhancement can not only improve the visual quality of an image but also improve the robustness of downstream robot vision algorithms, which gives it important practical value.

According to whether they rely on large amounts of training data, existing low-light image enhancement algorithms can be divided into two categories: traditional methods and deep-learning-based methods. Among traditional methods, histogram-based methods [4-5] improve contrast by adjusting the image histogram. They are simple and efficient but lack a physical mechanism, often over- or under-enhance the image, and significantly amplify noise. Methods based on Retinex theory [6] first decompose the image into an illumination component and a reflection component, then enhance them separately. Wang et al. [7] designed a low-pass filter to decompose the image into a reflection image and an illumination image, and used a double logarithmic transformation to balance naturalness and image detail. Guo et al. [8] took the per-pixel maximum of the RGB channels of the low-light image as an initial illumination map, refined it with structural prior information, adjusted its brightness with gamma correction, and then recombined the adjusted illumination with the reflection image to obtain the final enhancement result. Ren et al. [9] proposed a noise-suppressed sequential decomposition model that estimates the illumination and reflection components separately; during the decomposition, each component is spatially smoothed and a weight matrix is used to suppress noise and improve contrast. The estimated reflection component is then combined with the gamma-corrected illumination component to obtain an enhanced image, achieving low-light enhancement and joint denoising.
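As an illustration of the Retinex-style pipeline described above (per-pixel RGB maximum as the initial illumination map, gamma correction, recombination with the reflectance), here is a minimal NumPy sketch; the structural-prior refinement of Guo et al. [8] is omitted, and the gamma value is an illustrative assumption:

```python
import numpy as np

def retinex_style_enhance(img, gamma=0.7, eps=1e-3):
    """Minimal sketch: img is a float32 array in [0, 1] with shape (H, W, 3).
    The refinement of the illumination map used by Guo et al. [8] is omitted."""
    illum = np.clip(img.max(axis=2, keepdims=True), eps, 1.0)  # initial illumination
    illum_adj = np.power(illum, gamma)                         # gamma-corrected brightness
    reflect = img / illum                                      # Retinex reflectance
    return np.clip(reflect * illum_adj, 0.0, 1.0)              # recombine components
```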

Deep-learning-based methods [10-13] achieve good results in low-light image enhancement by training on large amounts of data. Lore et al. [10] first proposed a deep autoencoder for contrast enhancement and noise removal in low-light images. Wei et al. [11] combined the Retinex model with deep learning for low-light image enhancement. Jiang et al. [12] used a generative adversarial approach to build an enhancement model that does not require paired training data. Guo et al. [13] reformulated low-light image enhancement as an image-specific curve estimation task solved with a deep network. Training these methods usually consumes considerable time and computing resources, and their effectiveness depends largely on the training data: inaccurate reference images can degrade the training results. For example, real normal-light images may suffer from locally overexposed highlights or locally underexposed shadows due to uneven lighting.
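To make the curve-estimation idea of [13] concrete, the sketch below applies the quadratic light-enhancement curve LE(x) = x + a·x·(1 − x) iteratively; in the actual method the curve parameters are per-pixel maps predicted by a network, whereas the scalar values here are purely illustrative:

```python
import numpy as np

def apply_enhancement_curve(x, alphas):
    """Iteratively apply LE(x) = x + a*x*(1-x) with each a in [-1, 1].
    x: image array with values in [0, 1]; positive a brightens."""
    for a in alphas:
        x = x + a * x * (1.0 - x)
    return x

# illustrative usage: brighten a stand-in dark image with three fixed steps
img = np.random.rand(64, 64, 3) * 0.2
out = apply_enhancement_curve(img, alphas=(0.8, 0.8, 0.8))
```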

Uneven illumination is another problem that low-light image enhancement must address. For a locally dark image, raising the brightness too much overexposes the bright regions, while raising it too little fails to reveal details in the dark regions. Thanks to advances in camera hardware, a fixed camera can capture images with different exposure times within a short period, and fusing this group of images yields an image with a larger dynamic range. Wang et al. [14] designed a smooth multi-scale exposure fusion algorithm with edge preservation in the YUV color space, which retains details in both the bright and dark regions of the scene; to compensate for detail lost during fusion, they designed a vector-field construction algorithm to extract visible image details, which also avoids color distortion during fusion. Although image fusion can effectively enlarge the dynamic range, it requires a group of images with different exposure times to be captured in advance and cannot enhance a single low-light image. Moreover, dynamic scenes or camera shake make the captured images hard to align, which leads to artifacts in the fusion result.
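For reference, a bracketed stack captured this way can be fused with OpenCV's built-in Mertens exposure fusion, which is comparable in spirit to the multi-scale fusion discussed above; the file names below are placeholders:

```python
import cv2
import numpy as np

# Load a bracketed exposure stack (placeholder file names); the images must
# be aligned and the same size, as discussed above.
stack = [cv2.imread(p) for p in ("under.jpg", "mid.jpg", "over.jpg")]

# Mertens fusion weights each pixel by contrast, saturation, and
# well-exposedness, then blends the stack across a multi-scale pyramid.
fused = cv2.createMergeMertens().process(stack)   # float32 result, roughly in [0, 1]

cv2.imwrite("fused.png", np.clip(fused * 255, 0, 255).astype(np.uint8))
```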

To apply image fusion to low-light image enhancement and thereby enlarge the dynamic range, a set of images for fusion must first be generated from a single image. Several existing methods already use the idea of image fusion for low-light enhancement. Fu et al. [15] decomposed the image into an illumination image and a reflection image using an illumination estimation algorithm based on morphological closing, then processed the illumination image with a sigmoid function and adaptive histogram equalization to obtain a brightness-enhanced and a contrast-enhanced illumination image; the two enhanced illumination images were fused and recombined with the reflection image to produce the final result. Cai et al. [16] collected 589 sets of multi-exposure images, fused each set with 13 existing methods, selected the best result as the reference image, and trained a convolutional neural network on this dataset to obtain a low-light image enhancer. Fusion-based single-image enhancement removes the need for multiple exposure images as input, but the methods of Fu et al. [15] and Cai et al. [16] still lack a physical mechanism.
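A rough sketch of the two-branch illumination enhancement attributed to Fu et al. [15] is shown below; the kernel size, sigmoid steepness, and the simple 50/50 blend are illustrative assumptions, not the authors' exact settings:

```python
import cv2
import numpy as np

def two_branch_sketch(img_bgr, eps=1e-3):
    """Sketch only: morphological-closing illumination estimate, then a
    sigmoid-brightened branch and a CLAHE contrast branch, naively blended."""
    img = img_bgr.astype(np.float32) / 255.0
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    illum = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)       # illumination estimate
    reflect = img / np.clip(illum[..., None], eps, 1.0)           # reflection image
    bright = 1.0 / (1.0 + np.exp(-8.0 * (illum - 0.5)))           # sigmoid brightening
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    contrast = clahe.apply((illum * 255).astype(np.uint8)).astype(np.float32) / 255.0
    fused_illum = 0.5 * bright + 0.5 * contrast                   # naive blend of branches
    return np.clip(reflect * fused_illum[..., None], 0.0, 1.0)
```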

In view of the problems with existing methods, this paper proposes a low-light image enhancement method based on multi-exposure image generation. First, starting from the physical imaging mechanism, the relationship between exposure images is analyzed, and it is found that images with different exposure times are related in a way similar to shadow and non-shadow images. Based on this, the orthogonal decomposition method [17] is applied to multi-exposure image generation for the first time: the image is decomposed into an illumination component and an illumination invariant, and images with different exposure times are generated by changing the illumination component. The generated images are then fused to obtain an image with a high dynamic range. Because the generated images are close to real images, the naturalness of the fused enhancement result is well preserved. Moreover, the multi-exposure images generated from a single image correspond pixel by pixel, so the fusion result contains no artifacts, which also removes the requirement of a fixed camera for capturing multi-exposure images. In addition, the method does not rely on large amounts of training data and generalizes well.

2 Multi-exposure image generation and low-light image enhancement

The proposed method consists of three parts: (1) image orthogonal decomposition, which decomposes the original image into an illumination component and an illumination invariant; (2) multi-exposure image generation, which produces multi-exposure illumination components by rescaling the illumination component and recombines them with the original illumination invariant to obtain multi-exposure images; (3) multi-exposure image fusion, which fuses the generated images to obtain the final enhanced image. Figure 1 shows the framework of the proposed algorithm; a minimal sketch of the pipeline follows.
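The sketch below illustrates the three stages end to end under simplifying assumptions: the decomposition is approximated by taking the per-pixel RGB maximum as the illumination (the paper's orthogonal decomposition [17] and adaptive exposure-ratio algorithm are not reproduced), the exposure ratios are fixed by hand, and Mertens fusion stands in for the fusion step:

```python
import cv2
import numpy as np

def enhance_single_image(img_bgr, ratios=(0.5, 1.0, 2.0, 4.0, 8.0), eps=1e-3):
    """Simplified stand-in for the three-stage pipeline of Fig. 1."""
    img = img_bgr.astype(np.float32) / 255.0
    # (1) decomposition: illumination map and illumination-invariant part
    illum = np.clip(img.max(axis=2, keepdims=True), eps, 1.0)
    invariant = img / illum
    # (2) generation: rescale the illumination to synthesize an exposure stack
    stack = [(np.clip(invariant * np.clip(illum * r, 0.0, 1.0), 0.0, 1.0) * 255)
             .astype(np.uint8) for r in ratios]
    # (3) fusion: merge the generated stack into one enhanced result
    fused = cv2.createMergeMertens().process(stack)
    return np.clip(fused * 255, 0, 255).astype(np.uint8)

# illustrative usage on a dark image (placeholder file name)
out = enhance_single_image(cv2.imread("low_light.jpg"))
cv2.imwrite("enhanced.png", out)
```

Because every generated frame comes from the same input image, the stack is perfectly aligned pixel by pixel, which is what removes the artifact problem noted in the previous section.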
