Low-light images reduce the robustness of many computer vision tasks, such as image recognition and target tracking. To obtain enhanced images with richer detail and a larger dynamic range, a low-light image enhancement method based on multi-exposure image generation is proposed. Analysis of multi-exposure images captured in real scenes shows an approximately linear relationship between the pixel values of images taken with different exposure times, which allows the idea of orthogonal decomposition to be applied to multi-exposure image generation. Because the multi-exposure images are generated according to the physical imaging mechanism, they are closer to real photographs. The original image is first decomposed into an illumination invariant and an illumination component; an adaptive algorithm then generates a series of illumination components, each of which is recombined with the illumination invariant to produce a multi-exposure image. Finally, a multi-exposure image fusion method yields an enhanced image with a larger dynamic range. The fusion result remains consistent with the input image, and the final enhanced image effectively preserves the color of the original image with high naturalness. Experiments on a public dataset of real low-light images show that, compared with existing state-of-the-art algorithms, the proposed method improves the structural similarity between the enhanced image and the reference image by 2.1% and the feature similarity by 4.6%; the enhanced image is closer to the reference image and looks more natural.
Images captured under low-light conditions, or with insufficient camera exposure time, are called low-light images. Low-light images typically exhibit low brightness, low contrast, and blurred structural information, which complicates many robot vision tasks such as face recognition, target tracking [1], autonomous driving [2], and feature extraction [3]. Low-light image enhancement not only improves the visual quality of an image, but also improves the robustness of downstream robot vision algorithms, and therefore has important practical value.
According to whether they rely on large amounts of training data, existing low-light image enhancement algorithms can be divided into two categories: traditional methods and deep learning-based methods. Among the traditional methods, histogram-based methods [4-5] enhance low-light images by adjusting the image histogram to improve contrast. Such methods are simple and efficient, but they lack a physical mechanism, often over- or under-enhance the image, and significantly amplify image noise. Methods based on Retinex theory [6] first decompose the image into an illumination component and a reflectance component and then enhance each separately. Wang et al. [7] designed a low-pass filter to decompose the image into a reflectance image and an illumination image, and used a double logarithmic transformation to balance naturalness and image detail. Guo et al. [8] took the per-pixel maximum of the RGB channels of the low-light image as an initial illumination map, refined it with a structural prior, adjusted the brightness with gamma correction, and then recombined the adjusted illumination map with the reflectance image to obtain the final result. Ren et al. [9] proposed a noise-suppressed sequential decomposition model that estimates the illumination and reflectance components separately; each component is spatially smoothed, and a weight matrix is used to suppress noise and improve contrast. The estimated reflectance component is then combined with the gamma-corrected illumination component, achieving low-light enhancement and joint denoising.
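As a concrete illustration of the Retinex-style pipeline described above, the sketch below follows the outline of Guo et al. [8]: the per-pixel maximum of the RGB channels serves as the initial illumination, a simple smoothing filter stands in for the paper's structure-prior refinement, and gamma correction brightens the illumination before recombination. The smoothing step and the gamma value are illustrative assumptions, not the authors' exact formulation.

```python
import cv2
import numpy as np

def retinex_style_enhance(img_bgr, gamma=0.6, eps=1e-3):
    """Rough sketch of a Retinex enhancement after Guo et al. [8].

    The structure-prior refinement of the original paper is replaced
    here by Gaussian smoothing; gamma=0.6 is an assumed value.
    """
    img = img_bgr.astype(np.float32) / 255.0
    # Initial illumination: per-pixel maximum over the three color channels.
    illum = img.max(axis=2)
    # Stand-in for the structural-prior refinement step.
    illum = cv2.GaussianBlur(illum, (15, 15), 0)
    # Brighten the illumination map with gamma correction.
    illum_adj = np.power(np.clip(illum, eps, 1.0), gamma)
    # Reflectance = image / illumination; recombine with adjusted illumination.
    reflect = img / np.clip(illum, eps, 1.0)[..., None]
    out = np.clip(reflect * illum_adj[..., None], 0.0, 1.0)
    return (out * 255).astype(np.uint8)
```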
Deep learning-based methods [10-13] achieve good low-light enhancement results by training on large amounts of data. Lore et al. [10] first proposed a deep autoencoder for contrast enhancement and noise removal in low-light images. Wei et al. [11] combined the Retinex model with deep learning for low-light image enhancement. Jiang et al. [12] used a generative adversarial approach to build an enhancement model that does not require paired training data. Guo et al. [13] reformulated low-light image enhancement as an image-specific curve estimation task solved with a deep network. Training such methods usually consumes considerable time and computing resources, and their effectiveness depends largely on the training data: inaccurate reference images degrade the training results. For example, real normal-light images may suffer from uneven illumination, with overexposed local highlights or underexposed local shadows.
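The curve-estimation idea of Guo et al. [13] can be illustrated without the network itself. The enhancement curve has the quadratic form LE(x) = x + αx(1 − x), applied iteratively; in the full method a CNN predicts a per-pixel α map for each iteration, whereas the fixed scalar α below is purely illustrative.

```python
import numpy as np

def apply_enhancement_curve(img, alpha=0.6, iterations=4):
    """Iteratively apply the quadratic curve LE(x) = x + alpha*x*(1-x).

    In the full method a network predicts a per-pixel alpha map per
    iteration; a fixed scalar alpha is used here only for illustration.
    """
    x = img.astype(np.float32) / 255.0
    for _ in range(iterations):
        x = x + alpha * x * (1.0 - x)
    return (np.clip(x, 0.0, 1.0) * 255).astype(np.uint8)
```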
Uneven illumination is another problem that low-light image enhancement must address. For locally dark images, increasing the brightness too much overexposes the highlight regions, while an insufficient increase fails to reveal detail in the dark regions. Thanks to advances in camera hardware, a fixed camera can capture images with different exposure times in quick succession, and fusing such a group of images yields an image with a larger dynamic range. Wang et al. [14] designed a smooth multi-scale exposure fusion algorithm in the YUV color space based on edge preservation, which simultaneously retains detail in both the highlight and shadow regions of a scene; to compensate for detail lost during fusion, they designed a vector-field construction algorithm that extracts visible image details from the vector field while avoiding color distortion. Although image fusion effectively enlarges the dynamic range, it requires a group of images with different exposure times to be captured in advance and cannot enhance a single low-light image. Moreover, dynamic scenes or camera shake make the captured images difficult to align, producing artifacts in the fusion result.
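For reference, classical exposure fusion of the kind discussed here is available off the shelf. The snippet below uses OpenCV's built-in Mertens fusion (not the YUV-space algorithm of Wang et al. [14], which is not publicly packaged) to merge a pre-aligned exposure stack.

```python
import cv2
import numpy as np

def fuse_exposures(image_paths):
    """Fuse a pre-aligned multi-exposure stack with Mertens exposure fusion.

    Uses OpenCV's MergeMertens, not the YUV-space algorithm of
    Wang et al. [14]; the input images must already be pixel-aligned.
    """
    images = [cv2.imread(p) for p in image_paths]
    fused = cv2.createMergeMertens().process(images)  # float32, roughly in [0, 1]
    return np.clip(fused * 255, 0, 255).astype(np.uint8)
```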
To apply image fusion to low-light enhancement and thereby enlarge the dynamic range, a set of images for fusion must first be generated from a single image. Several methods already use the idea of image fusion for low-light enhancement. Fu et al. [15] first decomposed the image into an illumination image and a reflectance image using an illumination estimation algorithm based on morphological closing, then processed the illumination image with a sigmoid function and with adaptive histogram equalization to obtain a brightness-enhanced and a contrast-enhanced illumination image, respectively; the two enhanced illumination images were fused and then recombined with the reflectance image to produce the final result. Cai et al. [16] collected 589 sets of multi-exposure images, fused each set with 13 existing methods, selected the best result as the reference image, and trained a convolutional neural network on this dataset to obtain a low-light image enhancer. Fusion-based single-image enhancement removes the requirement for multiple exposure images as input, but the methods of Fu et al. [15] and Cai et al. [16] still lack a physical mechanism.
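To make the fusion-based single-image scheme of Fu et al. [15] concrete, the sketch below follows its outline: estimate illumination with a morphological closing, derive a brightness-enhanced version via a sigmoid mapping and a contrast-enhanced version via adaptive histogram equalization, combine the two (simplified here to a plain average, which is an assumption, not the paper's fusion rule), and recombine with the reflectance. The kernel size and sigmoid shape are likewise illustrative assumptions.

```python
import cv2
import numpy as np

def fusion_based_enhance(img_bgr, eps=1e-3):
    """Sketch of fusion-based single-image enhancement after Fu et al. [15].

    The paper's weighted fusion is simplified to an average; the
    morphological kernel size and sigmoid curve are assumed values.
    """
    img = img_bgr.astype(np.float32) / 255.0
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    # Illumination estimate via morphological closing.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    illum = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)
    # Brightness-enhanced illumination via a brightening sigmoid
    # (the exact curve used in [15] differs; this one is assumed).
    bright = 2.0 / (1.0 + np.exp(-4.0 * illum)) - 1.0
    # Contrast-enhanced illumination via CLAHE (adaptive equalization).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    contrast = clahe.apply((illum * 255).astype(np.uint8)).astype(np.float32) / 255.0
    # Fuse the two enhanced illumination maps (simplified to an average).
    fused = 0.5 * bright + 0.5 * contrast
    # Reflectance from the Retinex assumption, recombined with fused illumination.
    reflect = img / np.clip(illum, eps, 1.0)[..., None]
    return np.clip(reflect * fused[..., None] * 255, 0, 255).astype(np.uint8)
```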
To address the problems of current methods, this paper proposes a low-light image enhancement method based on multi-exposure image generation. Starting from the physical imaging mechanism, the relationship between exposure images is analyzed, and it is found that images with different exposure times are related in a way similar to shadow and non-shadow images. On this basis, the orthogonal decomposition method [17] is applied to multi-exposure image generation for the first time: the image is decomposed into an illumination component and an illumination invariant, and images with different exposure times are generated by changing the illumination component. The generated images are then fused to obtain an image with a high dynamic range. Because the generated images are close to real images, the naturalness of the fused enhancement is well preserved. Moreover, the multi-exposure images generated from a single image are aligned pixel by pixel, so the fusion result is free of artifacts, which also removes the requirement that the camera be fixed when shooting multi-exposure images. In addition, the method does not rely on large amounts of training data and generalizes well.
2 Multi-exposure image generation and low-light image enhancement
The proposed method consists of three parts: (1) image orthogonal decomposition, in which the original image is decomposed into an illumination component and an illumination invariant; (2) multi-exposure image generation, in which illumination components of different magnitudes are generated and each is recombined with the original illumination invariant to obtain a multi-exposure image; (3) multi-exposure image fusion, in which the multi-exposure images are fused to obtain the final enhanced image. Figure 1 shows the framework of the proposed algorithm; a minimal end-to-end sketch of this pipeline is given below.
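Under the simplifying assumption that the orthogonal decomposition of [17] can be approximated by a multiplicative Retinex-style split I = R · L (the paper's actual decomposition and its adaptive exposure-ratio algorithm are not reproduced here, and the fixed ratios below are assumed values), the three-stage pipeline of Figure 1 can be sketched as follows.

```python
import cv2
import numpy as np

def enhance_low_light(img_bgr, ratios=(0.6, 1.0, 1.8, 3.0), eps=1e-3):
    """Sketch of the three-stage pipeline: decompose, generate, fuse.

    Approximates the orthogonal decomposition of [17] with a simple
    multiplicative split I = R * L; the exposure ratios are fixed here,
    whereas the paper derives them with an adaptive algorithm.
    """
    img = img_bgr.astype(np.float32) / 255.0
    # (1) Decomposition: smoothed max-channel as the illumination component,
    #     the quotient image as the (approximate) illumination invariant.
    illum = cv2.GaussianBlur(img.max(axis=2), (15, 15), 0)
    invariant = img / np.clip(illum, eps, 1.0)[..., None]
    # (2) Generation: scale the illumination to synthesize an exposure stack.
    stack = []
    for r in ratios:
        exposed = np.clip(invariant * np.clip(illum * r, 0, 1)[..., None], 0, 1)
        stack.append((exposed * 255).astype(np.uint8))
    # (3) Fusion: merge the pixel-aligned stack with Mertens exposure fusion.
    fused = cv2.createMergeMertens().process(stack)
    return np.clip(fused * 255, 0, 255).astype(np.uint8)
```

Because every image in the stack is synthesized from the same source image, the inputs to the fusion step are aligned pixel by pixel by construction, which is what removes the artifact and camera-fixing problems noted above.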