Research on intelligent lighting control system based on wavelet transform and image fusion

Source: 中国照明网 | Last updated: 2012-08-11

1 Introduction

Since the beginning of the 21st century, China's buildings have entered an era of highly intelligent development, and traditional lighting control methods can no longer meet the higher standards of new intelligent buildings and modern residential communities. Traditional lighting control is simple, effective and intuitive, but it relies too heavily on the ability of the individual operator, the control points are scattered and cannot be managed effectively, and its responsiveness and degree of automation are low. The automatic lighting control modes that followed solve the problems of scattered control and ineffective management and automate lighting control, but they cannot provide dimming control.

At present, foreign products such as Niko's intelligent lighting control system can preset various lighting scenes and are widely used in office buildings, hotels, stadiums and other venues, but they are expensive, relatively complex to operate, and place high demands on management personnel. In China there are still few comparable intelligent lighting control systems for residential areas and general public places.

To overcome the shortcomings of the traditional infrared-plus-light-sensor lighting control system, such as the large number of sensors required, the strict requirements on their placement, and the heavy construction and wiring work, this paper proposes an illumination control method that combines dynamic and static monitoring (infrared and sound-activated sensing) with digital image information fusion. The collected image information is fused, the fused image is divided into regions, and the grayscale average of each region is compared with a preset grayscale value to adjust the scene illumination.

2 Fusion technology for dynamic and static monitoring sensor data and CCD digital image information

The control system block diagram of the dynamic and static monitoring and CCD digital image information fusion technology in the building intelligent lighting system is shown in Figure 1.

The basic principle is as follows: motion detection determines whether anyone is moving in the area. If no one is present, the lighting is turned off; if someone is present, the collected digital images are analysed, the grayscale average of the image is compared with the preset standard values, and the illumination model of the ambient illumination field is calculated. If the result is within the error range allowed by the preset mode of the lighting system, the illumination does not need to be adjusted; otherwise, it is adjusted.
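As a rough illustration of this decision flow, the sketch below expresses one control pass as a pure function. The 0–1 dimming scale, the linear gain and all names are assumptions made for this sketch, not part of the system described in the paper.

```python
def control_step(motion_detected: bool, fused_gray_mean: float,
                 preset_gray: float, tolerance: float,
                 current_level: float, gain: float = 0.001) -> float:
    """Return the new dimming level (0.0 = off, 1.0 = full) after one control pass."""
    if not motion_detected:
        return 0.0                        # nobody present: turn the lighting off
    error = fused_gray_mean - preset_gray
    if abs(error) <= tolerance:           # within the preset mode's allowed error range
        return current_level              # no adjustment needed
    # too dark -> raise the dimming level, too bright -> lower it
    return min(1.0, max(0.0, current_level - gain * error))

# Example: preset gray value 120, tolerance +/-10, scene currently too dark (mean 95)
print(control_step(True, 95.0, 120.0, 10.0, current_level=0.4))   # -> 0.425
```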

Image fusion refers to the process of obtaining a composite image by using a certain fusion technology after denoising, temporal registration, spatial registration and resampling of images of the same scene obtained by different sensors or the same scene obtained by the same sensor at different times. By fusing multiple sensor images, the limitations and differences in geometry, spectrum and spatial resolution of a single sensor image can be overcome, and the image quality can be improved, which is conducive to the location, identification and interpretation of physical phenomena and events. The specific process is shown in Figure 2.

Figure 1 Block diagram of the intelligent lighting control system based on dynamic and static monitoring and CCD digital image information fusion

Figure 2 Image fusion process

3 Image fusion technology based on wavelet transform

3.1 Preprocessing

In the intelligent lighting system, when motion monitoring detects someone moving, the CCD camera collects image information of the corresponding area. During acquisition, however, various factors (such as the position and speed of the sensor, light intensity, random noise, etc.) affect the image, and the actual image often carries the traces of these factors. Therefore, before image fusion, the different images obtained by the sensors must be preprocessed, including image correction, enhancement, smoothing, filtering and registration.
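The paper does not specify which correction, filtering or registration algorithms are used; the sketch below is therefore only illustrative, with median and Gaussian filtering standing in for denoising and smoothing, a contrast stretch standing in for correction, and phase correlation standing in for registration.

```python
import numpy as np
from scipy import ndimage

def preprocess(img: np.ndarray) -> np.ndarray:
    """Denoise, smooth and contrast-stretch one CCD frame (illustrative choices)."""
    img = img.astype(np.float64)
    img = ndimage.median_filter(img, size=3)       # suppress impulsive sensor noise
    img = ndimage.gaussian_filter(img, sigma=1.0)  # smooth remaining random noise
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-12) * 255.0  # stretch gray values to 0..255

def register(ref: np.ndarray, mov: np.ndarray) -> np.ndarray:
    """Align `mov` to `ref` by the integer translation found with phase correlation."""
    spec = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    corr = np.fft.ifft2(spec / (np.abs(spec) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return np.roll(mov, shift=(dy, dx), axis=(0, 1))

# Example with two synthetic 260 x 351 frames, the second shifted by a few pixels
a = np.random.rand(260, 351)
b = np.roll(a, shift=(3, 5), axis=(0, 1))          # same scene, displaced by (3, 5) pixels
print(np.allclose(register(a, b), a))              # -> True
clean = preprocess(register(a, b))                 # denoised, registered frame
```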

Figure 3 shows the indoor images captured by the CCD camera after image correction, filtering and registration preprocessing.

As can be seen from Figure 3, because of factors such as light intensity, noise and interference, the flower pot on the right in Figure 3(a) and the door in Figure 3(b) are somewhat blurred. Images like these are not well suited to recognition by the intelligent lighting system, so the wavelet fusion technique described below is used to fuse the source image information.

Figure 3 Indoor images captured by the CCD camera after preprocessing

3.2 Wavelet transform fusion

Inspired by Burt and Adelson's pyramidal image decomposition and reconstruction algorithm, Mallat proposed the fast Mallat algorithm for the wavelet transform. Following the two-dimensional Mallat algorithm, each preprocessed CCD image is decomposed with a two-dimensional wavelet transform.

The images collected by the CCD camera in this paper are 351 × 260 pixels. The number of decomposition levels is set to 3, and the decomposition from scale k−1 to scale k is performed according to the following Mallat decomposition formulas:

$C_k = H C_{k-1} H^{T}, \quad D_k^{1} = G C_{k-1} H^{T}, \quad D_k^{2} = H C_{k-1} G^{T}, \quad D_k^{3} = G C_{k-1} G^{T}$

In the formulas, $H$ and $G$ are the low-pass and high-pass filter matrices of the wavelet and $T$ denotes transposition; $C_k$, $D_k^{1}$, $D_k^{2}$ and $D_k^{3}$ represent the low-frequency component, horizontal high-frequency component, vertical high-frequency component and diagonal high-frequency component of the preprocessed 351 × 260 CCD image. The low-frequency component reflects the approximate and average characteristics of the CCD image and concentrates most of the image's energy. Figure 4 shows a schematic diagram of the wavelet decomposition of the CCD image.
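A three-level two-dimensional decomposition of this kind can be computed, for example, with PyWavelets. The choice of the 'db2' wavelet below is an assumption; the paper does not state which wavelet basis it uses.

```python
import numpy as np
import pywt

img = np.random.rand(260, 351) * 255.0             # stand-in for a 351 x 260 CCD frame (rows x cols)

coeffs = pywt.wavedec2(img, wavelet='db2', level=3)
cA3 = coeffs[0]                                     # low-frequency (approximation) component C_3
for k, (cH, cV, cD) in zip((3, 2, 1), coeffs[1:]):
    # cH, cV, cD: horizontal, vertical and diagonal high-frequency components at scale k
    print(k, cH.shape, cV.shape, cD.shape)
print('approximation C_3:', cA3.shape)
```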

In the wavelet transform domain of the two CCD images, the horizontal, vertical and diagonal high-frequency components are fused separately. At each scale $j$ ($j = 1, 2, 3$), the high-frequency coefficients of the two CCD images are compared, and the coefficient with the larger absolute value at each position is retained as the important wavelet coefficient, that is

$D_F^{\varepsilon,j}(x, y) = \begin{cases} D_1^{\varepsilon,j}(x, y), & \lvert D_1^{\varepsilon,j}(x, y)\rvert \ge \lvert D_2^{\varepsilon,j}(x, y)\rvert \\ D_2^{\varepsilon,j}(x, y), & \text{otherwise} \end{cases} \qquad \varepsilon = 1, 2, 3$

where $D_1^{\varepsilon,j}$ and $D_2^{\varepsilon,j}$ represent the wavelet coefficients of the two CCD images at each scale and component, and $D_F^{\varepsilon,j}$ is the fused coefficient.

The approximation coefficients $C_{1J}$ and $C_{2J}$ of the two CCD images after the wavelet transform are processed next. Because of the various disturbances present when the intelligent lighting system collects images, the images captured by the CCD camera are blurred in places. A blurred image loses relatively more of its detail (high-frequency) information, while its overall (low-frequency) information is better preserved. The difference between the approximation coefficients of the two decomposed CCD images is therefore much smaller than the difference between their detail coefficients, and the fused approximation coefficient can be determined directly from $C_{1J}$ and $C_{2J}$ (for example, by averaging them).

Using the fused wavelet coefficients obtained above together with the fused approximation coefficients of the CCD images collected by the intelligent lighting system, a two-dimensional inverse wavelet transform is performed to reconstruct the fused image. The fusion process is shown in Figure 5.
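Under the assumptions above (maximum-absolute-value selection for the detail coefficients and, as noted, averaging of the approximation coefficients), the whole fusion step can be sketched with PyWavelets as follows; the 'db2' wavelet and three decomposition levels mirror the earlier example and are not taken from the paper.

```python
import numpy as np
import pywt

def fuse_wavelet(img1: np.ndarray, img2: np.ndarray,
                 wavelet: str = 'db2', level: int = 3) -> np.ndarray:
    c1 = pywt.wavedec2(img1, wavelet, level=level)
    c2 = pywt.wavedec2(img2, wavelet, level=level)
    fused = [0.5 * (c1[0] + c2[0])]                             # averaged approximation coefficients
    for (h1, v1, d1), (h2, v2, d2) in zip(c1[1:], c2[1:]):
        fused.append(tuple(np.where(np.abs(p) >= np.abs(q), p, q)   # keep the larger |coefficient|
                           for p, q in ((h1, h2), (v1, v2), (d1, d2))))
    out = pywt.waverec2(fused, wavelet)
    return out[:img1.shape[0], :img1.shape[1]]                  # crop any padding from the transform

# Example with two synthetic 260 x 351 "CCD frames"
a = np.random.rand(260, 351) * 255.0
b = np.random.rand(260, 351) * 255.0
print(fuse_wavelet(a, b).shape)                                 # -> (260, 351)
```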

Figure 5 Wavelet-based image fusion process
3.3 Evaluation indicators of fusion results
The quality of CCD image fusion is particularly important for the subsequent work of the intelligent lighting system. This paper uses the wavelet transform method to fuse the collected images, and the fusion effect is evaluated with three indicators: grayscale mean, standard deviation and information entropy.
3.3.1 Grayscale average

The size of the fused image is still 351 × 260, and its grayscale average is

$\mathrm{mean}(A) = \dfrac{1}{M \times N}\sum_{x=1}^{M}\sum_{y=1}^{N} A(x, y)$

where $M \times N = 351 \times 260$ and $A(x, y)$ is the gray value at pixel $(x, y)$.

3.3.2 Standard deviation

The standard deviation of the fused image is

$\sigma = \sqrt{\dfrac{1}{M \times N}\sum_{x=1}^{M}\sum_{y=1}^{N}\bigl(A(x, y) - \mathrm{mean}(A)\bigr)^{2}}$

It reflects how widely the gray values are spread around the mean; a larger standard deviation generally indicates higher contrast in the fused image.

3.3.3 Information entropy

The information entropy of the fused image is

$H = -\sum_{i=0}^{255} p_i \log_2 p_i$

where $p_i$ is the ratio of the number of pixels with gray value $i$ to the total number of pixels. Information entropy represents the richness of the information contained in the CCD image; if $p_i$ were equal for every gray level, the entropy would reach its maximum. If the fused image obtained by a method has a larger entropy value, that method is better; a smaller entropy value indicates a worse method.
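These indicators can be computed directly from the image and its 256-bin gray-level histogram; a minimal sketch, assuming 8-bit gray values:

```python
import numpy as np

def gray_mean(img: np.ndarray) -> float:
    return float(img.mean())

def gray_std(img: np.ndarray) -> float:
    return float(img.std())

def entropy(img: np.ndarray) -> float:
    hist, _ = np.histogram(img.astype(np.uint8), bins=256, range=(0, 256))
    p = hist / hist.sum()                  # p_i: fraction of pixels with gray value i
    p = p[p > 0]                           # skip empty bins (0 * log 0 is taken as 0)
    return float(-(p * np.log2(p)).sum())

img = (np.random.rand(260, 351) * 255).astype(np.uint8)
print(gray_mean(img), gray_std(img), entropy(img))
```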
3.4 Image grayscale average and its stability
After the images captured by the CCD camera are fused by the wavelet method, the grayscale average of the fused image is extracted for the intelligent lighting system to use. If $\mathrm{mean}(A)$ is the grayscale average of the fused CCD image $A$, then

$\mathrm{mean}(A) = \dfrac{1}{M \times N}\sum_{x=1}^{M}\sum_{y=1}^{N} A(x, y), \qquad M \times N = 351 \times 260$
If the fused CCD image is divided into four areas, as shown in Figure 6, the illuminated scene is correspondingly divided into the four small areas shown in Figure 7:
The sizes of the four areas are all 175 × 130, and their grayscale averages $\mathrm{mean}(A_1)$, $\mathrm{mean}(A_2)$, $\mathrm{mean}(A_3)$ and $\mathrm{mean}(A_4)$ are computed in the same way over the corresponding areas.
Operations on an image such as brightness adjustment, resampling, color dithering, smoothing, adding noise and compression can all be regarded as disturbances of the image, which raises the question of the stability of the grayscale average. Let $A'$ be the image $A$ after being disturbed by $E$, that is, $A' = A + E$. Since the grayscale mean is linear, $\mathrm{mean}(A') = \mathrm{mean}(A) + \mathrm{mean}(E)$, and by the mean value inequality

$\lvert \mathrm{mean}(A') - \mathrm{mean}(A) \rvert = \lvert \mathrm{mean}(E) \rvert \le \mathrm{mean}(\lvert E \rvert)$

so a small disturbance changes the grayscale average only slightly. This can be used to judge the stability of the fused image.
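The division into four areas and the stability bound can be illustrated as follows; the even split into quadrants and the noise-like disturbance are assumptions made for this sketch.

```python
import numpy as np

def region_means(img: np.ndarray):
    """Grayscale averages of the four areas (quadrants) of the fused image."""
    h2, w2 = img.shape[0] // 2, img.shape[1] // 2
    quads = (img[:h2, :w2], img[:h2, w2:], img[h2:, :w2], img[h2:, w2:])
    return [float(q.mean()) for q in quads]

A = np.random.rand(260, 351) * 255.0          # fused 351 x 260 image (rows x cols)
E = np.random.randn(260, 351) * 2.0           # small disturbance (noise, dithering, ...)
A2 = A + E                                    # A' = A + E

print(region_means(A))
print(abs(A2.mean() - A.mean()) <= np.abs(E).mean())   # |mean(A') - mean(A)| <= mean(|E|) -> True
```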
3.5 Comparison with preset values
The grayscale average of the fused image is compared with the preset value. If the average is within the error range of the preset value, the intelligent lighting system makes no adjustment; if it exceeds the error range, the system adjusts the illumination back into the error range of the corresponding mode.
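A minimal sketch of this per-area comparison (the preset values and the tolerance are configuration assumptions, not values from the paper):

```python
from typing import List

def areas_to_adjust(region_means: List[float], presets: List[float], tol: float) -> List[int]:
    """Indices of the areas whose grayscale average falls outside the preset error range."""
    return [i for i, (m, p) in enumerate(zip(region_means, presets)) if abs(m - p) > tol]

# Example: only the second area (index 1) lies outside the +/-10 error range of preset 120
print(areas_to_adjust([118.0, 95.0, 128.0, 122.0], presets=[120.0] * 4, tol=10.0))   # -> [1]
```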
4 Experimental data analysis
This paper simulates the fusion of the two indoor CCD images shown above and extracts from the fused image the information needed by the intelligent lighting system. Using the wavelet transform fusion algorithm proposed in this paper, the experimental fusion results are compared in Figures 8, 9 and 10.

The simulation experiment compares the fusion algorithm proposed in this paper with three other fusion algorithms; the data are compared in Table 1.

The data in Table 1 show that the fused image obtained by the proposed algorithm is clearer, and that its information entropy is higher than that of the other three fusion algorithms. The wavelet transform fusion algorithm captures richer time-domain and frequency-domain information, effectively retains the detail of the lighting area, and provides more comprehensive data for the intelligent lighting system.
5 Conclusion
This paper proposes controlling the lighting of an intelligent lighting system with dynamic and static monitoring combined with image fusion, where the collected images are fused with a wavelet transform algorithm. Experiments show that this fusion algorithm achieves a better fusion effect than the other algorithms compared: it retains the necessary information, suppresses unnecessary information, and provides good image information for the subsequent work of the intelligent lighting system. By incorporating image fusion, the system can adjust the lighting in real time according to the actual situation on site, personnel changes and weather factors (rain, fog, snow), ensuring sufficient illumination while reducing the energy consumption of the intelligent building's lighting system.