1 Introduction
With the continuous development of modern control technology, the demand for intelligent lighting control keeps rising. Intelligent lighting control systems not only provide a variety of artistic lighting effects, but also save energy and reduce operating costs.
In large lighting venues such as libraries, shopping malls, indoor sports halls and corridors, the space can usually be divided into occupied and unoccupied areas. If every area is lit at the same brightness, the light delivered to unoccupied areas serves no purpose and is wasted. If the intelligent lighting control system can detect where people are and dynamically distinguish occupied from unoccupied areas, it can maintain normal, higher-brightness lighting in the occupied areas while dimming or switching off the lamps in the unoccupied ones. As people move, the system dynamically shifts the effectively lit area, reducing wasted lighting and saving energy while still providing good lighting.
Correctly determining the position of people is the first prerequisite for intelligent lighting. Methods based on infrared and laser detection, RF cards combined with identity recognition, and floor pressure sensors have been proposed for this purpose. When applied to venues with large lighting areas, however, these methods suffer from a lack of suitable sensor mounting positions and from complex wiring, so they are difficult to deploy and their reliability is hard to improve further. In fact, video surveillance cameras are already widely installed in the large venues mentioned above. If these surveillance images are fully exploited and combined with digital image processing, human figures can be extracted from the video and their positions determined, making truly intelligent lighting possible. This paper builds an intelligent lighting control system using video-based target positioning and tracking together with PLC-Bus technology: the positions of people are determined directly from the video image, and the lamp switching signals are transmitted over the power line, so no additional wiring is needed. The system therefore avoids many of the difficulties of sensor installation and wiring, achieves automatic illumination adjustment, automatic lamp switching and good local-area lighting control, and offers high reliability.
2 Composition of the Intelligent Lighting Control System
The composition of the entire system is shown in Figure 1. From a functional perspective, it consists of three parts: image acquisition module, image processing module and lighting control module.
Figure 1 Composition of intelligent lighting control system
2.1 Image Acquisition Module
The image acquisition module consists mainly of a camera and an optical glass lens. The camera is Hyundai's HV7131R from South Korea, one of the better mainstream parts. The HV7131R is built on a 0.3 μm CMOS process, has 300,000 effective pixels, consumes less than 90 mW, provides exposure control, gain control and white-balance processing, and reaches a maximum frame rate of 30 fps at VGA resolution. By setting the internal registers of the HV7131R over the standard I2C interface, the exposure time, resolution, frame rate, RGB gains, mirroring and other parameters can be adjusted, and 10-bit raw RGB data are output.
The optical glass lens is a telephoto lens with a 20° field of view, covering object distances from several meters to tens of meters. The camera should be installed so that the entire monitored area is visible as far as possible; when adjusting its mounting position and angle, the monitored area is generally arranged to start from the bottom of the image. In addition, to avoid severe adhesion between the extracted human figures, the camera should be tilted downward as steeply as possible.
2.2 Image Processing Module
This module consists of a DSP and a data buffer. The DSP is TI's TMS320LF2407. Its main functions are: completing power-up initialization, configuring the camera registers over the I2C interface, preprocessing the monitoring images of the lighting area acquired by the image acquisition module, extracting human body edges from the images, computing human positions, and making the lighting control decisions.
2.3 Lighting Control Module and PLC Bus Technology
The lighting control module uses a distributed control scheme to achieve decentralized control and centralized management of the lighting fixtures in the whole monitored area. The upper-level DSP makes the lighting control decisions based on the image analysis results; the lower-level lighting controllers receive its instructions, switch the corresponding lamps, and provide dimming. The control instructions issued by the DSP are transmitted to the lighting controllers for execution over the PLC Bus.
PLC Bus is a power line communication technology developed in recent years by the Dutch company ATS (ATS Power Line Communications Co., Ltd.). Its biggest advantage is that the control signal is transmitted over the power line itself, so no additional control wiring is needed, a large amount of cabling is saved, and the control system is easy to install and maintain.
A PLC Bus system consists of three main parts: transmitters, receivers and system support equipment. The transmitter sends PLC Bus control signals to the receivers over the power line and thereby controls lamps and electrical equipment indirectly. The receiver picks up the PLC Bus control signals from the power line and executes the corresponding commands to control the lamps and appliances. The system support equipment, including signal converters, three-phase couplers and absorbers, works with the transmitters and receivers to support the control function.
PLC Bus uses pulse position modulation (PPM): the sine wave of the power line serves as the synchronization signal, and data are transmitted by sending short electrical pulses at one of four fixed time positions.
On a 50 Hz power line, 200 bits of data can be transmitted per second, consistent with the four pulse positions encoding two bits in each of the 100 half-cycles per second. This rate is far too low for computer-style broadband data, but it is sufficient for transmitting control actions and command communications.
Because of the special nature of the PPM communication scheme, the receiver can restore the PLC Bus code easily and simply. In the PLC Bus, the address codes used for receiving data are the NID (Network ID) and the DID (Destination ID). The NID and DID are 8 bits each, so together they form up to 2^16 = 65,536 different addresses and can control as many different devices.
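As a simple illustration of this addressing (the actual PLC Bus frame format is not detailed here, so the packing below is only an assumption), an 8-bit NID and an 8-bit DID can be combined into a single 16-bit address, which is what yields the 65,536 combinations:

```python
def plcbus_address(nid: int, did: int) -> int:
    """Pack an 8-bit network ID and an 8-bit destination ID into one 16-bit address."""
    assert 0 <= nid <= 0xFF and 0 <= did <= 0xFF
    return (nid << 8) | did        # 2**16 = 65,536 possible combinations

# Example: network 0x12, device 0x34 -> combined address 0x1234
addr = plcbus_address(0x12, 0x34)
```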
Main features of PLC Bus:
(1) No wiring required, plug and play.
PLC Bus technology mainly transmits control signals through power lines, so there is no need for rewiring. It is suitable for intelligent control projects in all lighting places that have been built or are being installed.
(2) High speed: instant control with immediate response.
The PLC Bus transmits 10 complete instructions per second; on average each instruction takes no more than 0.1 s from transmission to execution, so control is effectively immediate.
(3) Two-way communication and status feedback.
The hardware, software and protocol of PLC Bus products support two-way communication, so the controlled lamps can feed back their actual switch status and the system can verify whether a control command has really been executed correctly. The cost is only about 40% higher than a receive-only or transmit-only X-10 component, which makes it cost-effective.
(4) Good compatibility and wider application.
PLC Bus technology devices are compatible with X-10, CE Bus and LonWorks devices without any signal conflicts.
Lamp control can be divided into dimming and non-dimming control. Fluorescent lamps can be dimmed with OSRAM dimmable electronic ballasts: the controller outputs a 0–10 V DC signal as the ballast's control signal, giving a luminous flux adjustment range of 1%–100%. Incandescent lamps can be dimmed with a phase-shift trigger and a random-turn-on solid-state relay. When a control signal is applied to the relay's control terminal, the AC load is switched on immediately; when that control signal is a phase-shifted pulse synchronized with the AC mains, the load voltage can be adjusted smoothly over a 180° range. According to the magnitude of the control voltage, the phase-shift trigger generates at its output a wide pulse whose phase shift lies within this 180° range, synchronized with the mains voltage and at twice the mains frequency, to drive the solid-state relay and thereby regulate the voltage. The random solid-state relay can therefore be used on its own to connect or disconnect a lighting circuit, and together with the phase-shift trigger it provides incandescent dimming.
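The sketch below shows how a brightness setpoint might be mapped to the two control quantities just described: the 0–10 V DC level for the dimmable ballast and a firing-delay angle within the 180° half-cycle for the phase-shift trigger. The linear mappings are illustrative assumptions only, not the measured transfer characteristics of the actual devices.

```python
def ballast_control_voltage(brightness_pct: float) -> float:
    """Map a 1-100 % luminous-flux setpoint to a 0-10 V ballast control signal (assumed linear)."""
    brightness_pct = max(1.0, min(100.0, brightness_pct))
    return 10.0 * brightness_pct / 100.0

def firing_delay_deg(brightness_pct: float) -> float:
    """Map a brightness setpoint to a firing-delay angle for the phase-shift trigger.

    0 degrees   -> full conduction (full brightness)
    180 degrees -> no conduction (lamp off); a linear mapping is assumed.
    """
    brightness_pct = max(0.0, min(100.0, brightness_pct))
    return 180.0 * (1.0 - brightness_pct / 100.0)
```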
3 Human Target Dynamic Positioning Technology in Intelligent Lighting Control
A video surveillance image is a two-dimensional projection of the three-dimensional scene in the lighting area. Although it cannot fully reproduce the real 3-D scene, a definite projection relationship exists between the two: when the 3-D scene changes, the video image changes accordingly. Moreover, a continuous video stream is temporally continuous. If no one moves in the lighting area, successive frames differ very little; human motion, on the other hand, produces frame differences. For dynamic human targets against the static background of the lighting area, detection can therefore be performed by frame change detection.
Human dynamic target detection based on static background is mainly divided into three parts: image preprocessing, human dynamic target extraction, and human position determination.
3.1 Image Preprocessing
The digital image of the illuminated area captured by the camera contains considerable noise, which must be filtered out first. There are many ways to do this. Median filtering is a commonly used nonlinear signal-processing technique: a template slides over the image point by point, the grayscale values of the pixels inside the template are sorted, and the median is taken as the new grayscale value of the template's center pixel. This method suppresses random image noise well while preserving contours and edges. In addition, median filtering leaves step signals unaffected, keeps the spectrum essentially unchanged after filtering, and is particularly effective at removing salt-and-pepper noise.
Let the digital image of the illuminated area be M×N pixels with gray value f(x, y). A four-neighborhood median filter achieves good results; the gray value of the filtered image is:
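Assuming the four-neighborhood consists of the pixel itself and its four cross-shaped neighbors (a common convention; the exact template is not specified here), the filter can be written as

\[ g(x,y) = \operatorname{med}\{\, f(x,y),\ f(x-1,y),\ f(x+1,y),\ f(x,y-1),\ f(x,y+1) \,\} \]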
3.2 Object Change Detection
When a human target appears or moves within the monitored field of view, the grayscale values of pixels change between consecutive frames, i.e., a frame difference is produced. The frame difference in the region corresponding to the target is larger than in the background, so evaluating the magnitude of the frame difference is a common way of detecting target changes. The simplest algorithm is the absolute frame-difference method.
For the detection image sequence f(x, y, t), the cumulative number of changed pixels is computed as:
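Taking the adaptive illumination term mentioned below to be the mean absolute frame difference scaled by α (a reading of the variable definitions that follow, not an exact reproduction of the original formula), the count can be written as

\[ D_k = \sum_{(x,y)} P_k(x,y), \qquad P_k(x,y) = \begin{cases} 1, & \left| f(x,y,t_2) - f(x,y,t_1) \right| > T + \dfrac{\alpha}{N} \displaystyle\sum_{(x,y)} \left| f(x,y,t_2) - f(x,y,t_1) \right| \\ 0, & \text{otherwise} \end{cases} \]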
Here D_k is the cumulative number of changed pixels; f(x, y, t1) and f(x, y, t2) are the grayscale values of pixel (x, y) at times t1 and t2 of the image sequence; the summation term scaled by α is an additive term that adapts the sensitivity to the illumination of the adjacent frames; α is the suppression coefficient; N is the number of pixels in the detection area; and T is the grayscale threshold, whose value determines the sensitivity of dynamic target detection.
The conditions for determining whether there is a target change are:
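Given these definitions, the decision rule presumably takes the form

\[ D_k \ge D \;\Rightarrow\; \text{a target change has occurred}, \qquad D_k < D \;\Rightarrow\; \text{no change}. \]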
Here D is a preset threshold. The algorithm is simple, and because the decision condition takes changes in lighting conditions into account it adapts to illumination changes to some extent. It also suppresses, to a degree, false judgments caused by small interfering moving targets, which improves detection accuracy.
3.3 Image Edge Extraction
The most basic feature of an image is its edges, i.e., the set of pixels around which the grayscale changes sharply. Edges are an important basis for detecting and segmenting human targets. Edge detection has long been a hot and difficult topic in image processing, mainly because edges and noise are both high-frequency signals and are hard to distinguish. Among current edge detection algorithms, the Sobel operator is a representative classical method that is widely used in many fields because of its low computational cost and high speed.
Because the grayscale changes abruptly near an edge, the Sobel method operates directly on the grayscale of the original image: it examines the grayscale variation in a neighborhood of each pixel and locates edges where the first derivative reaches its maximum. Its gradient magnitude can be described mathematically as:
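Using the standard 3×3 Sobel kernels (assumed here, since the exact kernels are not given in the text), the horizontal and vertical responses and the gradient magnitude are

\[ G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} * f, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} * f, \qquad G(x,y) = \sqrt{G_x^2 + G_y^2} \approx \left| G_x \right| + \left| G_y \right| \]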
3.4 Image Segmentation and Human Body Position Determination
In large venues under intelligent lighting control, the presence of both lit and unlit areas means the monitoring image often has an uneven background. Analysis of the human target's characteristics shows that the background grayscale may vary greatly across the whole field of view, so a single threshold cannot be used for segmentation: with a single threshold, background pixels would also be segmented out because of the uneven background in the lighting control area. In fact, the reason the human eye can see a target clearly is that there is a grayscale difference between the target and the background within a local region. Based on this principle, the whole monitoring field of view is divided into equal blocks small enough that the grayscale variation within each block is small. First, the field of view is divided into many equal small blocks; the mean, maximum and minimum grayscale of each block are computed, the corresponding threshold and threshold difference are derived, and each block is segmented with its own threshold. If a block contains a target, the target's position and the block's threshold difference are recorded. After the whole field of view has been processed, the target with the largest threshold difference among all blocks is taken as the candidate target point, and window tracking is performed at that point. Since the window is already small, the uneven background has little influence there, as illustrated by the sketch below.
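The following sketch illustrates this block-wise procedure. The per-block threshold (the block mean) and the definition of the threshold difference (max minus min) are assumptions made for illustration; the paper does not give exact definitions.

```python
import numpy as np

def block_segment(frame, block=32, min_contrast=30):
    """Block-wise segmentation of an unevenly lit background (illustrative sketch)."""
    h, w = frame.shape
    mask = np.zeros((h, w), dtype=bool)
    best = None                                   # (contrast, (row, col)) of best block
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            blk = frame[y:y + block, x:x + block].astype(np.float32)
            lo, hi, mean = blk.min(), blk.max(), blk.mean()
            contrast = hi - lo                    # "threshold difference" of the block
            mask[y:y + block, x:x + block] = blk > mean   # segment with the block's own threshold
            # a block with significant local contrast is assumed to contain a target
            if contrast > min_contrast and (best is None or contrast > best[0]):
                best = (contrast, (y, x))
    return mask, best   # window tracking is then performed around the best candidate block
```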
After the human figure is segmented, the centroid position can be calculated using the following formula:
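For a binarized window f(x, y) of size m×n, the usual centroid of a binary region (assumed to be the form intended here) is

\[ x_c = \frac{\sum_{x=1}^{m}\sum_{y=1}^{n} x\, f(x,y)}{\sum_{x=1}^{m}\sum_{y=1}^{n} f(x,y)}, \qquad y_c = \frac{\sum_{x=1}^{m}\sum_{y=1}^{n} y\, f(x,y)}{\sum_{x=1}^{m}\sum_{y=1}^{n} f(x,y)} \]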
Here m and n are the window dimensions, and f(x, y) is the binarized image.
The monitored lighting projection area is divided into a two-dimensional grid. Once the centroid of the human figure has been computed, the grid cell containing it can be determined, and the corresponding lighting control decisions can be made to switch and dim the lamps associated with that position.
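A minimal sketch of this mapping is given below; the grid size, frame size and brightness levels are illustrative values, not parameters taken from the paper.

```python
GRID_ROWS, GRID_COLS = 4, 6          # lighting zones covering the monitored area (assumed)
IMG_H, IMG_W = 480, 640              # VGA frame size

def centroid_to_zone(xc: float, yc: float) -> tuple:
    """Return the (row, col) lighting zone that contains the centroid (xc, yc)."""
    row = min(int(yc * GRID_ROWS / IMG_H), GRID_ROWS - 1)
    col = min(int(xc * GRID_COLS / IMG_W), GRID_COLS - 1)
    return row, col

def lighting_decision(occupied_zones: set) -> dict:
    """Full brightness in occupied zones, reduced brightness elsewhere (levels in percent)."""
    return {(r, c): (100 if (r, c) in occupied_zones else 10)
            for r in range(GRID_ROWS) for c in range(GRID_COLS)}
```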
4 Conclusion
Because the input of the intelligent lighting control system is obtained from the video image of the lighting area by digital image processing, and the output control signals are transmitted by PLC Bus power-line carrier communication, the system based on target positioning and PLC Bus technology offers simple wiring, high reliability and easy maintenance.
Moreover, the control strategy can be made very user-friendly: illumination is adjusted automatically, lamps are switched on and off automatically, and lighting is controlled locally in the areas where people are present. The system provides a comfortable, scientific and economical lighting environment and represents an important direction in the development of advanced lighting control.