0 Introduction
Traffic accident injuries have increasingly become a global public hazard that threatens human life and safety. Detecting traffic accidents in time and reporting them promptly can effectively reduce casualties.
At present, reducing traffic accident injuries relies mainly on traffic accident detection devices, which are generally divided into vehicle detection devices based on magnetic signals, spectrum signals and video signals. These devices use cameras, ultrasonic waves or microwaves to detect traffic events and mainly process macroscopic traffic-flow information. Such indirect detection suffers from a low recognition rate and long delay, making it difficult for victims to receive timely and effective rescue after an accident occurs.
A traffic accident produces a loud collision sound whose spectrum differs from that of other sounds, so vehicle accidents can be detected by collecting and analyzing the sound signals around the vehicle; the accident-scene information can then be obtained in real time and an alarm issued. This approach is therefore better than traffic-flow analysis in terms of immediacy, and the accident recognition rate can also be improved. Yunlong Zhang proposed using the wavelet transform to analyze vehicle sounds for accident detection and achieved good recognition results. Chen Qiang and colleagues at Jilin University used this method to analyze and classify vehicle noise and were able to distinguish the collision information of different types of vehicles. However, these algorithms were designed and validated only through off-line computer data analysis; no actual hardware device has been built on this basis, and the accuracy of the algorithms still needs improvement. It is therefore of great practical value to design a vehicle collision sound detection device with strong real-time performance and high accuracy.
This paper applies wavelet analysis and pattern recognition to vehicle noise signals and presents a DSP-based vehicle collision sound detection device. The device can effectively detect vehicle collision events and automatically identify traffic accidents. Compared with existing traffic accident detection devices, it offers a high recognition rate, strong real-time performance and low cost.
1 Hardware Design
The block diagram of the collision detection device we designed is shown in Figure 1. A sound sensor first collects the various sound signals. The electrical signal output by the sensor is amplified by the amplifier circuit and fed to the analog input of the sound acquisition chip, which performs A/D conversion and sends the digitized signal to the DSP module for further processing. The DSP module processes the collected sound information in real time to determine whether a vehicle collision has occurred. The memory module is connected to the DSP module; it stores the data to be processed and the program code, and provides temporary working storage for the DSP module. The alarm module and the communication module communicate with the external rescue center: once the DSP module detects a vehicle collision, the alarm module sends an alarm message. The functions of the main modules are introduced below.
1.1 Sound collection module
The sound collection module uses a capacitive sound sensor with a frequency response of 30 Hz to 18 kHz. Since the frequency of vehicle noise signals generally does not exceed 10 kHz, the sensor can capture them well. The sensor sends the collected analog signal to the amplifier circuit for amplification before it is passed to the sound acquisition chip.
The sound acquisition chip is a TLV320AIC23B (AIC23 for short), a high-performance stereo audio codec from TI. With an audio bandwidth of up to 48 kHz, it can meet the acquisition requirements of sound signals including noise. AIC23 performs two-channel stereo A/D conversion on the incoming analog signal and supports 16-bit, 20-bit, 24-bit and 32-bit sample words at sampling rates from 8 kHz to 96 kHz. This system samples the external sound signal at 32 kHz, collecting 32,000 samples per second, with the sample word length set to 16 bits, so each analog value becomes a 16-bit digital value after A/D conversion. AIC23 then transfers the digitized data to the DSP module for further processing.
In this system, the MODE pin of AIC23 is set to 0, the control interface works in I2C mode, and the data interface between AIC23 and the DSP module works in DSP mode. The DSP module can thus configure AIC23 and receive the data it collects.
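As a rough illustration of this configuration step, the following C sketch writes the AIC23 control registers over I2C. The register addresses follow the TLV320AIC23B register map, but the helper i2c_write_word() is a hypothetical driver routine, and the data values shown (DSP mode, 16-bit words, sample-rate bits for 32 kHz) are indicative rather than the exact values used in this device.

```c
/* Hedged sketch: AIC23 control-register setup over I2C.
 * i2c_write_word() is a hypothetical I2C driver call; values are illustrative. */
extern void i2c_write_word(unsigned char addr, unsigned short word);

#define AIC23_I2C_ADDR     0x1A   /* 7-bit device address with CS pin low  */
#define AIC23_RESET        0x0F   /* software reset register               */
#define AIC23_DIGITAL_IF   0x07   /* digital audio interface format        */
#define AIC23_SAMPLE_RATE  0x08   /* sample-rate control                   */
#define AIC23_DIGITAL_ACT  0x09   /* digital interface activation          */

static void aic23_write(unsigned char reg, unsigned short val)
{
    /* AIC23 control word: 7-bit register address followed by 9 data bits */
    unsigned short word = (unsigned short)(((reg & 0x7F) << 9) | (val & 0x1FF));
    i2c_write_word(AIC23_I2C_ADDR, word);
}

void aic23_init(void)
{
    aic23_write(AIC23_RESET,       0x000);  /* reset the codec                    */
    aic23_write(AIC23_DIGITAL_IF,  0x003);  /* DSP mode, 16-bit word length       */
    aic23_write(AIC23_SAMPLE_RATE, 0x018);  /* SR/BOSR bits chosen for 32 kHz;
                                               the exact value depends on MCLK    */
    aic23_write(AIC23_DIGITAL_ACT, 0x001);  /* activate the digital interface     */
}
```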
1.2 DSP module
The DSP module is the core of the whole system, handling audio signal acquisition, control, storage, processing and communication with the outside world. The TMS320VC5509 (VC5509 for short) DSP from TI is selected. It is a very cost-effective 16-bit fixed-point DSP with multiple high-performance computing units, a 144 MHz system clock, an instruction throughput of up to 100 MMACS (million multiply-accumulate operations per second), and a rich set of on-chip expansion interfaces.
VC5509 has two multi-channel buffered serial ports (McBSPs). A McBSP provides the same basic functions as a standard serial interface and extends them. The audio acquisition chip AIC23 used in this system is connected to the DSP through a McBSP, as shown in Figure 2. CLKX is the transmit clock and CLKR is the receive clock; both are connected to the AIC23 system clock BCLK. FSX and FSR provide transmit and receive frame synchronization and correspond to the LRCIN and LRCOUT pins of AIC23. The data transmit pin DX and the data receive pin DR are connected to DIN and DOUT of AIC23 respectively, completing serial data transmission and reception.
VC5509 also contains six programmable DMA channels. The DMA controller can transfer data between internal memory, external memory and on-chip peripherals without CPU intervention, and can send an interrupt request to the CPU when a transfer completes. This system uses DMA channel 0 to read data from the acquisition module and write it to a fixed region of external memory. When the acquisition buffer is full, the DMA controller generates an interrupt that causes the DSP to execute the data processing routine. Using DMA reduces the number of system interrupts and significantly improves the system's operating speed.
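A minimal sketch of the ping-pong buffering this implies is shown below: DMA channel 0 fills one buffer with codec samples while the CPU processes the other, and the completion interrupt only swaps the buffers and sets a flag. dma0_restart() is a hypothetical hook standing in for the actual DMA reload code.

```c
/* Sketch of double-buffered (ping-pong) acquisition driven by DMA channel 0. */
#define FRAME_LEN  32000                  /* one second of samples at 32 kHz */

static short buf_a[FRAME_LEN];
static short buf_b[FRAME_LEN];
static short *capture_buf = buf_a;        /* half currently filled by DMA    */
static short *process_buf = buf_b;        /* half handed to the detector     */
volatile int frame_ready = 0;

extern void dma0_restart(short *dst, unsigned len);  /* hypothetical hook    */

/* Called from the DMA channel-0 completion interrupt. */
void dma0_isr(void)
{
    short *t = capture_buf;               /* swap the two halves             */
    capture_buf = process_buf;
    process_buf = t;
    dma0_restart(capture_buf, FRAME_LEN); /* keep acquiring without CPU copy */
    frame_ready = 1;                      /* tell the main loop to run       */
}
```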
The DSP module runs the detection software, analyzing the sound data gathered by the sound collection module to determine whether a collision has occurred. The DSP module also drives an I/O pin connected to the alarm and communication module; the signal on this pin tells that module whether a collision has occurred and whether an alarm should be raised.
1.3 Memory Module
VC5509 supports a unified addressing space. Its on-chip memory totals 320 KB, comprising 128K×16-bit RAM and 32K×16-bit ROM, and can be expanded with up to 8M×16 bits of off-chip memory according to the user's needs. This system uses the HY57V64 chip, an SDRAM organized as four 1M×16-bit banks. The chip receives and stores the sound data transferred by the DSP module; when the DSP module needs to process data, it reads the corresponding data back from the appropriate locations on the chip.
1.4 Alarm module
The alarm module is equipped with GPS and GSM modules to obtain position and speed information and to communicate with the server. The DSP module obtains the vehicle's speed and acceleration information from the GPS module of the alarm module and uses it alongside the acoustic signal as auxiliary information. The alarm module in turn obtains the real-time vehicle collision decision from the DSP module; once a collision signal is detected, the alarm module alerts the server.
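Assuming the GSM side of the alarm module is driven with standard text-mode SMS AT commands (the article does not specify the exact message format, and a GPRS data link to the server would also be possible), an alarm report could look like the following sketch; uart_puts() and the destination number are placeholders.

```c
/* Hedged sketch: reporting a collision by SMS using standard AT commands. */
#include <stdio.h>

extern void uart_puts(const char *s);           /* hypothetical UART helper */

void send_collision_alarm(double lat, double lon, double speed_kmh)
{
    char msg[128];

    uart_puts("AT+CMGF=1\r");                   /* select SMS text mode     */
    uart_puts("AT+CMGS=\"+8613800000000\"\r");  /* placeholder number; the
                                                   '>' prompt is not awaited
                                                   in this sketch           */
    sprintf(msg, "COLLISION lat=%.5f lon=%.5f v=%.1fkm/h", lat, lon, speed_kmh);
    uart_puts(msg);
    uart_puts("\x1A");                          /* Ctrl-Z terminates the SMS */
}
```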
2 Software and Algorithm Design
The system software we designed is a program running on the DSP; it controls the operation of each system module and performs the algorithm computations. It is developed in TI's CCS integrated development environment in C and assembly language.
The software first performs initialization, configuring the operating parameters of VC5509 and AIC23. The phase-locked loop of the VC5509 chip is configured to set the system clock to 144 MHz. McBSP0 of VC5509 is opened and started for input and output operations. DMA channel 0 is configured to work in compatible mode and to stop transferring data and raise an interrupt when a block transfer completes. AIC23 is configured over the I2C control interface to use the DSP data-transfer mode, and is then started to sample the sound signal at a rate of 32 kHz.
After initialization, sampling and detection are performed. Once the collected signal satisfies the framing condition, that is, once another full second of sound data has been collected, the automatic sound detection algorithm is executed.
The automatic sound detection algorithm reads the frame data and makes a decision. If a non-collision event is detected, sampling and detection continue while waiting for the next second of data to be processed; during this time the software idles in an empty loop. When the algorithm detects a collision event, it passes the information to the communication module, and after the GPS module confirms the speed and position information, the alarm module raises the alarm. The software flow is shown in Figure 3.
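The overall control flow of Figure 3 can be summarized by the following C sketch. The *_init, read_gps_fix() and detect_collision() routines are placeholders tying together the other sketches in this article; the detector itself corresponds to steps (1) to (4) described below.

```c
/* Minimal sketch of the main control loop; all external routines are placeholders. */
#define FS         32000                  /* sampling rate, Hz               */
#define FRAME_PTS  (2 * FS)               /* one 2-second analysis frame     */

extern volatile int frame_ready;          /* set by the DMA completion ISR   */
extern short frame_buf[FRAME_PTS];        /* latest 2 s of audio             */

extern void system_init(void);            /* PLL, McBSP0, DMA0, AIC23 setup  */
extern int  detect_collision(const short *x, unsigned n);
extern void read_gps_fix(double *lat, double *lon, double *speed);
extern void send_collision_alarm(double lat, double lon, double speed);

int main(void)
{
    double lat, lon, speed;

    system_init();
    for (;;) {
        while (!frame_ready)              /* empty loop until one new second */
            ;                             /* of audio has been collected     */
        frame_ready = 0;

        if (detect_collision(frame_buf, FRAME_PTS)) {
            read_gps_fix(&lat, &lon, &speed);
            send_collision_alarm(lat, lon, speed);
        }
    }
}
```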
The design of the automatic sound detection algorithm is the core part of the software flow and is described in detail below. Because different sound signals have different amplitude-frequency and phase-frequency characteristics, their amplitudes in the various frequency bands also differ, so the energy changes of the individual frequency components can be used for target recognition.
The automatic sound detection algorithm includes four parts: sound signal acquisition and framing, feature extraction, feature dimension reduction, and feature classification. The specific implementation steps are as follows:
(1) Acquisition and framing. The acquired signal is divided into frames of 2 seconds, with 1 second of overlap between consecutive frames. At the 32 kHz sampling rate, 65,536 points of each 2-second segment are processed at a time (the same one-half overlap between segments is also used when preparing the training data). This yields the frame data Datai (1 ≤ i ≤ 65536).
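A minimal sketch of this framing scheme follows, assuming each 2-second frame (64,000 raw samples at 32 kHz) is subsequently zero-padded to the 65,536 points mentioned above before the transform.

```c
/* Sketch of 2 s frames with a 1 s hop: the previous second is kept and the
 * newest second is appended, so consecutive frames overlap by half. */
#include <string.h>

#define FS        32000
#define HOP_PTS   FS                      /* 1 s of new samples per frame    */
#define FRAME_PTS (2 * FS)                /* each frame spans 2 s            */

short frame_buf[FRAME_PTS];

/* Call once per second with the newest HOP_PTS samples. */
void push_second(const short *new_samples)
{
    /* shift the previous second to the front of the frame ...              */
    memmove(frame_buf, frame_buf + HOP_PTS, HOP_PTS * sizeof(short));
    /* ... and append the newest second behind it                           */
    memcpy(frame_buf + HOP_PTS, new_samples, HOP_PTS * sizeof(short));
}
```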
(2) Feature extraction. A discrete wavelet transform (DWT) is applied to each frame Datai (1 ≤ i ≤ 65536) to obtain its frequency-domain information, and the energy distribution over the resulting coefficient groups is computed and used as the feature for identifying traffic accidents. The algorithm uses the DB1 wavelet. Each frame is first decomposed by one level; the high-frequency coefficients are then given a two-level complete decomposition and the low-frequency coefficients a ten-level one-sided decomposition, producing 18 groups of coefficients. The feature vector F = [E1, E2, ..., E18] is then computed, where En is given by:
En = ∑_{k=1}^{N} Cn(k)^2, where Cn(k) is the k-th coefficient of the n-th coefficient group and N is the length of Cn.
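The two building blocks of this step, a single DB1 (Haar) analysis step and the band-energy computation En, can be sketched in C as follows; the full 18-group decomposition simply applies haar_step() repeatedly to the low- and high-frequency outputs according to the decomposition tree described above.

```c
/* One Haar (DB1) analysis step: n input samples -> n/2 approximation (low)
 * and n/2 detail (high) coefficients. */
void haar_step(const float *x, unsigned n, float *low, float *high)
{
    const float s = 0.70710678f;          /* 1/sqrt(2)                       */
    unsigned k;
    for (k = 0; k < n / 2; k++) {
        low[k]  = (x[2 * k] + x[2 * k + 1]) * s;
        high[k] = (x[2 * k] - x[2 * k + 1]) * s;
    }
}

/* Energy En of one coefficient group Cn of length len (sum of squares). */
float band_energy(const float *c, unsigned len)
{
    float e = 0.0f;
    unsigned k;
    for (k = 0; k < len; k++)
        e += c[k] * c[k];
    return e;
}
```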
(3) Feature dimensionality reduction. The feature obtained in the previous step is reduced in dimension. Based on the extracted feature vector F, the algorithm applies an outlier detection scheme built on principal component analysis (PCA) to detect traffic accident collision sounds. The original feature F is transformed as Y = H·F, where H is the projection matrix obtained by the PCA method and Y is the reduced-dimension feature.
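A sketch of the projection step is given below. Mean-centering and the subspace dimension (here 4) are standard PCA details assumed for illustration; the article itself only specifies the projection matrix H obtained from the training data.

```c
/* Sketch of PCA projection: y = H^T (f - mean), with H computed offline
 * from the training samples. The subspace dimension is an example value. */
#define FEAT_DIM 18                       /* length of the energy feature F  */
#define PCA_DIM   4                       /* example subspace dimension      */

/* h[FEAT_DIM][PCA_DIM] holds the principal directions column by column. */
void pca_project(const float f[FEAT_DIM],
                 const float mean[FEAT_DIM],
                 const float h[FEAT_DIM][PCA_DIM],
                 float y[PCA_DIM])
{
    int i, j;
    for (j = 0; j < PCA_DIM; j++) {
        y[j] = 0.0f;
        for (i = 0; i < FEAT_DIM; i++)
            y[j] += h[i][j] * (f[i] - mean[i]);   /* centered projection     */
    }
}
```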
(4) Feature classification. Sound samples recorded around the vehicle during normal operation and during traffic accidents are collected, and a classifier is trained on them to classify the sounds heard while driving. The classifier outputs one of two classes: normal driving sound, or the collision sound of a serious traffic accident. The decision rule is as follows:
For each component of the projected feature, a reference interval Ii is obtained from the feature-component projections of the training sample set. Let n be the number of components that deviate from their given intervals Ii; when n exceeds a given threshold, the frame is judged to be a collision, otherwise it is not.
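The decision rule can be sketched as follows; the interval bounds lo[i] and hi[i] come from the training-sample projections, and the threshold `limit` is a tuning parameter whose value is not given in the article.

```c
/* Sketch of the interval-deviation decision rule of step (4). */
#define PCA_DIM 4                         /* must match the projection step  */

int classify_frame(const float y[PCA_DIM],
                   const float lo[PCA_DIM],
                   const float hi[PCA_DIM],
                   int limit)
{
    int i, n = 0;                         /* n = number of components that   */
    for (i = 0; i < PCA_DIM; i++)         /*     fall outside their interval */
        if (y[i] < lo[i] || y[i] > hi[i])
            n++;
    return n > limit;                     /* 1 = collision, 0 = non-collision */
}
```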
3 Experimental results and analysis
The experiment uses 200 samples in total, divided equally into collision samples and non-collision samples, 100 of each. The collision samples were collected from vehicle manufacturers' crash tests, and the non-collision samples from common sounds in daily life. Each collision sample is 10 s long and contains the sound of a complete vehicle collision mixed with common noise such as braking; each non-collision sample is 20 s long and covers several types of sound such as natural ambience, music, and speech. Of the collision samples, 20 are used to train the algorithm and the remaining 80 are used to evaluate it. The spectrum of an ordinary sound is shown in Figure 4 a), and the spectrum of a typical collision sound sample in Figure 4 b).
Our collision sound detector was tested in a simulated environment that reproduced the real scene as closely as possible: collision signals recorded in real scenes were replayed repeatedly through a low-distortion amplifier. The results were compared with the experimental results reported in the literature. The overall success rate is the ratio of the number of correctly classified samples to the total number of test samples. The experimental results are shown in Table 1.
The experimental results show that both collision and non-collision samples are classified with high accuracy, indicating that the algorithm design is sound and achieves the goal of collision sound classification under moderate interference. Compared with the results reported in the literature, the accuracy improves for both collision and non-collision samples.
4 Conclusion
The vehicle collision alarm device uses the TMS320VC5509 chip for signal processing and the TLV320AIC23B as the acquisition chip, making it small and inexpensive. The device applies frame-based pattern recognition to the acoustic signal so that vehicle collisions can be reported promptly. The experimental results show that the system is highly reliable, has a short delay, and can send out alarm signals in time. Its application can improve the safety of motor-vehicle drivers and passengers, thereby reducing traffic accident casualties, and it has good application prospects.