Design of a digital hearing aid based on the TMS320VC5416 DSP

Publisher: meilidaowl | Last updated: 2010-07-05 | Source: 现代电子技术 (Modern Electronics Technique) | Keywords: TMS320VC5416, DSP

0 Introduction

With the development of society and growing public attention to hearing-impaired patients, the development of hearing aids has attracted increasing interest. However, because the causes of hearing impairment differ, patients' hearing losses differ greatly, so each patient has different requirements for hearing-aid compensation. Modern hearing-aid technology has now entered the era of fully digital hearing aids, and the digital signal processing algorithms that effectively improve their performance have also received more attention. This paper presents a digital hearing aid design based on the TMS320VC5416 that aims to meet the hearing needs of hearing-impaired patients.

1 System composition and working principle

1.1 System composition

Based on the technical requirements of hearing aids, TI's C54x-series DSP TMS320VC5416 (hereinafter C5416) and the audio codec TLV320AIC23 (hereinafter AIC23) are selected.

The AIC23 is a high-performance stereo audio codec from TI. A/D and D/A converters are integrated on the chip, which uses sigma-delta (Σ-Δ) oversampling and includes a built-in headphone output amplifier. Its operating voltages are compatible with the core and I/O voltages of the C5416, allowing a seamless connection to the C54x serial ports, and its power consumption is very low. This makes the AIC23 an ideal analog audio device for a digital hearing-aid design.

The system structure is shown in Figure 1. It mainly comprises the DSP module, the audio processing module, the JTAG interface, the storage module and the power module. The analog speech signal enters the AIC23 through MIC or LINE IN, is converted from analog to digital, and is passed to the C5416 through the McBSP serial port. After processing and compensation by the required algorithms, the speech signal needed by the hearing-impaired patient is obtained; it is then converted back to analog by the AIC23 and output through a speaker or earphone.

Figure 1. System structure

1.2 Interface design between C5416 and AIC23

Figure 2 shows the interface between the C5416 and the AIC23. Since the sampled output of the AIC23 is serial data, the serial transmission protocol of the DSP must be matched to it, and the McBSP is the most suitable port for transferring speech signals. Pin 22 (MODE) of the AIC23 is tied high so that the control interface accepts SPI-format serial data from the DSP. The digital control interface (SCLK, SDIN, CS) is connected to McBSP1; each control word is 16 bits long and is transmitted MSB first. The digital audio port signals LRCOUT, LRCIN, DOUT, DIN and BCLK are connected to McBSP0. The DSP operates in master mode and the AIC23 in slave mode, i.e. the BCLK clock signal is generated by the DSP.

Figure 2. Interface between the C5416 and the AIC23

BCLKX0 and BCLKR0 are connected in parallel to the BCLK input of the AIC23, so that the serial-port clock is available both when sending and when receiving data. The input/output frame-synchronization signals LRCIN and LRCOUT start serial data transfers and receive the frame-sync signals from the DSP.

BFSX0 and BFSR0 are connected to LRCIN and LRCOUT, and BDX0 and BDR0 are connected to DIN and DOUT of the AIC23, respectively, completing the digital communication link between the DSP and the AIC23.
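For illustration, each 16-bit control word sent over McBSP1 consists of a 7-bit register address in the upper bits followed by 9 data bits, as defined in the AIC23 datasheet. The sketch below shows how such words might be composed and issued during codec initialization; `mcbsp1_write()` is a placeholder for the board-specific routine that shifts one word out MSB first, and the initialization values are illustrative only.

```c
/* AIC23 control word: bits 15..9 = register address, bits 8..0 = register data. */
#define AIC23_LEFT_LINE_IN   0x00
#define AIC23_RIGHT_LINE_IN  0x01
#define AIC23_ANALOG_PATH    0x04
#define AIC23_DIGITAL_PATH   0x05
#define AIC23_POWER_DOWN     0x06
#define AIC23_DIGITAL_IF     0x07
#define AIC23_SAMPLE_RATE    0x08
#define AIC23_DIGITAL_ACT    0x09
#define AIC23_RESET          0x0F

/* Compose a 16-bit control word from a 7-bit register address and a 9-bit value. */
static unsigned int aic23_ctrl_word(unsigned int reg, unsigned int value)
{
    return ((reg & 0x7Fu) << 9) | (value & 0x1FFu);
}

/* Placeholder: shifts one 16-bit word out of McBSP1, MSB first. */
extern void mcbsp1_write(unsigned int word);

static void aic23_init(void)
{
    mcbsp1_write(aic23_ctrl_word(AIC23_RESET,       0x000)); /* reset the codec            */
    mcbsp1_write(aic23_ctrl_word(AIC23_POWER_DOWN,  0x000)); /* power all sections on      */
    mcbsp1_write(aic23_ctrl_word(AIC23_DIGITAL_ACT, 0x001)); /* activate digital interface */
}
```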

2 System Implementation

2.1 Basic characteristics of speech

Sound is a wave that the human ear can perceive; audible frequencies range from 20 Hz to 20 kHz. Speech is sound produced by the human vocal organs that carries grammar and meaning, and its frequency content extends up to about 15 kHz.

According to the form of excitation, speech can be divided into voiced, unvoiced and plosive sounds. The characteristics of a voice are basically determined by factors such as the pitch period and the formants. When a voiced sound is produced, the airflow through the glottis causes the vocal cords to vibrate, generating a quasi-periodic train of excitation pulses. The period of this pulse train is called the pitch period, and its reciprocal is the pitch frequency.

The human vocal tract and nasal passage can be regarded as acoustic tubes of non-uniform cross-section. The resonant frequencies of these tubes are called formants. Changing the shape of the vocal tract produces different sounds. Formants are denoted by frequencies of increasing order: F1, F2, F3 and so on are called the first formant, the second formant, etc. To improve the quality of received speech, as many formants as possible should be preserved; in practice the first three formants are the most important, and the details vary from person to person.

2.2 Speech Enhancement

In real application environments, speech is disturbed by environmental noise to varying degrees. Speech enhancement processes the noisy speech to reduce the influence of the noise and improve the listening conditions.

The interference encountered in actual speech may include the following categories:

(1) Periodic noise: for example electrical interference or interference from engine rotation. This type of interference appears as discrete narrow peaks in the frequency domain; in particular, 50 Hz or 60 Hz AC hum is a common cause of periodic noise.

(2) Impulse noise: for example interference caused by electric sparks and discharges. This type of interference appears as sudden narrow pulses in the time domain and can be removed in the time domain by setting a threshold based on the mean amplitude of the noisy speech signal (a sketch of this approach follows the list).

(3) Broadband noise: usually refers to Gaussian noise or white noise, which is characterized by a wide frequency band that covers almost the entire speech frequency band. It has many sources, including wind, breathing noise and general random noise sources.
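A minimal sketch of the time-domain thresholding described in item (2): the threshold is taken as a multiple of the frame's mean absolute amplitude, and samples exceeding it are replaced by interpolating between their neighbours. The scale factor `k` is an assumed tuning parameter, not a value from the original design.

```c
#include <stdlib.h>

/* Replace impulse-like outliers in a speech frame in place.
 * The threshold is k times the mean absolute amplitude of the frame. */
void remove_impulses(short *x, int n, float k)
{
    long  sum = 0;
    float thresh;
    int   i;

    for (i = 0; i < n; i++)
        sum += labs((long)x[i]);
    thresh = k * (float)sum / (float)n;

    for (i = 1; i < n - 1; i++) {
        if ((float)abs(x[i]) > thresh)
            x[i] = (short)(((long)x[i - 1] + (long)x[i + 1]) / 2); /* interpolate over the pulse */
    }
}
```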

2.3 Algorithm Analysis

Noise causes the patient's speech recognition rate to drop significantly, so denoising and compensation are essential parts of a hearing aid. The human ear responds to sounds between about 25 Hz and 22 kHz, but most of the useful information in speech lies between 200 Hz and 3,500 Hz. According to the perceptual characteristics of the human ear and experimental measurements, the second formant, which is particularly important for speech perception and recognition, mostly lies above 1 kHz.

2.3.1 Periodic noise elimination

Periodic noise generally consists of a number of discrete spectral peaks and typically originates from the periodic operation of engines; electrical interference, especially 50-60 Hz AC hum, can also cause periodic noise. A bandpass filter can therefore effectively remove periodic noise as well as high-frequency components above 3,500 Hz.

The design of an IIR digital filter can draw on mature analog prototypes such as Butterworth, Chebyshev and elliptic filters. The linear difference equation of an IIR digital filter is:
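Written out in its general form, this difference equation is

$$y(n)=\sum_{i=0}^{M} b_i\,x(n-i)-\sum_{k=1}^{N} a_k\,y(n-k)$$

where x(n) is the input sequence, y(n) the output sequence, and b_i, a_k the filter coefficients obtained from the chosen analog prototype.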

Figure 3 shows the real-time filtering effect of the filter on dynamic input data, visualized in the Matlab environment.
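To suggest how such a filter could run on the DSP, the coefficients designed in Matlab can be exported as second-order sections and evaluated sample by sample with a direct-form biquad. The sketch below is illustrative; the actual coefficient values and section count depend on the filter designed for this system and are not reproduced here.

```c
/* One second-order section (biquad) of an IIR bandpass filter, direct form I:
 * y(n) = b0*x(n) + b1*x(n-1) + b2*x(n-2) - a1*y(n-1) - a2*y(n-2)            */
typedef struct {
    float b0, b1, b2;   /* numerator coefficients            */
    float a1, a2;       /* denominator coefficients (a0 = 1) */
    float x1, x2;       /* input delay line                  */
    float y1, y2;       /* output delay line                 */
} biquad_t;

static float biquad_step(biquad_t *s, float x)
{
    float y = s->b0 * x + s->b1 * s->x1 + s->b2 * s->x2
            - s->a1 * s->y1 - s->a2 * s->y2;
    s->x2 = s->x1;  s->x1 = x;       /* shift the input history  */
    s->y2 = s->y1;  s->y1 = y;       /* shift the output history */
    return y;
}
```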

2.3.2 Broadband Noise Removal Based on Short-Time Spectrum Estimation

Because the short-time spectrum of speech is strongly correlated from frame to frame, while noise shows little frame-to-frame correlation, a method based on short-time spectral estimation can be used to estimate the original speech from the noisy speech. Moreover, since the human ear is insensitive to the phase of speech, the estimation can be restricted to the magnitude of the short-time spectrum.

2.3.3 Spectral subtraction method


Spectral subtraction is an effective method for a single-microphone recording system that has no reference signal source. Because the noise is locally stationary, its power spectrum during speech can be assumed to be the same as during the pauses immediately before and after the speech, so the "silent frames" preceding and following the speech are used to estimate the noise.
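A minimal sketch of the core subtraction step, operating on one frame's magnitude spectrum together with a noise magnitude estimate taken from the silent frames; the over-subtraction factor `alpha` and spectral floor `beta` are assumed tuning parameters rather than values from the original design.

```c
/* Basic magnitude spectral subtraction for one frame.
 * mag[]   : magnitude spectrum of the noisy frame (length nbins)
 * noise[] : noise magnitude estimate obtained from the silent frames
 * The result is written back into mag[]; the noisy phase is reused when
 * the frame is transformed back to the time domain.                      */
void spectral_subtract(float *mag, const float *noise, int nbins,
                       float alpha, float beta)
{
    int k;
    for (k = 0; k < nbins; k++) {
        float m      = mag[k] - alpha * noise[k];  /* subtract the noise estimate */
        float floor_ = beta * mag[k];              /* spectral floor              */
        mag[k] = (m > floor_) ? m : floor_;        /* avoid negative magnitudes   */
    }
}
```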

The principle block diagram and simulation results of the spectral subtraction method are shown in Figure 4 and Figure 5. After the speech signal is windowed, the signal is denoised using the known noise power spectrum information.

Figure 4. Block diagram of the spectral subtraction method

Figure 5. Simulation results of the spectral subtraction method

2.4 Noise Cancellation Method

Noise cancellation is the most basic subtraction-type algorithm; its principle is to subtract the noise directly from the noisy speech. Because broadband noise and the speech signal overlap completely in both the time and frequency domains, the noise is difficult to remove, so nonlinear processing is required and the adaptive filter must be adjusted continuously.

In Figure 6, one channel collects the noisy speech and the other collects the noise. The noisy speech sequence s(n) and the noise sequence d(n) are Fourier transformed to obtain the spectral components S_k(ω) and D_k(ω). The filtered noise component D_k(ω) is subtracted from the noisy-speech spectrum, the phase of the noisy speech is reattached, and the result is converted back to a time-domain signal by the inverse Fourier transform. Against a strong noise background this method achieves a good noise-cancellation effect.

Figure 6. Noise cancellation in a dual-channel acquisition system

In practice, the two acquisition channels must be isolated so that both channels do not capture the noisy speech. To make the collected noise track the noise component embedded in the noisy speech as closely as possible, an adaptive filter is well suited to the task.
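As a sketch of this adaptive stage, a standard LMS filter can shape the reference noise channel so that it matches the noise component embedded in the noisy speech before subtraction; the filter length `L` and step size `mu` below are assumptions for illustration, not parameters taken from this design.

```c
#define L 32  /* adaptive filter length (assumed) */

/* One LMS iteration of a two-channel noise canceller.
 * d      : current noisy-speech sample (primary channel)
 * xbuf[] : last L samples of the reference noise channel, xbuf[0] newest
 * w[]    : adaptive filter weights, updated in place
 * mu     : LMS step size
 * Returns the enhanced speech sample e = d - y.                          */
float lms_cancel(float d, const float *xbuf, float *w, float mu)
{
    float y = 0.0f, e;
    int   i;

    for (i = 0; i < L; i++)          /* filter the reference noise */
        y += w[i] * xbuf[i];

    e = d - y;                       /* error = enhanced speech    */

    for (i = 0; i < L; i++)          /* LMS weight update          */
        w[i] += mu * e * xbuf[i];

    return e;
}
```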

Figure 7 shows an example of the enhanced left-channel speech obtained with the noise cancellation method.

Figure 7. Enhanced left-channel speech obtained with noise cancellation

2.4.1 Multi-channel Compression Algorithm

With hearing loss, the hearing threshold generally rises, which reduces the dynamic range of hearing. The degree of this reduction depends on frequency, and the loss is usually larger at high frequencies. Among the signal processing algorithms of a digital hearing aid, hearing compensation is one of the most central. Its purpose is to compress and amplify the sound, mapping sounds that fall within the normal hearing range into the residual hearing range of the hearing-impaired listener, while maintaining auditory comfort as far as possible and improving the clarity and intelligibility of the sound.

Filters divide the signal into several independent frequency regions, called channels, which are then resynthesized. This algorithm mainly processes the signal in the time domain. In each channel, the band is amplified by a different amount according to the patient's hearing loss, and different compression settings are applied to the different frequency components; the synthesized sound is finally delivered to the patient's ear canal. This method is used here to process the signal; in this system the mid-frequency band is amplified appropriately, and the resulting sound is well received. Figure 8 shows the three-channel frequency-division synthesis diagram.

Figure 8. Three-channel frequency-division synthesis
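A rough sketch of the per-channel compression and synthesis described above; the band signals are assumed to come from the band-splitting filters, and the knee points, compression ratios and gains per channel are illustrative placeholders rather than values fitted to a patient's audiogram.

```c
#include <math.h>

#define NCHAN 3   /* three channels, as in Figure 8 */

/* Per-channel compression parameters (illustrative values only). */
static const float knee_db[NCHAN] = { -40.0f, -45.0f, -50.0f }; /* compression threshold */
static const float ratio[NCHAN]   = {   2.0f,   3.0f,   4.0f }; /* compression ratio     */
static const float gain_db[NCHAN] = {  10.0f,  15.0f,  20.0f }; /* linear gain per band  */

/* Compress one sample of one band; level_db is the band's short-term level in dBFS. */
static float compress_band(int ch, float x, float level_db)
{
    float g_db = gain_db[ch];
    if (level_db > knee_db[ch])      /* above the knee: reduce gain by (1 - 1/ratio) per dB */
        g_db -= (level_db - knee_db[ch]) * (1.0f - 1.0f / ratio[ch]);
    return x * powf(10.0f, g_db / 20.0f);
}

/* The output sent to the ear canal is the sum of the compressed bands. */
float synthesize(const float band[NCHAN], const float level_db[NCHAN])
{
    float y = 0.0f;
    int   ch;
    for (ch = 0; ch < NCHAN; ch++)
        y += compress_band(ch, band[ch], level_db[ch]);
    return y;
}
```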

2.5 System Implementation

When the system is implemented, the target board is connected to the PC through the USB interface, and the target project is debugged online through CCS.

The main tasks of the target project are to initialize the TMS320C5416, manage the on-board resources and execute the audio processing algorithms. To sample and output the audio signals correctly, every channel of the McBSP of the TMS320C5416, including its 27 related registers, must be configured to satisfy the timing requirements (bit synchronization, frame synchronization, clock signals, etc.) of the TMS320C5416 and of the other hardware chips. Figure 9 shows the playback of the original speech signal in the system, and Figure 10 compares the original speech with the processed speech while CCS is connected to the DSP hardware.
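For orientation, once the McBSP registers have been configured, the per-sample flow can be written as a simple polling loop. The access routines below (`mcbsp0_rrdy`, `mcbsp0_xrdy`, `mcbsp0_read`, `mcbsp0_write`) and `process_sample()` are placeholders for the board-support and algorithm code, not verified register-level definitions.

```c
/* Skeleton of the audio loop: read a sample from the AIC23 over McBSP0,
 * run the hearing-aid processing chain, and write the result back.
 * All access routines are placeholders for the board-support layer.    */
extern int   mcbsp0_rrdy(void);             /* receive ready?  */
extern int   mcbsp0_xrdy(void);             /* transmit ready? */
extern short mcbsp0_read(void);             /* read DRR        */
extern void  mcbsp0_write(short sample);    /* write DXR       */

extern float process_sample(float x);       /* denoising + compression chain */

void audio_loop(void)
{
    for (;;) {
        short in, out;

        while (!mcbsp0_rrdy())
            ;                                /* wait for a new input sample  */
        in = mcbsp0_read();

        out = (short)process_sample((float)in);

        while (!mcbsp0_xrdy())
            ;                                /* wait until the transmitter is free */
        mcbsp0_write(out);
    }
}
```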

Figure 9. Playback of the original speech signal in the system

Figure 10. Comparison of the original and processed speech with CCS connected to the DSP hardware

3 Conclusion

The hearing aid designed in this project is miniaturized, integrated and convenient to use. Its parameters and design can also be adapted to the specific needs of individual patients. As society develops, hearing aids will be needed in certain situations not only by people with hearing impairments but also by people with normal hearing. Demand for hearing aids will keep evolving, and the exploration of the remaining problems will keep pace with the times, so that hearing aids serve people better and contribute to a more harmonious society.
