Signal Chain Design Considerations for Ultrasound Systems

High-performance ultrasound imaging systems are widely used in many medical scenarios. Over the past decade, discrete circuits in ultrasound systems have been replaced by highly integrated chips (ICs), and advanced semiconductor technology continues to drive performance optimization and miniaturization. These changes are enabled by chip-level technologies such as dedicated low-noise amplifiers, multi-channel low-power ADCs, integrated high-voltage transmitters, optimized silicon processes, and multi-chip module packaging, which together have reduced chip power consumption and size to roughly 20% of their original values. In addition, thanks to low-power, high-performance silicon processes, some beamforming pre-processing functions have been integrated into general-purpose analog or mixed-signal chips instead of dedicated digital processors. At the same time, advanced high-speed serial or wireless interfaces greatly reduce layout complexity and can transfer as much RF data as possible to systems-on-chip (SoCs), CPUs, or GPUs. Ultrasound applications have also expanded from dedicated radiological diagnosis to portable devices, real-time bedside monitoring, and point-of-care examinations.

This application note reviews the architecture and principles of ultrasound systems, analyzes system design considerations, surveys the advanced technologies applied to ultrasound chips, and finally explains the key analog parameters of medical ultrasound chips.

Note: This application note is based on the book chapters and papers published by Xu Xiaochen. If you need to reprint, please contact TI and the author.

Table of contents

Signal Chain Design Considerations for Ultrasound Systems

Summary

Image List

1. Medical Ultrasound Imaging

2. Principles of sound wave generation and propagation

3. Transducer specifications and image quality

4. Ultrasound imaging modes

A-mode and B-mode

Doppler ultrasound

Other imaging modes

5. Ultrasonic electronics

Transmitter and Receiver

Beamformer

Digital Signal Processing

6. Process selection in analog front-end design

7. Key analog parameters of ultrasound circuits

Overload recovery

Signal and Noise Modulation in Doppler Applications

Continuous Wave (CW) Doppler parameters

8. Conclusion

References

Image List

Figure 1. Simplified block diagram of a typical ultrasound system

Figure 2. Transducer vibration, sound wave propagation and reflection

Figure 3. Typical transducers: (a) single element transducer; (b) 1D array transducer; (c) 2D array transducer (Courtesy of USC, Vermon, and Philips)

Figure 4. Scanning modes: (a) A-mode scan line; (b) B-mode image; (c) 3D beam scan; and (d) B-mode (sub-images 1, 2, 3) and 3D (sub-image 4) clinical images (Courtesy of Philips)

Figure 5. Continuous wave (CW) Doppler measurement configuration

Figure 6. Pulsed Wave Doppler Measurement Configuration

Figure 7. Color Doppler imaging: (a) Image acquired in color Doppler and CW mode (courtesy of Philips); (b) Color Doppler showing carotid artery stenosis (courtesy of GE)

Figure 8. Block diagram of a typical ultrasound electronics circuit

Figure 9. Voltage-controlled amplifier for time gain compensation.

Figure 10. Transducer beamformer used to focus the acoustic beam in (a) transmit phase; (b) receive phase.

Figure 11. Schematic diagram of a digital beamformer

Figure 12. Comparison of CMOS and BiCMOS designs

Figure 13. Multi-chip module packaging

Figure 14. Recent significant improvements in AFE

Figure 15. Overload recovery (a) input signal; (b) output signal

Figure 16. PSMR (a) and IMD3 (b) description

Figure 17. Simplified block diagram of CW

Figure 18. Block diagram of mixer operation

1. Medical Ultrasound Imaging

Ultrasound is sound at frequencies above 20 kHz. Medical ultrasound imaging systems typically use frequencies between 1 MHz and 20 MHz, which can achieve sub-millimeter resolution. The first commercial ultrasound imaging systems appeared in the 1970s, providing real-time 2D brightness (grayscale) images. Today, ultrasound imaging has become an important medical imaging technology because of its safety, cost-effectiveness, and real-time operation. Medical ultrasound systems can effectively monitor fetal development and can also be used to diagnose diseases of internal organs such as the heart, liver, gallbladder, spleen, pancreas, kidneys, and bladder.

A typical ultrasound system includes a piezoelectric transducer, electronic circuits, an image display unit, and DICOM (Digital Imaging and Communications in Medicine) compliant software. A simplified block diagram of a typical ultrasound system is shown below.


Figure 1. Simplified block diagram of a typical ultrasound system

2. Principles of sound wave generation and propagation

Ultrasonic transducers are a key component of ultrasonic systems and consist of piezoelectric elements, connectors, and support structures. The piezoelectric effect refers to the phenomenon that the physical dimensions of a material change in response to an applied electric field, and vice versa. As shown below, most transducers in ultrasonic applications are dual-mode. The transducer converts electrical energy into mechanical energy during the transmit phase (mode). The generated mechanical waves propagate toward the medium and are reflected if the medium is inhomogeneous. In the receive mode, the reflected mechanical waveform is received and converted into an electrical signal by the transducer.


Figure 2. Transducer vibration, sound wave propagation and reflection

After the transducer is electronically excited, sound waves are generated and propagate through the medium. In medical ultrasound, the FDA (Food and Drug Administration) requires all imaging systems to meet instantaneous, peak, and average intensity limits.

We typically define transducer sensitivity or transducer insertion loss (IL) as the ratio between the received (Rx) and transmitted (Tx) signal amplitudes as follows:

IL (dB) = 20 · log10(A_Rx / A_Tx)

The transducer center frequency is determined by the thickness of the piezoelectric material, L, and the speed of sound in the material, c_m:

f_0 = c_m / (2L)

As mentioned previously, the common frequency range is 1 MHz to 20 MHz. Based on the above equation, higher-frequency transducers require thinner piezoelectric layers; therefore, building very high-frequency transducers is challenging.
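As a quick numerical illustration of the relation above, the sketch below computes the required layer thickness for a few frequencies. The assumed thickness-mode sound speed (~4000 m/s, typical of PZT ceramics) is an assumption for illustration only; the exact value depends on the material.

```python
# Sketch: required piezoelectric layer thickness for a given center frequency,
# assuming half-wavelength thickness-mode resonance f0 = c_m / (2 * L).
# The sound speed below (~4000 m/s) is a typical PZT value, used only as an
# assumption for illustration.

C_M = 4000.0  # speed of sound in the piezoelectric material, m/s (assumed)

def layer_thickness(f0_hz: float, c_m: float = C_M) -> float:
    """Return the required layer thickness L (in meters) for center frequency f0."""
    return c_m / (2.0 * f0_hz)

for f0_mhz in (1, 5, 20):
    L = layer_thickness(f0_mhz * 1e6)
    print(f"{f0_mhz:>2} MHz -> L = {L * 1e6:.0f} um")
# Roughly 2000 um at 1 MHz, 400 um at 5 MHz, 100 um at 20 MHz, which is why
# very high-frequency transducers are hard to fabricate.
```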

Transducer frequency response, or bandwidth, is another key parameter. As a general rule, if the transducer is excited by a pulse signal (i.e., a short spike), the duration of the received echo determines the transducer bandwidth. Transducers with extremely fast responses (i.e., short echoes) are broadband transducers, and vice versa. A wider bandwidth is generally preferred in most applications. At the same transducer frequency, broadband transducers achieve better axial resolution because the echo length determines the axial resolution of the ultrasound system. Broadband transducers are also suitable for harmonic imaging, in which ultrasound energy is transmitted at the fundamental frequency and the image is reconstructed from the second harmonic of the received echo. Without a wide-bandwidth transducer, the sensitivity drops significantly at the second-harmonic frequency 2f0. Therefore, many transducer researchers are constantly exploring new materials, new architectures, and new manufacturing processes to further improve transducer performance.

In the early days of ultrasound imaging, multichannel electronics for ultrasound systems were expensive and immature. Single-element transducers driven by motors and mechanically scanned were widely used to obtain two-dimensional (2D) images. Early systems could not achieve high frame rates or high-precision imaging because of the speed and precision limits of the mechanical structure. Today, mature array transducers and multichannel electronics support transducers with 64 to 512 elements, and frame rates above 100 frames/second can be achieved with electronic scanning. To achieve electronic scanning, beamforming techniques are applied to focus the transducer's acoustic beam; the details of beamforming are discussed in a later section. Similar to optical imaging systems, ultrasound systems achieve the best spatial resolution at the focal point. Depending on the application, one-dimensional (1D) array transducers include linear arrays, curved linear arrays, and phased arrays; the main differences between these transducers are the beam-shaping structure, imaging range, and image resolution. In addition, the latest 2D array transducers, consisting of more than 2000 elements, can support real-time three-dimensional (3D) imaging. The figure below shows a single-element transducer, a 1D array transducer, and a 2D array transducer.


Figure 3. Typical transducers: (a) single element transducer; (b) 1D array transducer; (c) 2D array transducer (Courtesy of USC, Vermon, and Philips)

3. Transducer specifications and image quality


As with any imaging system, image quality is an important criterion in medical ultrasound imaging. Common parameters such as spatial resolution and image penetration are mainly determined by transducer specifications and sound wave propagation theory. The longitudinal and lateral resolution of ultrasound images are linearly related to the wavelength of the sound waves in the medium:

R_axial ≈ c · τ_-6dB / 2,  R_lateral ≈ λ · Z_f / (2r)

In these equations, c is the speed of sound in the medium, Z_f is the focal length, 2r is the transducer aperture (diameter), and τ_-6dB is the -6 dB pulse width of the received echo when the transducer is excited by a pulse; τ_-6dB is also linearly related to the wavelength λ. As an example, consider a broadband array transducer operating from 5 MHz to 14 MHz and compare the lateral resolution at 5 MHz and at 12 MHz. The imaging depth is 5 cm. In both cases, 64 transducer elements form the effective aperture, the element spacing is 0.3 mm, and the speed of sound in the medium is 1540 m/s, so the effective aperture is 19.2 mm. For 5 MHz and 12 MHz sound waves, λ is 0.31 mm and 0.13 mm, respectively. According to the equation above, the lateral resolution is 0.8 mm at 5 MHz and 0.33 mm at 12 MHz. Therefore, higher-frequency applications achieve better resolution.
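The worked example above can be reproduced with a few lines of code. This is only a sketch; the 64-element aperture, 0.3 mm pitch, 5 cm depth, and 1540 m/s sound speed are the values quoted in the text.

```python
# Sketch: lateral resolution ~ lambda * Zf / aperture, using the example values
# from the text (64 elements, 0.3 mm pitch, 5 cm focal depth, c = 1540 m/s).

C = 1540.0            # speed of sound in tissue, m/s
PITCH = 0.3e-3        # element spacing, m
N_ELEMENTS = 64
Z_F = 0.05            # focal depth, m

aperture = N_ELEMENTS * PITCH  # 19.2 mm effective aperture

def lateral_resolution(f_hz: float) -> float:
    lam = C / f_hz                 # wavelength in the medium
    return lam * Z_F / aperture    # lateral resolution at the focus

for f_mhz in (5, 12):
    r = lateral_resolution(f_mhz * 1e6)
    print(f"{f_mhz} MHz: lambda = {C / (f_mhz * 1e6) * 1e3:.2f} mm, "
          f"lateral resolution = {r * 1e3:.2f} mm")
# 5 MHz -> ~0.80 mm, 12 MHz -> ~0.33 mm, matching the numbers in the text.
```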

In practice, it is not entirely feasible to improve image quality simply by increasing the transducer frequency. On the one hand, higher frequency transducers require thinner piezoelectric materials, which require more sophisticated manufacturing techniques and are more expensive. On the other hand, as shown in the following sections, higher frequency sound waves are easily attenuated in biological tissues.

When the medium is inhomogeneous, part of the energy of the sound wave can be reflected at the boundary between two media. The unreflected sound wave continues to propagate until it is reflected at the next boundary, or completely attenuated. The reflection and transmission coefficients are determined by the difference in the acoustic impedance ( Z = ρc ) of the two media. In the equation, ρ and c are the density and speed of sound of the medium, respectively, and the wave propagation direction is assumed to be perpendicular to the boundary.

R = (Z_2 − Z_1) / (Z_2 + Z_1),  T = 2·Z_2 / (Z_2 + Z_1)

Table 1 shows the properties of selected biological tissues, water, and air. Strong reflections occur when the two acoustic impedances are very different. Bone is dense and has a high speed of sound; therefore, it is always a strong reflector in ultrasound images. On the other hand, blood and liver have similar acoustic impedances, so the reflection at their boundary is weak, and only a highly sensitive transducer can pick up such weak signals. As shown in Table 1, the signal also attenuates during propagation, and the accumulated attenuation increases with propagation distance. The attenuation is calculated using Equation 7, where the factor of 2 accounts for the round-trip (two-way) propagation of the sound wave.

Attenuation (dB) = 2 · α · f · d    (Equation 7)

In a typical application where ultrasound probes tissue inside the body, the dynamic range between the echo from the body surface and the echo from internal organs can easily exceed 100 dB. Assume an average attenuation coefficient α of 0.7 dB/(MHz·cm) and a 7.5 MHz transducer. At a depth of 10 cm, Equation 7 gives 2 × 0.7 × 7.5 × 10 = 105 dB of attenuation. If the surface echo is 1 Vpp, the amplitude of the internal-organ echo is below 10 µVpp, which is very weak. This example shows that the ultrasound signal spans an extremely wide dynamic range in representing structures from the skin surface to the internal organs. Therefore, sophisticated electronics are required to provide sufficient dynamic range, which is not easy to achieve within a limited power budget.

Table 1. Acoustic properties of typical tissues and media
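The round-trip attenuation example above is easy to reproduce. The sketch below uses the average attenuation coefficient of 0.7 dB/(MHz·cm), the 7.5 MHz transducer, and the 1 Vpp surface echo quoted in the text.

```python
# Sketch: round-trip attenuation and the resulting echo amplitude,
# using Equation 7 from the text: A(dB) = 2 * alpha * f * d.

ALPHA = 0.7             # average attenuation coefficient, dB / (MHz * cm) (from the text)
F_MHZ = 7.5             # transducer frequency, MHz
DEPTH_CM = 10.0         # imaging depth, cm
SURFACE_ECHO_VPP = 1.0  # assumed surface echo amplitude, Vpp

atten_db = 2 * ALPHA * F_MHZ * DEPTH_CM          # 105 dB round trip
deep_echo_vpp = SURFACE_ECHO_VPP * 10 ** (-atten_db / 20)

print(f"Round-trip attenuation: {atten_db:.0f} dB")
print(f"Echo from 10 cm depth:  {deep_echo_vpp * 1e6:.1f} uVpp")
# ~105 dB and ~5.6 uVpp, i.e. well below 10 uVpp, which is why the receive
# chain needs more than 100 dB of dynamic range.
```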

4. Ultrasound imaging modes

When the transducer receives the echoes, an appropriate processing unit is needed to convert these signals into understandable image information for the sonographer or other end user. Ultrasound imaging uses several imaging modalities to study tissue characteristics, body fluid distribution and flow, organ function, etc.

A-mode and B-mode

In the earliest ultrasound systems, clinical diagnosis was guided by displaying the echo amplitude along the time axis. This is A-mode (amplitude mode) imaging, shown in the figure below, which is based on one-dimensional, line-by-line scanning. Since human vision interprets two-dimensional images far more readily, the development of brightness (grayscale) imaging modes had greater clinical significance. To construct a 2D image, the transducer's acoustic beam is scanned over a region of interest, and multiple A-mode scan lines are acquired during the scan. These scan lines constitute one image frame, and the echo amplitude along each scan line is mapped to pixel values in a linear or nonlinear manner. When the acoustic beam scans fast enough, real-time imaging is achieved. These images are called B-mode (brightness mode) images and form a cross-sectional image in the scanning plane.


Figure 4. Scanning modes: (a) A-mode scan line; (b) B-mode image; (c) 3D beam scan; and (d) B-mode (sub-images 1, 2, 3) and 3D (sub-image 4) clinical images (Courtesy of Philips)

More and more novel imaging modes (such as 3D and 4D imaging) have recently been introduced on the latest commercial ultrasound systems, which are extensions of B-mode imaging. 3D imaging is a superposition of multiple cross-sectional B-mode images acquired by scanning the acoustic beam in two dimensions, as shown in Figures (c) and (d) above. In addition, 4D imaging is defined as real-time 3D imaging.

Doppler ultrasound

Most clinical ultrasound systems include another essential feature: Doppler ultrasound to show blood flow information. The Doppler effect describes the shift in wavelength due to the motion of an object in a medium. If a wave is transmitted from a source moving away from the observer, its wavelength increases, and vice versa. Therefore, when a sound wave propagates and is reflected by a moving object in the body, the wavelength of the transmitted pulse and the received echo are different. This frequency difference is the Doppler shift, which can be used to calculate the velocity of the moving object:

f_d = 2 · f_0 · v · cosθ / c

In the equation, f_d is the Doppler frequency shift, f_0 is the center frequency of the transmitted pulse, v is the velocity of the moving reflector, c is the speed of sound in the medium, and θ is the angle between the ultrasound beam and the direction of motion.
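A minimal sketch of the velocity calculation implied by the Doppler equation follows. The 5 MHz transducer frequency, 60° beam angle, and 1 kHz measured shift are assumed values chosen only for illustration.

```python
# Sketch: solving the pulse-echo Doppler equation f_d = 2 * f0 * v * cos(theta) / c
# for the reflector velocity v. The example numbers below are assumptions.

import math

C = 1540.0  # speed of sound in tissue, m/s

def velocity_from_doppler(f_d_hz: float, f0_hz: float, theta_deg: float) -> float:
    """Reflector velocity (m/s) from the measured Doppler shift."""
    return f_d_hz * C / (2.0 * f0_hz * math.cos(math.radians(theta_deg)))

# Example: a 1 kHz shift measured with a 5 MHz transducer at a 60-degree beam angle.
v = velocity_from_doppler(1000.0, 5e6, 60.0)
print(f"Estimated flow velocity: {v * 100:.1f} cm/s")  # ~30.8 cm/s
```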

Doppler ultrasound has been used in medical applications since the 1950s. Today, it can assess blood flow and tissue motion. Over the past 60 years, a variety of Doppler techniques have provided different diagnostic information, including continuous wave (CW) Doppler, pulsed wave (PW) Doppler, and color Doppler. There are large differences in the applications between these Doppler modes.


Figure 5. Continuous wave (CW) Doppler measurement configuration

Continuous wave Doppler is the earliest Doppler technology; it works by extracting the Doppler shift frequency from the received echo. The measurement setup is shown in the figure above. Two transducers, Tx and Rx, are used. While Tx transmits a continuous wave, Rx receives the echo from any reflector. For example, if Tx sends a cosine wave into the medium, Rx detects the frequency-shifted cosine signal from the moving reflector:

Rx(t) ∝ cos[(ω_c + ω_d) · t]

where ω_c is the center frequency of the transducer and ω_d is the Doppler shift introduced by the moving object, which can be extracted by demodulating with a mixer. This technique can measure very high-velocity blood flow caused by heart valve leakage as well as very low-velocity flow in deep veins. To meet the low phase-noise and low thermal-noise requirements of CW operation, a separate analog processing path is usually dedicated to CW. As mentioned earlier, the axial resolution of an ultrasound image depends on the echo pulse width. In CW operation, the pulse width is effectively infinite; therefore, the axial resolution is poor, or equivalently the axial blood-flow information is averaged. The lateral resolution depends on the region where the beams of the two transducers overlap. The main disadvantage of CW measurement is its limited spatial selectivity, because CW also picks up signals from regions that are not of interest. Generally speaking, CW performance is a key indicator distinguishing high-end systems from low-end systems.
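The demodulation step can be illustrated numerically: multiply the received signal by the transmit carrier and low-pass filter the product. This is only a sketch; the 5 MHz carrier and 2 kHz Doppler shift are assumed values, and a real CW path uses analog mixers and filters rather than an FFT.

```python
# Sketch: extracting the Doppler shift from a CW echo by mixing with the
# transmit carrier and low-pass filtering. Carrier and shift values are assumed.

import numpy as np

FS = 50e6          # simulation sample rate, Hz
F_C = 5e6          # transmit (carrier) frequency, Hz
F_D = 2e3          # Doppler shift to recover, Hz
N = 500_000        # 10 ms of samples

t = np.arange(N) / FS
rx = np.cos(2 * np.pi * (F_C + F_D) * t)      # frequency-shifted echo
lo = np.cos(2 * np.pi * F_C * t)              # local oscillator = carrier

mixed = rx * lo                               # products at F_D and 2*F_C + F_D

# Crude low-pass filter: zero all FFT bins above 100 kHz to reject 2*F_C + F_D.
spec = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(N, 1 / FS)
spec[freqs > 100e3] = 0
baseband = np.fft.irfft(spec, n=N)

# The strongest remaining component sits at the Doppler frequency.
bb_spec = np.abs(np.fft.rfft(baseband))
print(f"Recovered Doppler shift: {freqs[np.argmax(bb_spec[1:]) + 1]:.0f} Hz")  # ~2000 Hz
```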

Figure 6. Pulsed wave Doppler measurement configuration

Pulsed wave (PW) Doppler technology emerged in the 1960s to solve the poor spatial resolution of CW. PW Doppler is based on the same setup as B-mode imaging, so it could be added as a new feature of existing B-mode ultrasound systems. Demodulation and sample-and-hold techniques are used to extract flow information. The measurement setup of a PW Doppler system is shown in the figure above. Only one transducer is required, and the shaded area indicates the sample volume, which is determined by the axial resolution (pulse duration) and lateral resolution of the transducer. Typically, the transducer sends a 4 to 16 cycle sinusoidal burst at a specific pulse repetition frequency (PRF) and receives the reflected signal. Because the received signal is scattered by moving particles in the blood (such as red and white blood cells), the echo received from one firing is slightly out of phase with the echo received from the next. The received signal is amplified and processed to extract this phase-shift frequency. Compared with CW Doppler, PW Doppler detects flow velocity in a limited region of interest (ROI), and the same transducer is also used for B-mode imaging. By modifying the signal-processing software, the PW Doppler function can be implemented on a B-mode imaging platform.

In CW and PW Doppler modes, flow information is obtained along a focused acoustic beam, similar to A-mode imaging. In the 1980s, researchers achieved 2D visualization of the blood-flow distribution with color Doppler technology. Color Doppler processing is also based on the B-mode/PW-mode signal path. Multiple frames of RF data are collected from the region of interest, and blood flow in that region produces differences between the frames. Two classes of algorithms, autocorrelation in the phase domain and cross-correlation in the time domain, extract this variation (i.e., blood-flow velocity and direction) from the RF data. The flow information, including velocity and direction, is then mapped through a predefined color gradient bar. Conventionally, red and blue identify flow moving toward and away from the transducer, respectively, and brighter colors indicate higher flow velocity. The color-mapped 2D distribution is superimposed on the B-mode image to display anatomical structure and blood flow simultaneously in real time. This is extremely useful for diagnosing cardiovascular conditions such as vascular occlusion and heart valve regurgitation. A typical color Doppler image is shown in the figure below; (b) shows the change in blood-flow velocity caused by carotid artery stenosis.


Figure 7. Color Doppler imaging: (a) Image acquired in color Doppler and CW mode (courtesy of Philips); (b) Color Doppler showing carotid artery stenosis (courtesy of GE)
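One common formulation of the phase-domain autocorrelation approach mentioned above is a lag-one (Kasai-type) estimator, sketched below. This is an illustration under assumed parameters (PRF, carrier frequency, ensemble length, and flow velocity), not the specific algorithm of any particular system.

```python
# Sketch of a lag-one autocorrelation (Kasai-style) velocity estimator applied
# to demodulated slow-time samples from one range gate. All parameters assumed.

import numpy as np

C = 1540.0      # m/s
F0 = 5e6        # transmit center frequency, Hz
PRF = 4e3       # pulse repetition frequency, Hz
V_TRUE = 0.20   # simulated axial flow velocity, m/s (assumed)
N_PULSES = 16   # ensemble length (number of firings per estimate)

# Demodulated (I/Q) slow-time signal: the phase advances by 2*pi*f_d/PRF per
# pulse, where f_d = 2 * F0 * V_TRUE / C is the Doppler shift.
f_d = 2 * F0 * V_TRUE / C
n = np.arange(N_PULSES)
iq = np.exp(1j * 2 * np.pi * f_d * n / PRF)
iq += 0.05 * (np.random.randn(N_PULSES) + 1j * np.random.randn(N_PULSES))  # noise

# Lag-one autocorrelation; its phase encodes the mean Doppler shift.
r1 = np.mean(iq[1:] * np.conj(iq[:-1]))
v_est = C * PRF * np.angle(r1) / (4 * np.pi * F0)

print(f"True velocity: {V_TRUE * 100:.1f} cm/s, estimated: {v_est * 100:.1f} cm/s")
# The sign of np.angle(r1) gives the flow direction, which is what the
# red/blue color map encodes.
```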

Color Doppler remains an active area of research. It is well known that autocorrelation and cross-correlation processing techniques require strong computing power. New algorithms are being developed to analyze blood flow at a lower computational cost. At the same time, thanks to recent advances in semiconductor technology, digital signal processors with lower power consumption and higher computing power are being applied in this field.


Other imaging modes

B-mode, CW Doppler, PW Doppler and Color Doppler are the main imaging modes in ultrasound systems. We briefly introduce other imaging modes that are often used in daily diagnosis to obtain more comprehensive clinical information.

Motion mode (M-mode) is based on B-mode; it captures cardiac motion over time and indicates defective valve or ventricular chamber function.


Tissue harmonic imaging (THI) became popular in the 1990s and is now a standard imaging mode in new systems. Harmonic signals are generated by the distortion of the acoustic wave as it propagates in tissue. In THI, these harmonics are extracted to improve image contrast and resolution, reduce artifacts, and increase the signal-to-noise ratio (SNR). Since the late 1990s and early 2000s, techniques such as coded excitation have also been developed and applied clinically. Good axial resolution requires a short pulse duration (i.e., low transmitted acoustic energy), whereas increasing the SNR calls for a longer pulse. By optimizing the excitation code and the matched filter, coded excitation with long pulses can achieve axial resolution similar to that of short pulses while improving SNR.

Contrast agents composed of biocompatible gas-filled microbubbles can significantly improve SNR and contrast because these microbubbles are nearly ideal acoustic reflectors. Contrast-enhanced imaging can aid cardiovascular diagnosis. In addition, these microbubbles have stronger nonlinear characteristics than normal tissue and are therefore well suited to harmonic imaging.

Medical ultrasound is a safe and low-cost medical imaging method that complements MRI, optical, and PET systems. Multi-modality systems can take advantage of the complementary strengths and weaknesses of each imaging modality to obtain the best diagnostic information. For example, photoacoustic imaging can combine the deep penetration of ultrasound imaging with the high contrast of optical imaging. MRI-guided ultrasound therapy is another example of a multimodality approach.

5. Ultrasonic electronics

The following block diagram shows a typical ultrasound electronics signal chain. The main components include the high-voltage transmit circuitry, the low-noise analog front end, the transmit and receive beamforming circuitry, the digital signal processing unit, the image display and storage units, and other supporting circuits.

Figure 8. Block diagram of a typical ultrasound electronic circuit

Transmitter and Receiver

In current systems, multi-channel transmitters are used to excite array transducers. Depending on the imaging mode, the transmit voltage varies from ±2 V to ±100 V. Sometimes, to reduce system cost, a high-voltage multiplexer switches one transmitter channel among multiple transducer elements. In low-end to mid-range systems, square-wave high-voltage transmit circuits are chosen for their high integration and low cost, while in high-end systems high-voltage linear amplifiers can generate a variety of complex waveforms. On the transducer, high-voltage transmit signals and low-voltage echoes coexist; therefore, a T/R switch sits between the high-voltage transmit circuit and the low-noise amplifier, and its main function is to protect the low-voltage receive circuitry. Ultrasound signals are significantly attenuated with propagation distance, and hence with time; therefore, in the receiver, the gain is increased as the propagation time increases. This important feature is called time gain compensation (TGC) and usually requires a voltage-controlled amplifier (VCA), as shown below. After amplification and pre-processing, the signal is digitized and passed to the receive beamformer or to the continuous wave (CW) Doppler processing unit, where a mixer extracts the Doppler signal in the audio range (20 Hz to 20 kHz).


Figure 9. Voltage-controlled amplifier for time gain compensation.
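A TGC ramp can be sketched directly from the attenuation model used earlier. This is only a sketch: the linear-in-dB ramp, the 0.7 dB/(MHz·cm) coefficient, the 5 MHz frequency, and the 45 dB gain limit are assumptions, and real systems typically let the operator shape the curve with TGC slide controls.

```python
# Sketch: a time gain compensation (TGC) ramp that offsets round-trip tissue
# attenuation. Coefficient, frequency and gain limits are assumptions.

import numpy as np

C = 1540.0        # m/s
ALPHA = 0.7       # dB / (MHz * cm), assumed average tissue attenuation
F_MHZ = 5.0       # imaging frequency, MHz
MAX_GAIN_DB = 45  # assumed VCA gain range limit

def tgc_gain_db(t_us: np.ndarray) -> np.ndarray:
    """Gain (dB) vs. time after transmit, compensating 2*alpha*f*depth."""
    depth_cm = C * (t_us * 1e-6) / 2 * 100    # echo at time t comes from depth c*t/2
    gain = 2 * ALPHA * F_MHZ * depth_cm       # round-trip attenuation at that depth
    return np.clip(gain, 0, MAX_GAIN_DB)      # the VCA gain range is finite

t_us = np.linspace(0, 130, 6)                 # ~130 us covers ~10 cm of depth
for t, g in zip(t_us, tgc_gain_db(t_us)):
    print(f"t = {t:6.1f} us  depth = {C * t * 1e-6 / 2 * 100:5.1f} cm  gain = {g:5.1f} dB")
```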

Over the past 30 years, ultrasound front-end electronics have evolved from discrete circuits to integrated circuit chips. Various ultrasound front-end solutions have greatly simplified system design and reduced costs.

Considering the extreme requirements of ultrasound front-end electronics, such as >100 dB dynamic range and operating frequencies spanning from 20 Hz up to the GHz range, every small improvement requires substantial R&D work at the transistor, chip, board, and system levels. As in most mixed-signal systems, good analog performance is always the foundation for subsequent signal processing and image-quality improvement. Low power consumption, low noise, and compact size are the top considerations in ultrasound front-end design.

Beamformer

The beamformer includes a transmit and a receive beamformer to electronically focus and steer the acoustic beam of a multi-element transducer. As shown in the figure below, each transducer element is at a different distance from the target; therefore, in the transmit phase, the transmit signal of each element is appropriately delayed so that all of the transmitted signals reach the target at the same time, producing the highest sound intensity at the target and hence the strongest echo. In the receive phase, appropriate delays are applied to the echoes received by the individual elements before they are summed, so that the array achieves the highest sensitivity at the focal point.

Figure 10. Transducer beamformer used to focus the acoustic beam in (a) the transmit phase and (b) the receive phase.
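The per-element focusing delays follow from simple geometry. The sketch below is for a linear array focusing at a point on its axis; the element count, pitch, and focal depth are assumed values, and real systems compute delays for every focal point and steering angle.

```python
# Sketch: transmit/receive focusing delays for a linear array. Each element is
# delayed so that all contributions arrive at the focal point simultaneously.
# Array geometry and focal depth are assumptions for illustration.

import numpy as np

C = 1540.0        # m/s
N_ELEM = 64
PITCH = 0.3e-3    # m
Z_F = 0.03        # focal depth, m (on the array axis)

# Element x positions, centered on the array axis.
x = (np.arange(N_ELEM) - (N_ELEM - 1) / 2) * PITCH

# Path length from each element to the focal point at (0, Z_F).
path = np.sqrt(x**2 + Z_F**2)

# Outer elements have the longest path, so they fire first; each element's delay
# is referenced to the longest path so that all delays are >= 0.
delay_s = (path.max() - path) / C

print(f"Max path difference: {(path.max() - path.min()) * 1e3:.2f} mm")
print(f"Center-element delay: {delay_s[N_ELEM // 2] * 1e9:.0f} ns")
# On receive, the same delays are applied to the echoes before summation
# (delay-and-sum beamforming).
```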

Since the transmit circuit is mainly digital, the transmit delays are implemented with high-speed counters in field-programmable gate arrays (FPGAs) or digital signal processors (DSPs). Because of the complexity of the received signal, the receive beamformer requires considerably more algorithmic optimization. Early discrete-transistor electronics had limited signal-processing capability, so the receive beamformer was implemented as an analog delay line based on inductor-capacitor networks. In the 1980s, receive beamformers began to use multi-channel analog-to-digital converters and digital beamforming techniques.

Figure 11. Schematic diagram of a digital beamformer

In current mainstream ultrasound systems, receive beamformers are generally digital. Digital beamformers are usually implemented in FPGAs, DSPs, PCs, or GPUs (graphics processing units) with very high computing power. As mentioned earlier, a larger transducer aperture allows better resolution; in high-end ultrasound systems, up to 256 transducer elements form a focused beam to obtain fine-resolution images. The computational load of such high-end beamformers is therefore substantial.

Biological tissues are heterogeneous in shape, density, speed of sound, etc. Real-time delay calculation and calibration are based on the acoustic properties and shape of the tissue involved. Due to the importance and complexity of beamformer design, most ultrasound companies have their own IP. Simplifying beamformer design without compromising beamforming performance remains a hot topic. It is believed that new beamformer architectures are being developed that will be widely used in future ultrasound systems.

Digital Signal Processing

Ultrasound signals require extensive signal processing to extract the information needed by each imaging mode from the raw ultrasound data. The main processing modules include B-mode image reconstruction, Doppler spectrum extraction based on the fast Fourier transform, color Doppler calculation based on autocorrelation and cross-correlation, scan conversion (from ultrasound scan coordinates to Cartesian coordinates), image enhancement, and so on. Commercial processors such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs) are widely used. FPGAs let system designers hardwire internal logic and optimize the efficiency of their algorithms, while DSPs provide predefined standard computing blocks that can be changed and optimized in software. In other words, FPGAs win on hardware efficiency, while DSPs win on software flexibility. Newer processing platforms such as PCs and GPUs offer higher computing power than FPGAs and DSPs, with much lower software development cost; however, because of their high power consumption, they are not always suitable for low-power portable systems.
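As one example of the FFT-based Doppler processing mentioned above, a spectral-Doppler display can be built from short-time FFTs of the demodulated slow-time signal. The sketch below uses an assumed PRF and a synthetic input; real systems add wall filters, windowing, and display compression tuned to the probe and the application.

```python
# Sketch: a minimal spectral-Doppler display built from short-time FFTs of a
# demodulated (I/Q) slow-time signal. PRF and the synthetic input are assumed.

import numpy as np

PRF = 4000.0   # pulse repetition frequency, Hz
N = 4096       # number of slow-time samples
SEG = 128      # FFT segment length (trade-off: time vs. frequency resolution)

# Synthetic Doppler signal whose frequency ramps from 200 Hz to 1500 Hz,
# mimicking accelerating flow within the sample volume.
f_inst = np.linspace(200, 1500, N)
phase = 2 * np.pi * np.cumsum(f_inst) / PRF
iq = np.exp(1j * phase) + 0.1 * (np.random.randn(N) + 1j * np.random.randn(N))

# Short-time FFT: one spectral line per Hann-windowed segment, 50% overlap.
win = np.hanning(SEG)
lines = []
for start in range(0, N - SEG + 1, SEG // 2):
    seg = iq[start:start + SEG] * win
    spec = np.fft.fftshift(np.abs(np.fft.fft(seg)))
    lines.append(20 * np.log10(spec + 1e-12))           # log compression

spectrogram = np.array(lines)                            # time x frequency, dB
freqs = np.fft.fftshift(np.fft.fftfreq(SEG, 1 / PRF))
print(f"Spectrogram shape: {spectrogram.shape}, "
      f"frequency axis: {freqs[0]:.0f}..{freqs[-1]:.0f} Hz")
```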

6. Process selection in analog front-end chip design

Before any AFE design, semiconductor process selection is always the first key consideration, driven by the design goals. CMOS and BiCMOS processes are the most commonly used in ultrasound analog front-end design. Each has its own advantages and is suited to the corresponding circuit blocks.

The BiCMOS (bipolar + CMOS) process is currently more popular than pure bipolar processes because it offers high-performance bipolar transistors for analog design alongside CMOS devices for digital design. Bipolar transistors are well suited to low-noise amplifier design, with ultra-low 1/f noise, wide bandwidth, and good power/noise efficiency. The bipolar process also reduces circuit capacitance, which helps achieve low total harmonic distortion. As a result, amplifiers built on bipolar or BiCMOS processes can achieve the same performance in a much smaller area and at lower power consumption than CMOS-based amplifiers.

Texas Instruments' 0.35um BiCMOS process was used to study the performance impact of amplifier designs between bipolar and CMOS devices. The figure below (a) shows that bipolar transistor-based amplifiers achieve lower noise at the same bias current; it also shows that bipolar transistors have ultra-low 1/f noise characteristics, which is critical for Doppler applications with modulation and demodulation circuits; (b) the bipolar design significantly reduces the area compared to a similar CMOS design. Of course, as the feature size of semiconductor processes decreases, the area difference between the 0.35um BiCMOS process and the <0.35um CMOS process becomes smaller. However, in general, the 0.35um BiCMOS process is still extremely suitable for amplifier design due to the advantages mentioned above.

Figure 12. Comparison of CMOS and BiCMOS process designs

CMOS technology is more suitable when the circuit has more digital content and switching elements (such as a medium-speed ADC). The frequency of medical ultrasound signals is in the 1 MHz to 20 MHz range, and the ADC sampling rate is usually below 100 MSPS, which is easily handled by most current CMOS processes. Using 0.18 µm to 65 nm CMOS technology, ADC designs can achieve better integration and lower power consumption. In addition, compared with comparable BiCMOS processes, CMOS processes are generally lower in cost and have shorter manufacturing cycles. All of this makes CMOS technology well suited to ADC design in an ultrasound AFE.

In summary, when reducing noise/power consumption is the main goal, BiCMOS process is suitable for TGC amplifier design in ultrasound AFE, that is, voltage-controlled amplifier (VCA) design. On the other hand, CMOS process is a good choice for achieving low power consumption and high integration in ADC design. Especially at the 0.18um to 65nm node, CMOS process with a complete low-voltage digital library can achieve higher integration at a competitive cost compared to 0.35um BiCMOS process.

It is clear that combining a BiCMOS VCA with a CMOS ADC can yield an excellent analog front-end solution with noise below 0.8 nV/√Hz and power consumption below 150 mW per channel. This combination requires not only suitable semiconductor processes but also advanced packaging technology. The figure below shows an analog front-end solution with two dies in the same package; in fact, more than two dies and multiple passive components can be integrated. Multi-chip modules (MCMs) also give system designers more flexibility: for example, a newer ADC or VCA die can replace one die in an existing AFE while maintaining pin-to-pin compatibility, delivering better performance.

Figure 13. Multi-chip module packaging

In the past decade, the process technology of ultrasonic AFE has moved from 0.5um to 90nm, from CMOS only to BiCMOS and CMOS, and from single chip to multiple chips with passive components in a package. As shown in the figure, all these technologies have greatly reduced power consumption, improved performance and reduced chip size.

Figure 14. Development of AFE integration

7. Key analog parameters of ultrasound circuits

Ultrasound signals have their own particular characteristics. As discussed in the previous sections, dynamic ranges exceeding 100 dB are common. Low-frequency audio circuits, high-frequency digital circuits, low-noise amplifiers, and low-noise clock circuits coexist in the same system, on the same board, or even on the same chip. Both AFE design and system design must address these challenges.

Overload recovery

Overload signals usually refer to the large leakage of the high-voltage transmit pulse through the transmit/receive (T/R) switch, or to strong echoes. If overload recovery is not considered in the AFE design, such signals degrade the transient response of the LNA, PGA, ADC, and CW circuits. The analog designer faces the challenge of achieving fast transient recovery and consistent response over a large dynamic range within a limited power budget. The most common design approach is to apply sufficient current and voltage limiting in the high-voltage T/R switch first, which removes most of the overload seen by the first stage of the analog front end, the low-noise amplifier. Within the LNA itself, clamping diodes usually prevent the LNA from saturating further.

Two common overload conditions are analyzed here. The first occurs when the high-voltage T/R switch turns on. Considering that the dead time in ultrasound imaging is usually around 3 µs to 5 µs, the overload recovery time of the ultrasound analog front end must be on the microsecond level. MOSFET-based T/R switches allow only << 1 Vpp of transmit leakage to pass, whereas diode-bridge T/R switches can leak up to 2 Vpp. Therefore, most AFEs are designed to handle roughly 2 Vpp overload signals to accommodate the range of T/R switches. The second overload condition is caused by large reflections from a blood vessel wall: the analog front end must recover immediately to detect the small echoes from the blood itself. This condition is extremely common in Doppler applications, and its handling determines the sensitivity and accuracy of blood-flow detection. The test below emulates a strong echo from the vessel wall followed by a small signal from the blood, with a dynamic range of 60 dB: a 5-cycle 250 mVpp burst and a 5-cycle 250 µVpp burst at 5 MHz, where the small signal is configured with a 0° or 180° phase shift. The figure below shows the response of an ultrasound analog front end and the difference between the 0° and 180° responses, which is analogous to phase detection in Doppler applications. Accurate extraction of the small signal and of the phase difference ensures good performance in Doppler applications.

Figure 15. Overload recovery: (a) input signal; (b) output signal
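The two-burst overload test described above can be generated as follows. This sketch only synthesizes the stimulus; the gap between the bursts is an assumed value, and the AFE response itself has to come from hardware or a circuit simulator.

```python
# Sketch: overload-recovery test stimulus from the text -- a 5-cycle 250 mVpp
# burst (vessel wall) followed by a 5-cycle 250 uVpp burst (blood) at 5 MHz,
# with the small burst at 0 or 180 degrees. The inter-burst gap is assumed.

import numpy as np

FS = 200e6           # sample rate, Hz
F0 = 5e6             # burst frequency, Hz
GAP_US = 2.0         # assumed gap between the large and small bursts, us

def burst(vpp: float, n_cycles: int, phase_deg: float = 0.0) -> np.ndarray:
    n = int(round(n_cycles * FS / F0))
    t = np.arange(n) / FS
    return (vpp / 2) * np.sin(2 * np.pi * F0 * t + np.radians(phase_deg))

def stimulus(small_phase_deg: float) -> np.ndarray:
    gap = np.zeros(int(GAP_US * 1e-6 * FS))
    return np.concatenate([burst(250e-3, 5), gap, burst(250e-6, 5, small_phase_deg)])

s0 = stimulus(0.0)      # small signal in phase with the large one
s180 = stimulus(180.0)  # small signal inverted
print(f"Samples per stimulus: {s0.size}, dynamic range: "
      f"{20 * np.log10(250e-3 / 250e-6):.0f} dB")
# Feeding s0 and s180 to the AFE and subtracting the two captured outputs
# emulates the phase detection described in the text.
```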

In addition to fast overload response and accurate phase detection, consistency of multiple overload recovery responses is critical for spectral Doppler and color Doppler applications. Consistent overload recovery reduces spectral or color noise in the system. You can evaluate consistency by comparing the differences in overload responses from multiple signals.

In addition, harmonic imaging is standard in most systems, and pulse-inversion imaging is widely used. The AFE must therefore provide a symmetric overload response to positive and negative pulses. Finally, ultrasound systems often interleave several image types for diagnosis, such as duplex mode (B-mode alternating with Doppler mode) or even triplex mode. Each operating mode uses a different transmit voltage and transmit duty cycle. The AFE therefore needs to respond quickly to different overload signals within two or more image lines; when image modes are switched rapidly, the differing overload signals should not affect the consistency of the AFE's overload performance.

Signal and Noise Modulation in Doppler Applications

Ultrasound systems are complex mixed-signal systems with various digital and analog circuits. Digital signals and clock signals can interfere with analog signals at the system or chip level. In addition, nonlinear components such as transistors and diodes can modulate noise and interfere with RF signals.

In ultrasound Doppler applications, modulation effects in the system can degrade image quality and sensitivity. The Doppler signal frequency ranges from 20 Hz to more than 50 kHz. At the same time, several system timing signals also fall in this range, such as the frame clock and the imaging line clock. These noise signals can enter the chip through the ground, power, and control pins. It is therefore important to characterize chip-level modulation effects such as the power supply modulation ratio (PSMR): a noise tone of known frequency and amplitude is applied to a power pin, and if modulation occurs, sideband signals appear around the carrier. PSMR is expressed as the amplitude ratio between the carrier and the sideband signal, as shown below:


Figure 16. PSMR (a) and IMD3 (b) description

In addition to PSMR, third-order intermodulation distortion (IMD3) is a key parameter for mixed-signal ICs. In ultrasound applications, the two input tones used for the IMD3 measurement have different amplitudes, representing the large echoes from stationary tissue and the small Doppler signals from flowing blood; the amplitude difference can be around 20 dB to 30 dB. System designers can use IMD3 to estimate the artifacts that intermodulation products create in the Doppler spectrum. A dynamic range of 40 dB to 50 dB is commonly used in Doppler spectral displays, so an IMD3 better than 50 dBc should not affect system performance.
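A two-tone IMD3 measurement of the kind described above can be emulated numerically. In the sketch below, the tone spacing, the ~25 dB amplitude difference, and the weak cubic nonlinearity are all assumptions chosen only to make the intermodulation products visible in the FFT.

```python
# Sketch: two-tone IMD3 measurement. A large "tissue" tone and a small "Doppler"
# tone (~25 dB apart, per the text) pass through a weakly nonlinear stage; the
# third-order products at 2*f1 - f2 and 2*f2 - f1 are then read from the FFT.

import numpy as np

FS = 100e6
N = 2 ** 16
F1, F2 = 5.00e6, 5.05e6          # test tones (50 kHz apart, assumed)
A1, A2 = 1.0, 10 ** (-25 / 20)   # ~25 dB amplitude difference

t = np.arange(N) / FS
x = A1 * np.cos(2 * np.pi * F1 * t) + A2 * np.cos(2 * np.pi * F2 * t)
y = x + 1e-3 * x ** 3            # weak third-order nonlinearity (assumed)

win = np.hanning(N)
spec = np.abs(np.fft.rfft(y * win))
freqs = np.fft.rfftfreq(N, 1 / FS)

def level_db(f: float) -> float:
    """Spectrum level (dB, arbitrary reference) at the bin nearest frequency f."""
    return 20 * np.log10(spec[np.argmin(np.abs(freqs - f))])

carrier = level_db(F1)
imd3 = max(level_db(2 * F1 - F2), level_db(2 * F2 - F1))
print(f"IMD3 = {imd3 - carrier:.1f} dBc")
```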

Continuous Wave (CW) Doppler parameters

As a key function of mid-range and high-end systems, CW Doppler is also becoming a standard feature of portable systems. Compared with the TGC path, the CW Doppler path must handle a larger dynamic range and achieve lower phase noise. Because of these requirements, CW Doppler beamforming is usually implemented in the analog domain. Various beamforming approaches are used in ultrasound systems, including passive delay lines, active mixers, and passive mixers. In recent years, mixer-based CW Doppler architectures have become dominant because of their small size, ease of implementation, and flexibility in supporting multiple CW frequencies, and the CW Doppler beamformer has been integrated on the same chip as the TGC path. Passive mixers in particular not only reduce power consumption and noise but also meet the processing requirements of CW Doppler, such as wide dynamic range, low phase noise, and precise I/Q channel gain and phase matching.

The simplified continuous wave Doppler path block diagram is shown below. The entire CW path includes LNA, voltage-to-current converter, passive mixer based on switching circuit, adder with low-pass filter and clock circuit. Most modules include in-phase and quadrature channels with strict symmetry in performance to achieve good image frequency suppression and beamforming accuracy.

Figure 17. Simplified block diagram of the CW path

The following diagram and equations describe the principle of mixer operation.


Figure 18. Block diagram of mixer operation

In the equation, Vi(t), Vo(t) and LO(t) are the mixer input, output and local oscillator signals respectively. Vi(t) includes higher harmonics; LO(t) represents a square wave, which contains odd harmonic components, as shown in the following equation:

LO(t) = (4/π) · [ sin(ω_LO·t) + (1/3)·sin(3·ω_LO·t) + (1/5)·sin(5·ω_LO·t) + … ]

According to this equation, the 3rd and 5th harmonics of LO(t) can mix with the 3rd and 5th harmonics of Vi(t), or with broadband noise in those frequency bands, which degrades the noise performance of the mixer. To avoid this effect, a harmonic-suppression circuit is required at the LNA output or at the mixer clock input to achieve a better noise figure. From the equation above, the conversion gain of the mixer is about 20·log10(2/π), i.e. roughly -4 dB (about 4 dB of conversion loss).
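The square-wave LO decomposition and the resulting ~4 dB conversion loss can be checked numerically. The carrier and Doppler frequencies in this sketch are assumed values.

```python
# Sketch: mixing a sinusoidal RF input with a +/-1 square-wave LO. The fundamental
# of the square wave has amplitude 4/pi, so the downconverted term has amplitude
# (1/2)*(4/pi) = 2/pi, i.e. a conversion gain of 20*log10(2/pi) ~= -3.9 dB.

import numpy as np

FS = 200e6
N = 1_000_000
F_C = 5e6          # carrier / LO frequency (assumed)
F_D = 1e3          # Doppler shift (assumed)

t = np.arange(N) / FS
rf = np.cos(2 * np.pi * (F_C + F_D) * t)           # unit-amplitude RF input
lo = np.sign(np.sin(2 * np.pi * F_C * t))          # ideal +/-1 square-wave LO

mixed = rf * lo
spec = np.abs(np.fft.rfft(mixed)) / (N / 2)        # amplitude spectrum
freqs = np.fft.rfftfreq(N, 1 / FS)

a_doppler = spec[np.argmin(np.abs(freqs - F_D))]
print(f"Downconverted amplitude: {a_doppler:.3f}  (2/pi = {2 / np.pi:.3f})")
print(f"Conversion gain: {20 * np.log10(a_doppler):.1f} dB")  # ~ -3.9 dB
```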

Image-frequency suppression better than -46 dBc is a desired parameter in CW imaging, and CW I/Q channel matching directly determines the image-frequency components. The literature shows that a 0.25° I/Q phase error corresponds to about -53 dBc of suppression, and a 0.05 dB I/Q gain error to about -50 dBc; these serve as design targets for the CW path. The CW I/Q paths therefore require tight gain and phase matching, and low-tolerance (0.1%) resistors are usually used in the op-amp-based active filters.
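The quoted numbers can be sanity-checked with the standard image-rejection expression for a quadrature path with gain error ε and phase error φ. This is a sketch using the common textbook formula, not a claim about any particular device.

```python
# Sketch: image rejection of an I/Q (quadrature) path as a function of gain and
# phase mismatch, using the standard expression
#   IRR = [1 + 2(1+e)cos(phi) + (1+e)^2] / [1 - 2(1+e)cos(phi) + (1+e)^2],
# which for small errors reduces to an image leakage of roughly (e^2 + phi^2)/4.

import math

def image_rejection_dbc(gain_err_db: float, phase_err_deg: float) -> float:
    e = 10 ** (gain_err_db / 20) - 1          # fractional gain error
    phi = math.radians(phase_err_deg)
    num = 1 + 2 * (1 + e) * math.cos(phi) + (1 + e) ** 2
    den = 1 - 2 * (1 + e) * math.cos(phi) + (1 + e) ** 2
    return -10 * math.log10(num / den)        # image level relative to signal, dBc

print(f"0.25 deg phase error: {image_rejection_dbc(0.0, 0.25):.1f} dBc")  # ~ -53 dBc
print(f"0.05 dB gain error  : {image_rejection_dbc(0.05, 0.0):.1f} dBc")  # ~ -51 dBc,
# close to the ~-50 dBc figure quoted above.
```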

The typical CW Doppler shift frequency lies between 100 Hz and 20 kHz. Because of the mixing operation, the phase noise of the CW signal path dominates performance at low blood-flow velocities. Therefore, most AFEs specify the CW phase noise at a 1 kHz offset from the carrier as the main performance indicator.

Finally, the dynamic range of the CW path is based on the input referred noise and the maximum input signal:

DR (dBFS/Hz) = 20 · log10(V_max,rms / e_n)

where e_n is the input-referred noise density. To achieve good CW performance, a transmit and receive signal-chain dynamic range of more than 160 dBFS/Hz is required.
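A quick sanity check of this requirement is sketched below; the 2 Vpp full-scale input and 1 nV/√Hz total input-referred noise are assumed values chosen only for illustration.

```python
# Sketch: CW-path dynamic range per the relation above,
#   DR (dBFS/Hz) = 20 * log10(V_max_rms / e_n),
# where e_n is the input-referred noise density. The numbers are assumptions.

import math

V_MAX_PP = 2.0        # assumed full-scale input, Vpp
E_N = 1.0e-9          # assumed input-referred noise density, V/sqrt(Hz)

v_max_rms = (V_MAX_PP / 2) / math.sqrt(2)          # RMS of a 2 Vpp sine wave
dr_dbfs_hz = 20 * math.log10(v_max_rms / E_N)

print(f"Dynamic range: {dr_dbfs_hz:.0f} dBFS/Hz")  # ~177 dBFS/Hz, above the
                                                   # >160 dBFS/Hz target in the text
```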

8. Conclusion

Ultrasound imaging is a safe medical imaging modality with great potential, and a growing number of bedside and point-of-care applications demand low-power, low-noise, compact systems. To take full advantage of the ultrasound signal, the right processes must be selected: the BiCMOS process suits low-noise amplifier design with ultra-low 1/f noise, wide bandwidth, and good power/noise efficiency, while the CMOS process achieves high digital density at low power. Combining the two with advanced packaging technology yields state-of-the-art analog front-end solutions. To meet the required ultrasound parameters, such as fast and consistent overload recovery, low IMD3 and PSMR, precise I/Q matching, and odd-harmonic suppression in the continuous wave Doppler mixer, the designer must consider these parameters together and optimize the design as a whole.

References

  • Xiaochen Xu, "Challenges and Considerations of Analog Front End Design for Portable Ultrasound Systems", 2010 IEEE Ultrasonics Symposium.
  • Xiaochen Xu, "Impact of Highly Integrated Semiconductor Solutions for Ultrasound System", 2016 Transducer Conference, University of Southern California.
  • Xiaochen Xu, et al., "Handbook of Research on Biomedical Engineering Education and Advanced Bioengineering Learning", ISBN 978-1466601222, 2012.
