Careful planning leads to successful real-time acoustic processing

Publisher: 丝语轻风 | Last updated: 2022-04-21 | Source: elecfans

Low-latency, real-time acoustic processing is a critical requirement in many embedded applications, including speech preprocessing, voice recognition, and active noise cancellation (ANC). As real-time demands in these areas steadily increase, developers need to think strategically about how to meet them. Many large systems are built around SoCs with considerable raw performance, so it is tempting to offload any additional task onto them. However, latency, and in particular its determinism, is critical: if it is not considered carefully, it can easily cause serious real-time problems. This article explores what designers should weigh when choosing between SoCs and dedicated audio DSPs, so as to avoid unpleasant surprises in real-time acoustic systems.


Low-latency acoustic systems have a wide range of applications. In the automotive field alone, low latency is critical for personal audio zones, road noise cancellation, and in-car communication systems.


As vehicles become electrified, road noise reduction grows in importance: with no internal combustion engine masking it, the noise from the tires' contact with the road becomes more noticeable and more disturbing. Reducing this noise not only makes driving more comfortable but also reduces driver fatigue.

Compared with deploying a low-latency acoustic system on a dedicated audio DSP, deploying it on an SoC presents many challenges. These include latency, scalability, upgradability, algorithm considerations, hardware acceleration, and customer support. Let's examine each in turn.


Latency

Latency is critical in real-time acoustic processing systems. If the processor cannot keep up with the system's real-time data-movement and compute requirements, audible audio dropouts occur.
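As a rough illustration of where the latency floor comes from, the sketch below computes the round-trip delay of a block-based audio path. The 48 kHz sample rate and 64-sample block size are assumed figures for illustration, not values from any particular system; the "three blocks" model (capture buffering, processing, playback buffering) is a common rule of thumb, and real pipelines may differ.

```c
/* Round-trip latency floor of a block-based audio path. One block period
 * is accumulated at each stage: capture buffering, processing, and
 * playback buffering. */
double block_latency_ms(int block_size, double sample_rate_hz) {
    return 1000.0 * block_size / sample_rate_hz;
}

double round_trip_ms(int block_size, double sample_rate_hz) {
    /* Assume one full block of delay at each of the three stages. */
    return 3.0 * block_latency_ms(block_size, sample_rate_hz);
}
```

With the assumed 64-sample blocks at 48 kHz, one block period is about 1.33 ms and the round trip is about 4 ms; ANC budgets are often tighter still, which is why smaller blocks (or per-sample processing) and deterministic execution matter so much.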


SoCs typically carry little on-chip SRAM, so most local memory accesses must go through cache. This makes code and data access times non-deterministic and increases processing latency, which by itself is already unacceptable for real-time applications such as ANC. In practice, SoCs also run heavyweight, multitasking, non-real-time operating systems, which amplifies the system's non-deterministic behavior and makes it difficult to support relatively complex acoustic processing in a multitasking environment.


Figure 1 shows a specific example of an SoC running a real-time audio processing load, with spikes in CPU load whenever higher-priority SoC tasks run — for example, media rendering, browsing, or launching applications. When a spike pushes CPU load past 100%, the SoC can no longer keep up in real time, which can cause audio dropouts.


Figure 1. Instantaneous CPU load of a typical SoC running a heavy audio processing workload alongside other tasks.

Audio DSPs, on the other hand, are architected for low latency along the entire signal path, from sampled audio input, through processing (e.g., sound effects plus noise suppression), to speaker output. The L1 instruction and data SRAMs are single-cycle memories closest to the processor core, large enough to support multiple processing algorithms without dumping intermediate data to off-chip memory. On-chip L2 memory (farther from the core, but still much faster to access than off-chip DRAM) holds intermediate data when L1 SRAM capacity runs out. Finally, audio DSPs typically run a real-time operating system (RTOS) that guarantees each block of input data is processed and moved to its destination before the next block arrives, so data buffers do not overrun during real-time operation.
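The "process each block before the next one arrives" discipline is commonly implemented with ping-pong (double) buffering in L1 SRAM. The sketch below is a generic illustration of that pattern, not any vendor's driver code: the buffer names, the `on_block_ready` interrupt handler, and the gain-stage placeholder are all hypothetical.

```c
#define BLOCK 64  /* samples per processing block (assumed) */

/* Two L1-resident buffers: the DMA fills one while the core processes
 * the other; their roles swap at every block-complete interrupt. */
static float ping[BLOCK], pong[BLOCK], out[BLOCK];
static int dma_fills_ping = 1;  /* which buffer the DMA is filling now */

/* Placeholder for the real per-block algorithm (EQ, ANC filter, ...). */
static void process_block(const float *in, float *dst) {
    for (int i = 0; i < BLOCK; i++)
        dst[i] = 0.5f * in[i];  /* e.g. a simple gain stage */
}

/* Called from the block-complete interrupt: process the buffer the DMA
 * just finished while the DMA starts filling the other one. */
void on_block_ready(void) {
    float *ready = dma_fills_ping ? pong : ping;
    dma_fills_ping = !dma_fills_ping;  /* swap roles for the next block */
    process_block(ready, out);
    /* process_block() must finish in under BLOCK/fs seconds, or the DMA
     * overwrites 'ready' before we are done: a real-time overrun. */
}
```

The hard-deadline comment at the end is the crux: on a cache-based, multitasking SoC the completion time of `process_block` varies run to run, while single-cycle L1 SRAM and an RTOS make it deterministic.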


Boot latency — usually measured by how quickly a startup chime can play — is another important metric, especially in automotive systems, which require the prompt sound within a fixed window after power-on. An SoC typically has a long boot sequence that includes bringing up the operating system for the entire device, making this requirement difficult or impossible to meet. A standalone audio DSP, by contrast, runs its own RTOS, is unaffected by unrelated system priorities, and can be optimized for fast boot to meet the chime requirement.


Scalability

While latency is one problem for SoCs in applications such as noise control, scalability is another drawback for SoCs tasked with acoustic processing. An SoC that controls a large system with many different subsystems — an automotive multimedia head unit and instrument cluster, for example — cannot easily scale from the low end to meet high-end audio needs, because the scalability requirements of the subsystems conflict and must be traded off against overall SoC utilization. For example, if a head-unit SoC connects to a remote radio module and must fit multiple vehicle models, the radio module needs to scale from a few channels to many, and every added channel aggravates the real-time issues described earlier. Each additional feature under the SoC's control changes the SoC's real-time behavior and the availability of shared architectural resources — memory bandwidth, processor core cycles, and arbitration slots on the system bus fabric.


Beyond the other subsystems attached to a multitasking SoC, the acoustic system itself raises scalability questions. These involve scaling from the low end to the high end (for example, increasing the number of microphone and speaker channels in ANC applications) and scaling the audio experience from basic decoding and stereo playback all the way to 3D virtualization and other advanced features. While these features do not carry the hard real-time constraints of an ANC system, they bear directly on the choice of the system's audio processor.
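To make the channel-scaling cost concrete, here is a back-of-the-envelope sketch. All figures are illustrative assumptions (a 256-tap FIR filter per microphone-speaker path at 48 kHz), not numbers from any particular ANC product.

```c
/* Multiply-accumulate rate of a multichannel FIR stage: one MAC per
 * filter tap, per sample, per microphone-speaker path. */
double fir_mac_rate(int mics, int speakers, int taps, long fs_hz) {
    return (double)mics * speakers * taps * fs_hz;
}
```

Under these assumptions, a 2 mic x 2 speaker system needs about 0.05 GMAC/s, while 8 x 8 needs about 0.79 GMAC/s — a 16x jump in compute from what looks like one step up in channel count, which is exactly the kind of growth that strains a shared, already-loaded SoC.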


Using a separate audio DSP as a coprocessor to the SoC is an excellent answer to the audio scalability problem, enabling modular system design and cost-optimized solutions. The SoC can set aside the real-time acoustic processing needs of the larger system and hand them to the low-latency audio DSP. In addition, audio DSP families offer code-compatible and pin-compatible options across several price/performance/memory tiers, giving system designers maximum flexibility in matching audio performance to a given product tier.


Figure 2. ADSP-2156x DSP, a highly scalable audio processor

Upgradability

With over-the-air (OTA) updates now common in cars, the ability to ship critical patches or new features after production matters more and more. For an SoC this can be problematic because of the dependencies among its subsystems. First, multiple processing and data-movement threads compete for SoC resources, and adding new features intensifies the competition for processor MIPS and memory, especially at peak activity. From the audio perspective, new features in other SoC-controlled domains can affect real-time acoustic performance in unpredictable ways. One consequence is that every new feature must be cross-tested across all operating planes, producing countless permutations of the competing subsystems' operating modes — so the software validation effort for each upgrade package grows exponentially.


Put another way, the headroom for improving SoC audio performance depends not only on the available SoC MIPS but also on the feature map of the other subsystems the SoC controls.


Algorithm Development and Performance

When it comes to developing real-time acoustic algorithms, audio DSPs are purpose-built for the job. One significant difference from SoCs is that standalone audio DSPs often come with a graphical development environment that lets engineers without DSP coding experience integrate high-quality acoustic processing into their designs. Such tools cut development cost by shortening development time without sacrificing quality or performance.


For example, ADI's SigmaStudio® graphical audio development environment offers a wide range of signal processing algorithms behind an intuitive graphical user interface (GUI), making it possible to build complex audio signal flows. It also supports graphical configuration of A2B audio transport, which is very helpful in accelerating real-time acoustic system development.


Audio Assist Hardware Features

In addition to processor core architectures designed for efficient parallel floating-point computation and data access, audio DSPs typically include dedicated multichannel accelerators for common operations such as fast Fourier transforms (FFTs), finite and infinite impulse response (FIR and IIR) filtering, and asynchronous sample rate conversion (ASRC). These perform real-time filtering, rate conversion, and frequency-domain transforms outside the core CPU, raising the core's effective throughput. Because they use an optimized architecture and provide data-flow management, they also support a flexible, user-friendly programming model.
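For context, here is the kind of inner loop such a FIR accelerator takes off the core — a minimal direct-form FIR written in plain C. The function name and buffer layout are our own illustration, not any vendor's accelerator API; in hardware this whole nested loop becomes a configured channel that runs concurrently with the core.

```c
#include <stddef.h>

/* Direct-form FIR over a block: y[n] = sum_k h[k] * x[n-k].
 * 'x' must be preceded by taps-1 history samples, so x[0] is the oldest
 * history sample and x[taps-1] is the first "current" input sample. */
void fir_block(const float *h, size_t taps,
               const float *x, float *y, size_t n) {
    for (size_t i = 0; i < n; i++) {
        float acc = 0.0f;
        for (size_t k = 0; k < taps; k++)
            acc += h[k] * x[i + taps - 1 - k];  /* h[0] hits newest sample */
        y[i] = acc;
    }
}
```

The cost is taps x n multiply-accumulates per block per channel; executing that in a dedicated accelerator is what frees the core cycles discussed above for control code and additional algorithms.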


As audio channel counts, filter streams, and sample rates grow, a highly configurable pin interface is needed — supporting inline sample rate conversion, precision clocking, and synchronous high-speed serial ports — to route data efficiently without adding latency or external interface logic. The Digital Audio Interconnect (DAI) of Analog Devices' SHARC® processors demonstrates this capability, as shown in Figure 4.
