Embedded Audio Processing Basics


Audio DAC
Traditional D/A conversion methods include weighted-resistor and R-2R ladder networks. As in the A/D case, Σ-Δ designs dominate the field of D/A conversion. These converters accept a 16-bit, 44.1 kHz signal and, using an interpolation filter, convert it to a 1-bit stream at 2.8224 MHz. This oversampled bit stream is then converted to an analog signal by a 1-bit DAC.

A typical embedded digital audio system may use a sigma-delta audio ADC and a sigma-delta DAC, so the conversion between the PCM signal and the oversampled stream is performed twice. For this reason, Sony and Philips (NXP Semiconductors) have introduced a different format from PCM in their Super Audio CD (SACD) format, called Direct Stream Digital (DSD). This format stores data in a 1-bit high-frequency (2.8224MHz) sigma-delta stream, thus bypassing the PCM conversion. The disadvantage is that the DSD stream is not as intuitive as PCM and requires a separate set of digital audio algorithms.

Connecting to Audio Converters: An ADC Example
Okay, we have enough background; now let's look at an example of actual converter connections. A good choice for a low-cost audio ADC is the Analog Devices AD1871, which uses sigma-delta technology to digitize audio with 24-bit resolution at sample rates up to 96 kHz. Figure 3a shows the functional block diagram of the AD1871. The converter has two input channels, left (VINLx) and right (VINRx), which is just another way of saying that it can handle stereo data. The digitized audio data is streamed serially out of the data port, usually to a matching serial port on the signal processor (such as the SPORT interface on a Blackfin processor). There is also an SPI (Serial Peripheral Interface) port that lets the host processor configure the AD1871 through software commands. These commands can set the sampling rate, word width, channel gain, and muting, among other parameters.


Figure 3 (a) Functional block diagram of the AD1871 audio ADC
(b) Seamless connection between an ADSP-BF533 media processor and the AD1871

As the block diagram in Figure 3b indicates, the AD1871 ADC interfaces seamlessly with the Blackfin processor. The analog portion of the circuit is simplified, because only the digital signals matter in this discussion. The oversampling clock for the AD1871 is provided by an external crystal. The processor shown has two serial ports (SPORTs) and an SPI port for interfacing with the AD1871. The SPORT, configured in I2S mode, carries the data connection to the AD1871, while the SPI port serves as the control connection.

The I2S protocol is a standard developed by Philips (NXP Semiconductors) for digital transmission of audio signals. This standard enables devices produced by audio equipment manufacturers to be compatible with each other.

Specifically, I2S is a 3-wire serial interface for transmitting stereo data. As shown in Figure 4a, it specifies a bit clock (middle), a data line (bottom), and a left/right synchronization line (top); the left/right line selects whether the frame currently being transmitted belongs to the left or the right channel.

Essentially, I2S is a time-division multiplexed (TDM) serial stream with two active channels. TDM is a method of transmitting more than one channel (such as left and right) over a single physical link.

In the AD1871 circuit arrangement, the ADC divides down the 12.288 MHz clock it receives from the external crystal (256 × 48 kHz) and uses it to drive the SPORT clock (RSCLK) and frame-synchronization (RFS) lines. This configuration ensures that sampling and data transfer remain synchronous.

The SPI interface, shown in Figure 4b, was designed by Motorola to connect the host processor to various digital devices. This interface between the SPI master and the SPI slave consists of a clock line (SCK), two data lines (MOSI and MISO), and a slave select line (SPISEL). One of the data lines is driven by the master (MOSI) and the other is driven by the slave (MISO). In the example of Figure 3b, the processor's SPI port is seamlessly connected to the AD1871's SPI module.



Figure 4 (a) Data signals are transmitted by the AD1871 using the I2S protocol
(b) The SPI 3-wire interface is used to control the AD1871

Audio codecs with separate SPI control ports allow the host processor to modify the ADC settings on the fly. In addition to mute and gain controls, one of the really useful settings on an ADC like the AD1871 is the ability to set a power saving mode. For battery powered applications this is often an essential feature.
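As a rough illustration of what such a control connection looks like in firmware, the fragment below sends one configuration word over SPI. The spi_transfer() routine, the register address, and the bit value are hypothetical placeholders (the real AD1871 register map is defined in its data sheet), so treat this as a sketch of the pattern rather than working driver code.

    #include <stdint.h>

    /* Platform-provided full-duplex SPI exchange (hypothetical signature). */
    extern uint16_t spi_transfer(uint16_t word);

    /* Many audio converters take a 16-bit control frame with the register
       address in the upper bits and the data in the lower bits. */
    static void codec_write_reg(uint8_t reg, uint8_t value)
    {
        spi_transfer(((uint16_t)reg << 8) | value);
    }

    /* Hypothetical register and bit, for illustration only. */
    #define REG_POWER_CTRL  0x05u
    #define BIT_POWER_DOWN  0x01u

    void codec_enter_low_power(void)
    {
        codec_write_reg(REG_POWER_CTRL, BIT_POWER_DOWN);
    }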

DACs and Codecs
Interfacing an audio DAC to a host processor follows essentially the same process as for the ADC we just discussed. In a system that uses both an ADC and a DAC, a single bidirectional serial port can connect to both.

However, if you are considering full-duplex audio, you are better off with a single-chip audio codec that can do both analog-to-digital and digital-to-analog conversions. A good choice for such a codec is the Analog Devices AD1836, which has three stereo DACs and two stereo ADCs, and can communicate over a variety of serial protocols, including I2S.

In this article, we have covered the basics of interfacing audio converters to embedded processors. In the second part of this article, we will describe the formats in which audio data is stored and processed. In particular, we will review the trade-offs involved in choosing a data word length. This choice is important because it determines the data types used and can rule out certain processors if the desired quality level exceeds what a particular device can achieve. It also sets the balance between dynamic range and the processing power required.

Audio functions play a key role in embedded media processing. Although audio processing generally requires less processing power than video processing, it is just as important.

In the first of three parts of this article, we will explore how data is transferred from various audio converters (DACs and ADCs) to embedded processors. After that, we will explore some of the peripheral interface standards that are often used to connect to audio converters.

Converting between analog and digital audio signals

Sampling
All A/D and D/A conversions should obey the Shannon-Nyquist sampling theorem. In short, the theorem states that an analog signal must be sampled at a rate (the Nyquist sampling rate) equal to or greater than twice its bandwidth (the Nyquist frequency) in order for the signal to be reconstructed in the eventual D/A conversion. Sampling below the Nyquist sampling rate produces aliasing: frequency components above the Nyquist frequency appear as false low-frequency images. If we take an audio signal that is band-limited to 0-20 kHz and sample it at 2 × 20 kHz = 40 kHz, the Nyquist theorem guarantees that we can perfectly reconstruct the original signal without any loss. However, sampling this 0-20 kHz band-limited signal at any rate below 40 kHz produces distortion due to aliasing. Figure 1 shows how sampling below the Nyquist rate misrepresents the signal. When sampled at 40 kHz, a 20 kHz signal is represented correctly (Figure 1a). However, the same 20 kHz sine wave sampled at 30 kHz aliases to a lower frequency, in this case 30 kHz - 20 kHz = 10 kHz (Figure 1b).


Figure 1. (a) Sampling a 20kHz signal at 40kHz correctly captures the original signal.
(b) Sampling the same 20kHz signal at 30kHz captures an aliased signal (low-frequency ghosting).
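The aliased frequency in Figure 1b can be predicted with a little arithmetic: an input tone folds about multiples of the sampling rate until it lands in the 0 to fs/2 band. A minimal sketch of that folding rule:

    #include <math.h>
    #include <stdio.h>

    /* Fold an input tone at f_in (Hz) into the 0..fs/2 band it will appear
       in after sampling at fs (Hz). */
    double aliased_frequency(double f_in, double fs)
    {
        double f = fmod(f_in, fs);              /* fold into [0, fs)      */
        return (f > fs / 2.0) ? (fs - f) : f;   /* reflect into [0, fs/2] */
    }

    int main(void)
    {
        /* A 20 kHz tone sampled at 30 kHz shows up at 10 kHz. */
        printf("%.0f Hz\n", aliased_frequency(20000.0, 30000.0));
        return 0;
    }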

However, no practical system samples at exactly twice the Nyquist frequency. For example, limiting a signal to a specified frequency band requires the use of analog low-pass filters, but these filters are never ideal. Therefore, the lowest sampling rate usually used to reproduce music is 44.1kHz rather than 40kHz, and many high-quality systems are sampled at 48kHz to capture a more realistic auditory experience in the 0-20kHz range.

Since speech occupies only a small part of the frequency range we can hear, energy below 4 kHz is enough to keep reconstructed speech intelligible. For this reason, telephone applications usually use a sampling rate of only 8 kHz (= 2 × 4 kHz). Table 1 summarizes the sampling rates of some familiar systems.

Table 1 Commonly used sampling rates

PCM Output
The most common digital representation of an audio signal is PCM (pulse-code modulation). In this representation, the analog amplitude is encoded as a digital value once per sampling period. The resulting digital waveform is a vector of discrete points that approximates the input analog waveform. All A/D converters have finite resolution, so the converter introduces quantization noise that is inherent to digital audio systems. Figure 2 shows the PCM representation of an analog sine wave (Figure 2a) converted by an ideal A/D converter, with the quantization showing up as a "staircase effect" (Figure 2b). You can see that lower resolution results in a poorer representation of the original waveform (Figure 2c).


Figure 2 (a) An analog signal (b) A PCM signal after digitization
(c) A PCM signal after digitization using a smaller number of bits of precision

As a numerical example, let's assume that a 24-bit A/D converter is used to sample an analog signal that ranges from -2.828 V to +2.828 V (5.656 V peak-to-peak). 24 bits give 2^24 (16,777,216) quantization levels. Therefore, the effective voltage resolution is 5.656 V / 16,777,216 = 337.1 nV. In the second part of this article, we will see how the resolution of the codec affects the dynamic range of the audio system.
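That resolution figure is easy to verify; a couple of lines of C reproduce the arithmetic (values taken from the example above):

    #include <stdio.h>

    int main(void)
    {
        double span_volts = 5.656;              /* -2.828 V to +2.828 V     */
        double levels = (double)(1UL << 24);    /* 2^24 quantization levels */
        printf("LSB = %.1f nV\n", span_volts / levels * 1e9);  /* ~337.1 nV */
        return 0;
    }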

PWM Output
Pulse-width modulation (PWM) is a modulation method different from PCM; it can drive an output circuit directly, without any DAC, which is particularly useful when a low-cost solution is required.

In PCM, the amplitude is encoded once per sampling period, whereas in PWM signals it is the duty cycle that describes the amplitude. PWM signals can be generated via general purpose I/O pins, or they can be driven directly using dedicated PWM timers found in many processors.

To achieve reasonably good PWM audio quality, the PWM carrier frequency should be at least 12 times the signal bandwidth, and the timer resolution (that is, the granularity with which the duty cycle can be set) should be 16 bits. Because of the carrier-frequency requirement, traditional PWM audio circuits were limited to narrow-band audio, such as subwoofers. With today's high-speed processors, however, PWM can be extended to cover a wider audio spectrum.
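As a rough sketch of how a PCM sample becomes a PWM duty cycle, the routine below scales a signed 16-bit sample into a compare value for a timer with a given number of counts per carrier period. The function and its generic "period" parameter are illustrative; the actual timer registers and update mechanism depend on the processor.

    #include <stdint.h>

    /* Map a signed 16-bit PCM sample onto [0, period] timer counts. */
    uint32_t pcm_to_pwm_duty(int16_t sample, uint32_t period)
    {
        uint32_t unipolar = (uint32_t)((int32_t)sample + 32768);   /* 0..65535 */
        return (uint32_t)(((uint64_t)unipolar * period) >> 16);    /* scale    */
    }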

The PWM stream must be low-pass filtered to remove the high frequency carrier. This is usually done with the amplifier circuit driving the speaker. Class D amplifiers have been used successfully in this configuration. When amplification is not required, a low-pass filter is sufficient for the output stage. In some low-cost applications, the sound quality is not so important, and the PWM stream can be connected directly to the speaker. In such a system, the mechanical inertia of the speaker cone acts as a low-pass filter that filters out the carrier frequency.

A brief background on audio converters

There are many ways to perform A/D conversion in an audio ADC. One traditional method is the successive-approximation scheme, which uses a comparator to compare the analog input signal against a series of intermediate D/A conversion outputs in order to converge on the final result.

But most audio ADCs today are sigma-delta converters. Rather than employing successive approximation to achieve very high resolution directly, these converters use a 1-bit ADC. To compensate for the reduced number of quantization levels, they oversample at a frequency much higher than the Nyquist rate. Converting this oversampled 1-bit stream into a lower-rate, higher-resolution stream is done with digital filtering blocks inside the converter, so that conventional PCM streams can be processed. For example, a 16-bit, 44.1 kHz sigma-delta ADC might use a 64x oversampling ratio to produce a 1-bit stream at 2.8224 MHz. A digital decimation filter then converts this oversampled stream into a 16-bit stream at 44.1 kHz.
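To make the decimation idea concrete, the fragment below collapses one block of an oversampled ±1 stream into a single higher-resolution sample by simple averaging. Real sigma-delta converters use multistage comb and FIR filters rather than a plain average, so this is only a sketch of the concept.

    #include <stdint.h>

    /* Average one block of a 1-bit (+1/-1) stream down to a 16-bit sample. */
    int16_t decimate_block(const int8_t *bits, int oversample_ratio)
    {
        int32_t sum = 0;
        for (int i = 0; i < oversample_ratio; i++)
            sum += bits[i];                          /* each entry is +1 or -1 */
        return (int16_t)((sum * 32767) / oversample_ratio);
    }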

By oversampling the analog signal, Σ-Δ ADCs relax the performance requirements of the analog low-pass filter used to limit the input signal bandwidth. These converters also have the advantage of spreading the output noise over a wider frequency spectrum than traditional converters.

In Part 2 of this article, we first explore the topic of dynamic range and accuracy, and then take a deeper look at data formats as they relate to audio processing.

Dynamic Range and Accuracy

You have probably seen dB specifications, which are used everywhere in today's market to describe various products. Table 1 lists several products and their signal quality in dB.

Table 1 Comparison of dynamic range of various audio systems

So what do these numbers actually mean? Let's start with some definitions. Consider Figure 1 a "cheat sheet" for the basic specifications that follow.


Figure 1 Relationship between some important terms in audio system

The dynamic range of the human ear (the ratio of the loudest signal level to the quietest signal level) is about 120 dB. In a noisy system, the dynamic range is described as the ratio of the maximum signal level to the noise floor. That is,

Dynamic range (dB) = peak level (dB) – noise floor (dB)

In a purely analog system, the noise floor comes from the electrical characteristics of the system itself. A digital audio signal built on top of an analog system also picks up noise from the ADC and DAC, as well as quantization error introduced when the analog signal is digitized.

Another important term is signal-to-noise ratio (SNR). In analog systems, SNR is the ratio of the nominal signal level to the noise floor, where "line level" is the nominal operating level. For professional equipment, the nominal level is usually 1.228 Vrms, which corresponds to +4 dBu. Headroom is the difference between the nominal level and the peak level, the point at which signal distortion begins to occur. The definition of SNR in digital systems is slightly different: there, SNR is defined as the dynamic range.

Now that we understand dynamic range, we can discuss how it is used in practice. Without going through a lengthy derivation, let's briefly state the 6 dB rule, which is the key to the relationship between dynamic range and computational word length. The full relationship is given in Equation 1; the convenient rule of thumb is that each additional bit of precision adds about 6 dB of dynamic range. Note that the 6 dB rule does not account for the analog subsystems in an audio design, so the non-idealities of the transducers at the input and output must be considered separately.

Dynamic range (dB) = 6.02n + 1.76 ≈ 6n dB
where n = the number of bits of precision
Equation 1: The 6 dB rule

The "6 dB rule" says that the more bits we use, the better the system quality we can achieve. In practice, however, there are only a few realistic choices. Most devices suitable for embedded media processing use one of three native word lengths: 16, 24, or 32 bits. Table 2 summarizes the dynamic range for these three word lengths.
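Applying Equation 1 to those three word lengths takes only a few lines; a minimal sketch:

    #include <stdio.h>

    int main(void)
    {
        int bits[] = {16, 24, 32};
        for (int i = 0; i < 3; i++)
            printf("%2d-bit: %6.2f dB (~%d dB)\n",
                   bits[i], 6.02 * bits[i] + 1.76, 6 * bits[i]);
        return 0;
    }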

Since we are discussing the 6 dB rule, it is worth mentioning some nonlinear quantization methods commonly used for speech signals. Telephone-quality linear PCM encoding requires 12 bits of precision. However, our ears are more sensitive to small amplitude changes at low signal levels than at high levels, so linear PCM spends more bits than necessary on the loud end of the range. The logarithmic quantization used in the A-law and μ-law companding standards achieves the quality of 12-bit linear PCM with only 8 bits of precision. To make life more convenient, some processor manufacturers build A-law and μ-law companding into the serial ports of their devices, which spares the processor core from performing the logarithmic calculations.
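For readers who want to see what such companding looks like in software, below is a sketch of a G.711-style mu-law encoder (the usual bias and clip constants are shown; production code would normally come from a vetted reference implementation or, as noted above, be handled by the serial-port hardware itself).

    #include <stdint.h>

    /* G.711-style mu-law encoder (loop form of the classic reference code). */
    uint8_t linear_to_ulaw(int16_t pcm)
    {
        const int BIAS = 0x84, CLIP = 32635;
        int x = pcm;
        int sign = (x >> 8) & 0x80;          /* keep the sign bit       */
        if (sign) x = -x;                    /* work with the magnitude */
        if (x > CLIP) x = CLIP;
        x += BIAS;

        int exponent = 7;
        for (int mask = 0x4000; (x & mask) == 0 && exponent > 0; mask >>= 1)
            exponent--;
        int mantissa = (x >> (exponent + 3)) & 0x0F;

        return (uint8_t)~(sign | (exponent << 4) | mantissa);
    }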

After reviewing Table 2, we recall again that the dynamic range of the human ear is approximately 120 dB. For this reason, 16-bit data representation is not very suitable for high-quality audio. Therefore, vendors introduced 24-bit processors that extended the dynamic range of 16-bit systems. These 24-bit systems were somewhat non-standard from a C compiler's point of view, so many audio designs in recent years have used 32-bit processing.

Table 2 Dynamic range of various fixed-point architectures

Choosing the right processor is not the whole story, because the overall quality of an audio system is determined by its "lowest-quality" component. Besides the processor, the system includes analog components such as microphones and speakers, as well as the converters that move signals between the analog and digital domains. The analog domain is beyond the scope of this discussion, but the audio converters sit at the boundary and do concern us here.

Let's say you want to use the AD1871 for audio sampling. The converter's data sheet describes it as a 24-bit converter, but its dynamic range is not 144 dB; it is 105 dB. The reason is that the converter is not a perfect system: the vendor specifies only the usable dynamic range.

If you really wanted to connect the AD1871 to a 24-bit processor, your total system SNR would be 105dB. The noise floor would be 144dB-105dB=39dB. Figure 2 is a graphical representation of this. However, there is another component in digital audio systems that we have not yet discussed: the computation within the processor core.


Figure 2 The SNR of an audio system is composed of the SNR of its weakest component.

Passing data through the processor's computational units can introduce a variety of errors. One of these is quantization error, which can occur when a series of calculations causes data values to be truncated or rounded (up or down). For example, a 16-bit processor can add a vector of 16-bit data and store the result in an extended-word-length accumulator; but when the accumulator value is finally written back to a 16-bit data register, some of the bits are truncated.
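A minimal sketch of that pattern in C: the products accumulate exactly in a wide accumulator, and precision is lost only at the final write-back (the >> 15 assumes 1.15 fractional data, and no saturation is shown).

    #include <stdint.h>

    /* Dot product of two 1.15 fractional vectors with a wide accumulator. */
    int16_t dot_product_q15(const int16_t *a, const int16_t *b, int n)
    {
        int64_t acc = 0;                       /* extended-width accumulator     */
        for (int i = 0; i < n; i++)
            acc += (int32_t)a[i] * b[i];       /* exact 16x16 -> 32-bit products */
        return (int16_t)(acc >> 15);           /* truncation back to 16 bits     */
    }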

Take a look at Figure 3 to see how calculation errors affect a real system. For an ideal 16-bit A/D converter (Figure 3a), the signal-to-noise ratio should be 16 x 6 = 96 dB. If there were no quantization errors, then 16 bits of calculation would be sufficient to maintain the SNR at 96 dB. 24-bit and 32-bit systems would, respectively, place 8 and 16 bits of dynamic range below the noise floor. Essentially, these extra bits are wasted.


Figure 3 (a) The allocation of extra bits for different word lengths in an ideal 16-bit, 96 dB SNR system, ignoring quantization error
(b) The same allocation when quantization error is taken into account

However, all digital audio systems do introduce rounding and truncation errors. If we can quantify this error, say at 18 dB (3 bits), then it is clear that 16-bit computation is not enough to maintain a 96 dB system SNR (Figure 3b). Another way to look at it is that the effective noise floor rises by 18 dB, so the overall SNR drops to 96 dB - 18 dB = 78 dB. The conclusion is that the extra bits below the conversion noise floor are what absorb the errors introduced by quantization in the computation.

Audio data format

There are many ways to represent data within a processor. The two main processor architectures used in audio processing are fixed point and floating point. Fixed-point processors are designed to operate with integers and fractions and typically natively support 16-bit, 24-bit, or 32-bit data. Floating-point processors offer very good performance and natively support 32-bit or 64-bit floating-point data types. However, these floating-point processors are generally more expensive and consume more power than their fixed-point counterparts, so all practical systems must find a balance between quality and engineering cost.

Fixed-point operations

Processors that perform fixed-point arithmetic typically represent signals in two's-complement notation. Fixed-point formats can represent both signed and unsigned integers and fractions; the signed fractional format is the most common in digital signal processing. The difference between the integer and fractional formats is the location of the binary point. For integers, the binary point lies to the right of the least significant bit; for fractions, it is usually placed just to the right of the sign bit. Figure 4a shows the integer and fractional formats.


Figure 4 (a) Fractional and integer formats (b) IEEE 754 32-bit single-precision floating point format
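To make the fractional convention of Figure 4a concrete, here is a tiny sketch of how a 16-bit word is interpreted in the signed 1.15 format (conversion helpers only, with no saturation or rounding; the Q-format naming is ours):

    #include <stdint.h>

    /* 1.15: one sign bit, fifteen fractional bits; range [-1.0, +1.0). */
    double  q15_to_double(int16_t x) { return (double)x / 32768.0; }
    int16_t double_to_q15(double v)  { return (int16_t)(v * 32768.0); }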

Although the fixed-point convention simplifies numerical operations and saves memory, it imposes a trade-off between dynamic range and precision. In applications that need to preserve high resolution while covering a wide numerical range, a radix point that can move with the magnitude of the number (in other words, an exponent) becomes necessary.

Floating-point operations

Using the floating point format, you can represent very large and very small numbers in the same system. Floating point numbers are very similar to scientific notation for rational numbers. Floating point numbers are described by a mantissa and an exponent. The mantissa determines the precision, while the exponent controls the dynamic range.

A standard governs floating-point operations on digital machines: IEEE-754 (Figure 4b). For 32-bit floating-point numbers it can be summarized as follows. Bit 31 (the MSB) is the sign bit, where 0 indicates positive and 1 indicates negative. Bits 30 through 23 form the exponent field (exp_field), a power of two biased by 127. Finally, bits 22 through 0 hold the fractional part of the mantissa. A hidden bit, an implied 1 to the left of the binary point, is assumed.

The value of a 32-bit IEEE floating-point number can be expressed by the following equation:

value = (-1)^sign_bit × 1.mantissa × 2^(exp_field - 127)

With an 8-bit exponent and a 23-bit mantissa, IEEE-754 strikes a balance between dynamic range and precision. In addition, the IEEE floating-point library includes support for additional features such as ±∞ (infinity), signed zero, and NaN (not a number).
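The formula above can be checked directly by pulling the three fields out of a 32-bit word. The sketch below handles only normal numbers (no denormals, infinities, or NaNs) and is meant purely to illustrate the bit layout:

    #include <math.h>
    #include <stdint.h>

    /* Decode a normal IEEE-754 single-precision value from its raw bits. */
    double decode_ieee754(uint32_t bits)
    {
        int      sign      = (bits >> 31) & 0x1;
        int      exp_field = (bits >> 23) & 0xFF;
        uint32_t fraction  = bits & 0x7FFFFF;

        double mantissa = 1.0 + (double)fraction / 8388608.0;  /* hidden 1 + frac/2^23 */
        double value    = ldexp(mantissa, exp_field - 127);    /* x 2^(exp_field-127)  */
        return sign ? -value : value;
    }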

Table 3 shows the minimum and maximum numbers that can be achieved from common floating-point and fixed-point types.

Table 3 Comparison of dynamic range of various data formats

Emulation on 16-bit architectures

As we explained earlier, 16-bit processing does not provide enough SNR for high-quality audio, but that does not mean you should rule out a 16-bit processor for an audio system. For example, a 32-bit floating-point machine makes it easier to code an algorithm that keeps data in its original 32-bit representation, but a 16-bit processor can also maintain 32-bit integrity through emulation at very low cost. Figure 5 shows some of the possibilities when choosing data types for an embedded algorithm.


Figure 5 Depending on the goal of an application, there can be many data types that meet system requirements.

In the remainder of this section, we describe how to implement the functionality of floating-point and 32-bit extended-precision fixed-point formats on a 16-bit fixed-point machine.

Floating-point emulation on fixed-point processors

On most 16-bit fixed-point processors, IEEE-754 floating-point functionality is provided through library calls to C/C++ or assembly language. These libraries emulate the required floating-point processing by using fixed-point multiplication and arithmetic logic. This emulation requires additional processing cycles to complete. However, as the clock of fixed-point processor cores moves into the 500 MHz - 1 GHz range, the extra cycles required to emulate IEEE-754 compliant floating-point operations are less significant.

To reduce the computational load, a "relaxed" version of IEEE-754 can be used. This means that some standard features, such as ∞ and NaN, are not implemented in the floating-point operations.

A further optimization is to use a more native type for the mantissa and exponent. For example, the Blackfin fixed-point processor architecture from Analog Devices has a register set consisting of sixteen 16-bit registers that can also be used as eight 32-bit registers. In this configuration, two 32-bit registers can fetch operands from all four half-registers per core clock cycle. To optimize the use of the Blackfin register set, a double-word format can be used. Thus, one word (16 bits) is reserved for use as the exponent, and another word (16 bits) is reserved for the fraction.
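A sketch of what such a two-word format might look like is shown below. The struct layout and helper are illustrative only, not a Blackfin library type; the point is simply that the exponent and the 1.15 fraction each occupy one 16-bit register half.

    #include <math.h>
    #include <stdint.h>

    /* Illustrative two-word "relaxed" float: 16-bit exponent, 16-bit fraction. */
    typedef struct {
        int16_t frac;   /* signed 1.15 fraction   */
        int16_t exp;    /* power-of-two exponent  */
    } relaxed_float_t;

    /* value = (frac / 2^15) x 2^exp */
    double relaxed_to_double(relaxed_float_t x)
    {
        return ldexp((double)x.frac / 32768.0, x.exp);
    }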

Double-precision fixed-point emulation

For many applications, 16-bit fixed-point data is not precise enough, yet emulated floating-point arithmetic costs too many cycles. For such applications, extended-precision fixed-point emulation may be enough to satisfy the system requirements, and on a fast fixed-point processor the added computation remains modest. Two extended-precision formats commonly used in audio are 32-bit-accurate and 31-bit-accurate fixed-point representations.

32-bit-accurate emulation

32-bit arithmetic is a natural software extension of 16-bit fixed-point processors. For processors whose 32-bit registers can be accessed in two 16-bit halves, these halves can be combined to represent a 32-bit fixed-point number. The hardware architecture of the Blackfin processor allows single-cycle 32-bit addition and subtraction.

For example, when 32-bit multiplications are performed inside a multiply-accumulate loop (as in some algorithms we will discuss shortly), 32-bit accuracy can be achieved using only 16-bit multiplications, at a cost of three cycles per multiplication. Each of the two 32-bit operands (R0 and R1) is split into two 16-bit halves (R0.H / R0.L and R1.H / R1.L).

As Figure 6 shows, emulating the 32-bit multiplication R0 x R1 with combinations of 16-bit multiply instructions requires the following operations:


Figure 6 Implementing 32-bit multiplication using 16-bit operations

* Four 16-bit multiplications to produce four 32-bit results

1. R1.L x R0.L
2. R1.L x R0.H
3. R1.H x R0.L
4. R1.H x R0.H

* Three shift operations to align the partial products in the final result (the symbol >> indicates a right shift). Since we are doing fractional arithmetic, the full result is in 1.63 format (1.31 x 1.31 = 2.62 with a redundant sign bit). In most cases this result can be truncated to 1.31 so that it fits in a 32-bit data register. The product is therefore aligned on the sign bit (that is, on the most significant bit), so that the least significant bits on the right can safely be discarded when the result is truncated.

1. (R1.L x R0.L) >> 32
2. (R1.L x R0.H) >> 16
3. (R1.H x R0.L) >> 16

The final expression for the 32-bit multiplication is:

((R1.L x R0.L) >> 32 + (R1.L x R0.H) >> 16) + ((R1.H x R0.L) >> 16 + R1.H x R0.H)

In the Blackfin architecture, these instructions can be executed in parallel to achieve an effective rate of completing a 32-bit multiplication in three cycles.
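In portable C the same decomposition looks like the sketch below. It is not Blackfin-specific and not cycle-accurate; it simply builds the full product from the four 16-bit partial products of Figure 6 and then truncates back to 1.31 (the -1.0 x -1.0 overflow corner case is ignored for brevity).

    #include <stdint.h>

    /* 1.31 x 1.31 -> 1.31 multiply built from 16-bit partial products. */
    int32_t frac32_mult(int32_t r0, int32_t r1)
    {
        int32_t  r0_h = r0 >> 16;                /* signed high halves  */
        int32_t  r1_h = r1 >> 16;
        uint32_t r0_l = (uint32_t)r0 & 0xFFFF;   /* unsigned low halves */
        uint32_t r1_l = (uint32_t)r1 & 0xFFFF;

        /* The four partial products of Figure 6. */
        uint64_t ll = (uint64_t)r1_l * r0_l;             /* R1.L x R0.L */
        int64_t  lh = (int64_t)r1_l * r0_h;              /* R1.L x R0.H */
        int64_t  hl = (int64_t)r1_h * (int64_t)r0_l;     /* R1.H x R0.L */
        int64_t  hh = (int64_t)r1_h * r0_h;              /* R1.H x R0.H */

        /* Recombine into the full 2.62 product, then keep the top bits
           of the 1.63 result, i.e. truncate to 1.31. */
        int64_t product = (int64_t)ll + (lh + hl) * 65536 + hh * ((int64_t)1 << 32);
        return (int32_t)(product >> 31);
    }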

31-bit-accurate emulation

We can reduce the cost of fixed-point multiplications that require no more than 31 bits of precision to two cycles. This technique is especially attractive for audio systems, which typically require at least 24 bits of representation, so 32 bits of precision may be overkill. Using the "6 dB rule," 31-bit-accurate emulation still maintains a dynamic range of roughly 186 dB, which leaves plenty of margin even after all quantization effects are accounted for.

From the multiplication block diagram in Figure 6, it is clear that the least-significant half-word product R1.L x R0.L contributes very little to the final result. In fact, if the result is truncated to 1.31, this product affects only the least significant bit of the 1.31 result. For many applications, the loss of precision from this single bit is a fair trade for eliminating one 16-bit multiplication, one shift, and one addition, which speeds up the emulated multiplication.

The expression for 31-bit exact multiplication is
((R1.L x R0.H) + (R1.H x R0.L) ) >> 16 + (R1.H x R0.H)

In the Blackfin architecture, these instructions can issue in parallel, so a 31-bit-accurate multiplication effectively completes in two cycles.
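The corresponding C sketch simply drops the low-by-low term; the final left shift restores 1.31 alignment by removing the redundant sign bit (again ignoring the -1.0 x -1.0 corner case).

    #include <stdint.h>

    /* 31-bit-accurate 1.31 multiply: the R1.L x R0.L term is dropped. */
    int32_t frac31_mult(int32_t r0, int32_t r1)
    {
        int32_t  r0_h = r0 >> 16,               r1_h = r1 >> 16;
        uint32_t r0_l = (uint32_t)r0 & 0xFFFF,  r1_l = (uint32_t)r1 & 0xFFFF;

        int64_t cross = (int64_t)r1_l * r0_h + (int64_t)r1_h * (int64_t)r0_l;
        int64_t acc   = (cross >> 16) + (int64_t)r1_h * r0_h;   /* ~ (r0*r1) >> 32 */
        return (int32_t)(acc << 1);            /* drop redundant sign bit -> 1.31  */
    }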

So, that’s the “scoop” on data formats used in audio processing. In the final section of this article, we’ll look at some strategies for developing embedded audio applications, focusing primarily on data transfer and building blocks in common algorithms.
