DSP system power management technology

In portable applications, low power consumption is a key differentiator that determines product size and operating time. For example, if you choose a portable DVD player as a distraction during a trans-oceanic flight, battery life will be one of your top selection criteria. Power efficiency is determined by hardware design and component selection, as well as by software-based runtime power management techniques. In this article we focus on the more commonly used software-based techniques. We begin by explaining power management techniques that can be used in embedded systems and the challenges they pose in real-time applications. The second half of the article shows how these techniques can be integrated into a real-time operating system (RTOS) for digital signal processors (DSPs), allowing application developers to choose the specific techniques that meet their application requirements. We use the Texas Instruments (TI) DSP/BIOS operating system as an example of how to implement runtime power management in software.
  Runtime Power Management Techniques

  Although we will discuss specific power management techniques that extend a standard multithreaded operating system (OS), it should be emphasized that using a preemptive multithreaded OS can by itself yield significant power savings. Real-time applications that do not use an OS often have to poll interfaces periodically to detect events, which is quite inefficient from a power perspective. Using an OS allows the application to adopt an interrupt-driven model, in which code executes only when needed, in response to external events. In addition, when an OS-based application has nothing to do, it falls into the idle thread, at which point a low-power operating mode can be enabled to reduce power consumption.
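  As an illustration, here is a minimal sketch of such an idle-time hook, assuming a DSP/BIOS-style setup in which a plain C function can be registered (via the configuration tool) to run from the idle loop. The function name is invented, and the inline "idle" instruction is written in C55x compiler style; both are assumptions, not a specific vendor API.

      /* Hypothetical idle-loop hook: registered to run whenever no other
       * thread is ready.  The "idle" instruction stops the CPU clock until
       * the next enabled interrupt arrives. */
      void myIdleHook(void)
      {
          /* Any housekeeping that must finish before the clock stops
           * (flushing buffers, logging, etc.) would go here. */

          asm(" idle");   /* C55x-style inline idle instruction (assumption) */
      }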

  However, an OS needs to provide much more than simply enabling an idle mode for the DSP core. In practice, a large amount of power is consumed by peripherals, both on-chip and external, and memory also consumes a significant amount of power, so any power management approach must include support for managing peripheral power consumption. Furthermore, the quadratic relationship between voltage and power consumption means that it can be more efficient to execute code at a lower clock rate that permits a lower voltage than to execute at the highest clock rate and then go idle. The following sections outline the main opportunities for implementing power management support in the operating system:

  System Power-Up Behavior: At reset, the processor and its on-chip peripherals generally come up fully powered at the highest clock rate. Inevitably, some resources are not needed right away, or are never used by the application at all. For example, an MP3 player rarely uses its USB port to communicate with a PC. At startup, the operating system must provide the application with a mechanism to configure the system so that unneeded power-consuming devices are turned off or idled.
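  A minimal sketch of that startup step is shown below. It assumes a memory-mapped clock-gate register and bit assignments that are invented for illustration; on a real device the equivalent control comes from the data manual or the chip support library.

      #include <stdint.h>

      /* Hypothetical peripheral clock-gate register (address and bit layout
       * are assumptions; use the device's documented definitions instead). */
      #define PERIPH_CLK_GATE   (*(volatile uint16_t *)0x8020u)
      #define CLK_USB           (1u << 0)
      #define CLK_UART1         (1u << 2)

      /* Called once from main(), before the scheduler starts: gate the clocks
       * of peripherals this product never uses (e.g., USB on an MP3 player). */
      static void boardPowerInit(void)
      {
          PERIPH_CLK_GATE |= (uint16_t)(CLK_USB | CLK_UART1);
      }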

  Idle Mode: Active power consumption in CMOS circuits occurs only when the circuits are clocked, so turning off unnecessary clocks eliminates unnecessary active power consumption. Most DSPs include mechanisms to temporarily stop active CPU power consumption while waiting for external events. Idling of the CPU clock is usually triggered by a "stop" or "idle" instruction, executed when the application or operating system has nothing to do. Some DSPs are partitioned into multiple clock domains that can be idled individually to stop active power consumption in unused modules. For example, TI's TMS320C5510 DSP has six clock domains that can be selectively idled: the CPU, cache, DMA, peripheral clocks, clock generator, and external memory interface.
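  A sketch of how those domains might be idled is shown below. On the C5510 the selection is made in an idle configuration register and committed by executing the idle instruction; the register access, symbolic names, and bit positions used here are assumptions standing in for the definitions in the device's documentation and chip support library (on the real part the register sits in I/O space).

      #include <stdint.h>

      /* Hypothetical view of the C5510 idle configuration register (ICR).
       * The real register lives in I/O space; the address and bit positions
       * below are placeholders, not the documented values. */
      #define ICR          (*(volatile uint16_t *)0x0001u)
      #define IDLE_CPU     (1u << 0)
      #define IDLE_DMA     (1u << 1)
      #define IDLE_CACHE   (1u << 2)
      #define IDLE_PERIPH  (1u << 3)
      #define IDLE_CLKGEN  (1u << 4)
      #define IDLE_EMIF    (1u << 5)

      static void enterLowPowerWait(void)
      {
          /* Stop the CPU and cache domains, but leave the DMA and peripheral
           * domains running so a serial port can still wake the system. */
          ICR = IDLE_CPU | IDLE_CACHE;

          asm(" idle");   /* commit the selection; clocks stop until an interrupt */
      }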

  In addition to supporting idling of the DSP and its on-chip peripherals, the operating system must also provide mechanisms for idling external peripherals. For example, some codecs have built-in low-power modes that can be activated. One challenge is peripherals such as watchdog timers: a watchdog must normally be serviced at predefined intervals to keep it from firing, so power management techniques that slow down or stop processing can inadvertently cause application failures. The OS should therefore allow the application to disable such peripherals before entering a sleep mode.
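  The watchdog point can be made concrete with a small sketch. The driver interface below is hypothetical; the idea is simply that the watchdog is parked before the processor stops and re-armed as soon as it wakes.

      /* Hypothetical driver interface for an external watchdog and for the
       * device-specific sleep entry; all names are placeholders. */
      void wdogSuspend(void);   /* stop the watchdog counter                  */
      void wdogResume(void);    /* restart it and reset its service window    */
      void cpuSleep(void);      /* enter a sleep mode; returns after wake-up  */

      /* Enter sleep without risking a spurious watchdog reset. */
      void sleepWithWatchdogParked(void)
      {
          wdogSuspend();   /* processing is about to stop, so park the watchdog */
          cpuSleep();      /* blocks here until a wake-up interrupt occurs      */
          wdogResume();    /* normal servicing resumes with the application     */
      }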

  Power-Off: Although idle mode eliminates active power consumption, static power is consumed even when the circuits are not switching, mainly due to leakage such as reverse-bias junction leakage. If a system includes a block that does not need to be powered at all times, the operating system can reduce power consumption by powering that subsystem only when it is needed. Until now, embedded system developers have put little effort into minimizing static power consumption, because CMOS circuits traditionally consume very little static power. However, newer, higher-performance transistors leak significantly more current, demanding renewed attention to static power reduction and more sophisticated sleep modes.
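  A minimal sketch of OS-controlled power-off, assuming the enable pin of the subsystem's voltage regulator is wired to a GPIO (the register address and pin number are invented for illustration):

      #include <stdbool.h>
      #include <stdint.h>

      /* Hypothetical GPIO data register driving the enable input of the
       * regulator that powers an external block (e.g., a Bluetooth module). */
      #define GPIO_DATA        (*(volatile uint16_t *)0x8030u)
      #define SUBSYS_PWR_PIN   (1u << 4)

      static void subsystemSetPower(bool on)
      {
          if (on) {
              GPIO_DATA |= SUBSYS_PWR_PIN;            /* apply power            */
              /* A real driver would also wait for the supply to settle and
               * re-initialize the block before first use. */
          } else {
              GPIO_DATA &= (uint16_t)~SUBSYS_PWR_PIN; /* remove power: no leakage */
          }
      }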

  Voltage and Frequency Scaling: Active power consumption scales linearly with switching frequency but with the square of the supply voltage. Simply running an application at a lower frequency saves little compared with running it at the full clock frequency and then going idle. However, if the lower frequency is compatible with a lower operating voltage available on the platform, then significant savings become possible by reducing the voltage, precisely because of this square-law relationship. This has motivated a large body of academic research on saving power through voltage scaling.
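  The arithmetic is easy to check with the usual dynamic-power model P ≈ C·f·V². For a fixed number of cycles the frequency cancels out of the energy, so energy scales with V² alone; the operating points below are illustrative numbers, not data for any particular device.

      #include <stdio.h>

      /* Relative dynamic energy for a fixed workload: E = C * cycles * V^2,
       * so the ratio depends only on the supply voltages (ignoring static
       * power and idle-time consumption). */
      static double relativeEnergy(double volts, double refVolts)
      {
          return (volts * volts) / (refVolts * refVolts);
      }

      int main(void)
      {
          /* Illustrative operating points: 200 MHz @ 1.6 V vs. 100 MHz @ 1.1 V. */
          printf("run fast, then idle : %.2f\n", relativeEnergy(1.6, 1.6)); /* 1.00  */
          printf("scaled frequency+V  : %.2f\n", relativeEnergy(1.1, 1.6)); /* ~0.47 */
          return 0;
      }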

  Although voltage scaling is a potentially very attractive way to reduce power consumption, it must be used carefully in real-world applications, because we need to be sure the system can still meet its real-time deadlines. Reducing the voltage (and therefore the CPU frequency) changes the execution time of a given task, potentially causing the application to miss a real-time deadline. Even if the new frequency is compatible with the deadlines, problems may arise if the latency of switching the frequency and voltage is too long. Factors that affect this latency include the following (a sketch of a typical scaling sequence follows the list):

  * The time required to reprogram the voltage regulator

  * Whether the DSP can continue to execute any other code during the voltage change

  * The need to reprogram peripherals, such as serial ports or external memory interfaces, whose timing is derived from the clock being changed. For example, a reduction in the CPU clock rate may require reprogramming the number of wait states used for accessing external memory.

  * The possible need to reprogram the timer used to generate the operating system clock tick, which affects the accuracy of the operating system's time base.
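  The sketch below shows the kind of sequence a downward frequency/voltage switch might involve, and why each step adds latency. Every function name is a hypothetical placeholder for board- and device-specific code; the ordering (drop the frequency before dropping the voltage, and the reverse when scaling up) is the usual rule of thumb rather than a vendor requirement.

      /* Hypothetical helpers, each one board/device specific. */
      void pllSetFrequency(unsigned megahertz);      /* stop and relock the PLL     */
      void emifRetime(unsigned megahertz);           /* adjust wait states/refresh  */
      void serialPortRetime(unsigned megahertz);     /* re-derive bit/baud clocks   */
      void osTickRecalibrate(unsigned megahertz);    /* keep the OS time base true  */
      void regulatorSetVoltage(unsigned millivolts); /* I2C write plus settle time  */

      /* Move to a lower operating point; the sum of these steps is the switch
       * latency that must be compared against the application's deadlines. */
      void scaleDown(unsigned newMHz, unsigned newMilliVolts)
      {
          pllSetFrequency(newMHz);             /* frequency drops first          */
          emifRetime(newMHz);
          serialPortRetime(newMHz);
          osTickRecalibrate(newMHz);
          regulatorSetVoltage(newMilliVolts);  /* voltage can safely follow      */
      }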

  Although the actual latency of voltage scaling varies with the DSP selected and the number of peripherals that must be reprogrammed, in many systems it can reach a few hundred microseconds or even a few milliseconds, which makes voltage scaling impractical for many real-time applications. Despite these weaknesses, it is still possible to take advantage of voltage scaling in applications that require full processing power only in certain predictable modes. For example, a portable music player may use a DSP both for MP3 decoding and for the general control processing required by the user interface. If only MP3 decoding requires the full clock rate, the DSP can reduce its voltage while performing user-interface functions and operate at full power only when music data begins to flow to the DSP.
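  In an RTOS with a power manager, that mode switch would be driven from the application, for example through a setpoint mechanism. The sketch below illustrates the idea; the call and setpoint names are assumptions rather than the exact PWRM API, whose names and arguments depend on the DSP/BIOS release.

      /* Hypothetical setpoint indices; which table entry maps to which
       * voltage/frequency pair is defined by the application's configuration. */
      #define SETPOINT_UI_ONLY     0u   /* low clock, low voltage   */
      #define SETPOINT_MP3_DECODE  1u   /* full clock, full voltage */

      /* Stand-in for a PWRM-style "change setpoint" call (assumed interface). */
      extern int powerChangeSetpoint(unsigned setpoint);

      /* User pressed play: raise the operating point before audio data flows. */
      void onPlayPressed(void)
      {
          powerChangeSetpoint(SETPOINT_MP3_DECODE);
          /* ... start streaming and decoding ... */
      }

      /* Playback stopped: only the user interface remains active. */
      void onPlayStopped(void)
      {
          powerChangeSetpoint(SETPOINT_UI_ONLY);
      }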

  Implementing Power Management in a DSP RTOS

  A subset of the power management techniques described above has already been implemented in a DSP RTOS. To better illustrate how power management can be built into an RTOS, we discuss that implementation in more detail.

  As the previous discussion has shown, how a particular system reduces power consumption depends primarily on the nature of the application and on the options offered by the DSP and the surrounding peripherals. The key design goals are therefore efficiency and flexibility. Although the implementation described below targets a specific RTOS, the concepts are easily applied to other operating systems, or even to an application environment without an operating system.

  Requirements for a Power Manager (PWRM)

  The key requirements for the first power manager implementation are as follows:

  * Power management actions are application-triggered, not OS-triggered. The primary decision to change the DSP's operating mode or functionality is made by the application and is driven by PWRM calls. However, the OS can (and should) automatically take power-saving actions of its own, as long as those actions do not affect the application.