Improving the efficiency of wireless sensor nodes using voltage control
Wireless sensor nodes are powering the growth of the Internet of Things (IoT). Two of the most important requirements for these nodes are that they react to changes in their local environment and that they run from a single battery charge for years - potentially the entire expected lifetime of the sensor. To ensure that the sensors react only to important changes, increasingly complex software will be downloaded to them. This, in turn, requires efficient processors, such as those based on the 32-bit ARM Cortex-M architecture or, for simpler sensors, modified versions of 8-bit cores such as the 8051.

System-level power consumption is determined by many variables beyond the power efficiency of the processor itself. To improve efficiency, low-energy MCUs incorporate many intelligent peripherals that control hardware on behalf of the core processor. These peripherals operate at different times, and their power requirements can vary from millisecond to millisecond, so a flexible power architecture is needed to support them.

The reason for controlling various parts of the system, and even peripherals integrated into the MCU itself, is to support low duty cycles. The duty cycle determines how much time an MCU's processor spends awake and processing data and how much time it spends powered down and sleeping. A low duty cycle is important because it keeps the processor asleep almost all of the time, saving energy. This strategy has proven very effective in electric-meter designs, where the processor core may sleep for 99% of its entire lifespan, waking only for data coming in from sensors, usually at a scheduled time or in response to an unscheduled interrupt. Smart peripherals support this by checking the incoming data without waking up the processor.
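The payoff of such a low duty cycle can be illustrated with a simple average-current calculation; the figures below are illustrative assumptions, not measurements from any particular MCU:

```c
/* Average current of a duty-cycled MCU: the processor draws a high
 * active current for a small fraction of the time and a tiny sleep
 * current for the rest. All figures are illustrative assumptions. */
static double avg_current_ua(double active_ua, double sleep_ua,
                             double duty_cycle)
{
    return active_ua * duty_cycle + sleep_ua * (1.0 - duty_cycle);
}

/* Example: 2 mA active, 1 uA deep sleep, awake 1% of the time gives
 * an average of about 21 uA - roughly two orders of magnitude less
 * than staying awake continuously. */
```

Even at a 1% duty cycle, the active current still dominates the average, which is why the techniques below attack active-mode consumption as well.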
Only when a threshold is exceeded does the peripheral trigger an interrupt, letting the processor handle the change in the environment. This strategy ensures that only significant changes are processed; readings that represent little change can be queued in memory and handled when the processor core is woken for other reasons.

For example, in a metering application, a register encoder records the flow of natural gas or water as a series of pulses. Without hardware support, the MCU's processor must wake up and sample the state of an I/O pin to determine whether the switch is open or closed. If it is a physical reed switch, additional processing is required to debounce the switch and manage the pull-up resistors, both to validate each pulse and to minimize the current flowing through the closed switch.

A more energy-efficient approach is to use a dedicated input capture timer that operates automatically while the device is in sleep mode. Switch closures can then be accumulated in hardware registers with little software intervention. Features such as switch debouncing, pull-up resistor management, and self-calibration can be integrated directly in hardware. With two timer inputs, a quadrature decoding function can be supported to determine the flow direction. This provides both backflow detection and anti-tamper protection, each of which can trigger an interrupt that lets the processor react and send a warning message. A dedicated low-power input capture timer consumes only 400 nA at 3.6 V even at sampling rates up to 500 Hz, compared with more than 1 μA for an equivalent software implementation.

Another example is preparing a message for RF transmission, where the data must be manipulated multiple times. The 20-byte message payload to be transmitted from the meter to the collector is held temporarily in SRAM once it has been prepared by software.
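Returning briefly to the metering example, the quadrature decoding that the dual-input capture timer performs in hardware can be modeled in software; the transition-table approach below is a standard technique for two phase-shifted inputs, not a description of any vendor's silicon:

```c
#include <stdint.h>

/* Software model of dual-input quadrature decoding: two phase-shifted
 * pulse inputs (A, B) yield both a pulse count and a flow direction.
 * Index = (prev_state << 2) | new_state, where state = (A << 1) | B.
 * Invalid transitions (both inputs changing at once) count as 0. */
static const int8_t quad_table[16] = {
     0, -1, +1,  0,
    +1,  0,  0, -1,
    -1,  0,  0, +1,
     0, +1, -1,  0
};

typedef struct {
    uint8_t prev;   /* last sampled (A<<1)|B state */
    int32_t count;  /* signed pulse count; negative = backflow */
} quad_decoder_t;

/* Feed one sample of the A and B inputs; returns the step (-1/0/+1). */
static int8_t quad_step(quad_decoder_t *q, uint8_t a, uint8_t b)
{
    uint8_t state = (uint8_t)((a << 1) | b);
    int8_t step = quad_table[(q->prev << 2) | state];
    q->prev = state;
    q->count += step;
    return step;
}
```

A hardware timer does exactly this per sample without waking the core; a negative running count is what would flag backflow or tampering.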
To ensure its integrity once it is received at the destination, a cyclic redundancy check (CRC) is calculated and appended to the end of the message. The entire message then needs to be encoded using a scheme such as Manchester or 3-out-of-6 coding to increase transmission reliability, and the encoded message is passed to the radio transceiver over a serial interface. A dedicated packet processing engine (DPPE) can perform the CRC calculation, encoding, and transfer to the transceiver far more efficiently than software, allowing the processor to sleep while this happens. Using the DPPE not only reduces the time required to perform these functions but also reduces current consumption during that time, because the flash memory, which tends to draw a lot of current, is not accessed; all operations run from local memory. The end result can be a 90% reduction in active-mode power.

Figure 1: Comparison of execution time and current consumption of software and DPPE for packet CRC and encoding tasks.

To give designers a number of ways to reduce lifetime energy consumption, MCUs designed for such applications offer a variety of sleep modes that gradually power down the various component cores, storing their states in nonvolatile memory or dedicated low-leakage registers until almost all of the device is powered down. Deep sleep, for example, powers down all but core peripherals such as the real-time clock, which also eliminates the need for circuitry such as the phase-locked loops that drive on-chip logic clocks.

Low-energy sleep modes greatly extend battery life in sensor node applications, but power consumption can be reduced further while the MCU is active. In active mode, the dynamic power consumption of any logic circuit is given by C·V²·f, where C is the total capacitance of the switched circuit paths within the device, V is the supply voltage, and f is the operating frequency.
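The packet-preparation steps described above - CRC calculation followed by line coding - can be sketched in software for illustration. The CRC-16/CCITT-FALSE polynomial and the IEEE Manchester convention below are assumptions; the article does not specify which variants the metering protocol uses:

```c
#include <stdint.h>
#include <stddef.h>

/* CRC-16/CCITT-FALSE (poly 0x1021, init 0xFFFF) over the payload.
 * The polynomial a real metering protocol uses may differ; this
 * choice is an illustrative assumption. */
static uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Manchester-encode one byte (IEEE convention: 0 -> 01, 1 -> 10),
 * doubling it to 16 transmitted bits, MSB first. */
static uint16_t manchester_encode(uint8_t byte)
{
    uint16_t out = 0;
    for (int b = 7; b >= 0; b--) {
        out <<= 2;
        out |= ((byte >> b) & 1) ? 0x2 : 0x1;
    }
    return out;
}
```

A DPPE performs exactly these per-byte transformations in hardware, which is why offloading them lets the flash - and the core - stay powered down.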
To maximize system design flexibility, the process technology used by the MCU supports supply voltages up to 3.6 V. However, because operating at a lower voltage saves energy, the internal circuits run from a supply that can be set to 1.8 V or even less. Because they are relatively simple to implement, most MCU vendors use linear regulators, usually based on a low-dropout (LDO) design, to convert the input voltage from the battery pack to the required internal supply; but the simplicity of linear regulators comes at the cost of low efficiency. Multiple LDOs are required in the system to supply the different peripherals outside the MCU, such as an RF transceiver, each of which may be powered directly from the battery. The problem with this arrangement is that if, for example, the battery supplies 3.6 V to an RF transceiver whose internal circuits run at 1.8 V, the conversion efficiency is only 50%; only as the battery voltage falls toward 1.8 V does the conversion efficiency for that 1.8 V transceiver improve. This effect needs to be considered for each peripheral outside the MCU.
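These two relationships - LDO efficiency and dynamic power - are simple enough to capture directly; a minimal sketch, with illustrative component values:

```c
/* Two first-order relations from the text above:
 *  - An LDO's efficiency is at best Vout/Vin: the voltage difference
 *    is dropped (and dissipated) across the pass element.
 *  - Dynamic power scales as C*V^2*f, so halving the supply voltage
 *    quarters the dynamic power at a given clock frequency. */
static double ldo_efficiency(double v_out, double v_in)
{
    return v_out / v_in;
}

static double dynamic_power(double c_farads, double v_volts, double f_hz)
{
    return c_farads * v_volts * v_volts * f_hz;
}

/* Example: a 1.8 V core fed from a fresh 3.6 V battery through an LDO
 * is at best 50% efficient, while running the same logic at 1.8 V
 * instead of 3.6 V cuts its dynamic power by a factor of four. */
```

The quadratic voltage term is why lowering the supply pays off so much more than lowering the clock frequency alone.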
Figure 2: Energy efficiency comparison of LDO- and DC/DC converter-based MCU designs.

Silicon Labs’ approach

MCUs such as the C8051F960 or SiM3L1xx employ a switching DC/DC converter, which yields much higher conversion efficiency. This technique uses pulse-width modulation (PWM) to feed packets of charge to an output circuit, where a combination of inductors and capacitors smooths the packets into a constant output voltage and current suitable for the load. An off-chip DC/DC converter can perform this function, but it increases the system’s component count, which size-constrained designs - as sensor nodes often are - may not be able to afford.

In addition to reducing the active-mode current of the MCU, an integrated, high-efficiency DC/DC converter helps reduce the current demand of other parts of the system. Overall power consumption can be minimized by setting the DC/DC converter’s output voltage to the lowest acceptable level for the external peripherals that are controlled by the MCU and fed from its voltage output line. For an external RF transceiver, this output can be set to 1.8 V to reduce the transceiver’s overall current demand.

Integrated DC/DC converters offer further opportunities for circuit-level energy optimization, since designers can trade voltage against performance to suit the target application. For example, the EFM32 Pearl Gecko’s energy management unit provides programmable control of the on-chip regulator. This makes it possible to shut down the regulator when the battery voltage has dropped enough to hurt conversion efficiency, at which point it makes more sense to drive the MCU circuitry directly from the battery without intermediate conversion. Another example is during sleep, when a simpler, low-current regulator drives only the real-time clock to ensure it delivers a wake-up interrupt at the appropriate time.

Figure 3: The internal voltage architecture of the EFM32 Pearl Gecko shows bypass lines.
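The regulator-mode decision just described can be sketched as follows; the threshold value and all names here are hypothetical illustrations, not the actual EFM32 EMU register interface:

```c
#include <stdint.h>

/* Hypothetical sketch of the regulator-mode decision the text
 * describes for the EFM32 Pearl Gecko's energy management unit:
 * run the buck DC/DC converter while the battery is well above the
 * core voltage, and bypass it (driving the logic directly from the
 * battery) once there is too little headroom for efficient
 * conversion. Threshold and identifiers are illustrative only. */
typedef enum { REG_MODE_DCDC, REG_MODE_BYPASS } reg_mode_t;

#define CORE_VOLTAGE_MV       1800u
#define DCDC_MIN_HEADROOM_MV   400u  /* assumed buck-stage minimum */

static reg_mode_t select_regulator_mode(uint16_t battery_mv)
{
    if (battery_mv >= CORE_VOLTAGE_MV + DCDC_MIN_HEADROOM_MV)
        return REG_MODE_DCDC;    /* efficient buck conversion */
    return REG_MODE_BYPASS;      /* too little headroom: go direct */
}
```

In a real design this policy would be driven by the on-chip voltage monitor rather than an explicit millivolt reading, but the trade-off is the same.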
Some circuits bypass the on-chip DC/DC converter to avoid double conversion. For example, flash memory blocks often contain charge pumps that provide the higher voltage required on the memory lines during write operations. Analog modules can likewise be driven either directly from the battery or from the DC/DC converter, depending on the application's needs; connecting directly to the battery rather than to a switching regulator, for example, helps reduce noise in the analog circuits. MCUs with integrated power conversion, such as those in the Silicon Labs families described here, therefore give designers the means to tune system-level energy consumption, yielding more efficient sensor nodes and longer battery life.