Reducing power loss in data centers using digital power supplies and optimized power devices

Publisher: bln898 | Last updated: 2012-10-24 | Source: 电子发烧友

Energy efficiency has become one of the main factors determining the success of electronic component, subsystem, and system designs. Over the past few years, computing and communications equipment manufacturers have been driving technical specifications such as computing power per watt internally and promoting them externally to users; just a few years earlier, these same companies emphasized computing power per euro.

The basic reasons for the rapid move towards increased efficiency are as follows:

(1) Global energy-saving technical standards such as the EU Code of Conduct and Energy Star have been formulated and recognized by the market;

(2) The electricity cost of large facilities is extremely high and has become an explicit cost of ownership;

(3) Limitations on the power available from existing facilities;

(4) As the scale of facilities increases, the cost also increases.

With the rapid increase in Internet bandwidth, Internet users, and Internet devices, one of the businesses most affected is the data center. While trying to add servers to handle the large amount of work, data centers have also adopted various methods to improve efficiency.

Data centers are often rated based on how efficiently they use electricity, or power usage effectiveness (PUE). While many newly built data centers can achieve a PUE of 1.2, many other existing data centers have a PUE between 3 and 7, meaning that for every 1W of power used for computing, up to 6W is consumed for cooling, lighting, and power transmission [3].
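The PUE arithmetic above can be checked in a few lines of Python; the function name `overhead_watts` is purely illustrative:

```python
# PUE = total facility power / IT equipment power.
# At PUE 3.0, every 1 W of compute implies 2 W of overhead
# (cooling, lighting, power transmission).

def overhead_watts(it_power_w: float, pue: float) -> float:
    """Power consumed by non-IT loads for a given IT load."""
    return it_power_w * (pue - 1.0)

print(overhead_watts(1.0, 3.0))  # 2.0 W of overhead per watt of compute
print(overhead_watts(1.0, 7.0))  # 6.0 W, the worst case cited above
```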

Looking at ratings such as PUE and the way server CPUs and memory subsystems are powered, it becomes clear that saving as much power as possible in the server CPU can save power throughout the data center.

Figure 1. Typical server power transmission model.

A simple model of the server power transmission process is shown in Figure 1. Assuming the industry average PUE is 3.0, it is clear that for every 1W of power saved by the server itself, the data center can save another 2W of power. There are many ways to save the power consumed by the server itself, but the most important methods are as follows:

(1) Reduce the power of the CPU and DDR memory used in the server;

(2) Improve the efficiency of the voltage regulator (VR) solutions that power the CPU and DDR memory;

(3) Reduce the power supplied to the rest of the board, including the point-of-load converters that provide most of the system voltage rails.

Modern server CPUs have made great strides in power optimization, but the CPU is still the single largest power load on a server motherboard, followed by memory. Data centers continue to deploy high-end CPUs to support the data traffic and computing power the market demands. The industry has therefore focused on improving the efficiency of VR solutions to reduce power consumption in servers and across the data center.

To quantify the effect of improved VR efficiency, a typical dual-processor server model was built, with two channels of DDR memory per processor. A typical fully loaded server rack uses up to 9.5kW just to power the CPUs and DDR memory: 2 × 130W CPUs = 260W, plus 4 × 60W DDR memory banks = 240W, giving 500W per server, multiplied by the 19 2U servers in a fully loaded rack. In many existing systems, the efficiency of the VR solutions (which convert the server's 12V rail to the CPU or DDR voltage) is estimated at 85% [2]. In this case, at a PUE of 3.0, 5.0kW of power is wasted per rack due to VR inefficiency.
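The rack model above can be reproduced directly; every constant below comes from the text (the 85% VR efficiency and the PUE of 3.0 are the stated assumptions):

```python
# Per-server CPU + DDR load in the dual-processor model described above.
CPU_W, CPUS = 130, 2
DDR_BANK_W, DDR_BANKS = 60, 4     # two DDR channels per processor
SERVERS_PER_RACK = 19             # 2U servers in a fully loaded rack

server_load = CPU_W * CPUS + DDR_BANK_W * DDR_BANKS   # 500 W per server
rack_load = server_load * SERVERS_PER_RACK            # 9.5 kW per rack

VR_EFF, PUE = 0.85, 3.0
vr_loss = rack_load / VR_EFF - rack_load              # loss inside the VRs
wasted = vr_loss * PUE                                # total facility waste
print(rack_load, round(wasted))  # 9500 W of load, ~5029 W wasted per rack
```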

The relationship between the improvement of VR solution efficiency and the saved power in this model is shown in Figure 2. It can be seen that in this model, every 1% increase in VR efficiency can save nearly 400W of power.
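A short sketch of the same model shows where the roughly 400W-per-percentage-point figure comes from; `rack_savings_w` is an illustrative name, and the load and PUE values are those assumed in the text:

```python
RACK_LOAD_W = 9_500   # CPU + DDR load per fully loaded rack (from the model)
PUE = 3.0

def rack_savings_w(eff_old: float, eff_new: float,
                   load_w: float = RACK_LOAD_W, pue: float = PUE) -> float:
    """Data-center power saved per rack when VR efficiency improves."""
    input_old = load_w / eff_old   # 12 V input power at the old efficiency
    input_new = load_w / eff_new
    # Every watt saved inside the server saves PUE watts facility-wide.
    return (input_old - input_new) * pue

print(round(rack_savings_w(0.85, 0.86)))  # ~390 W for one percentage point
print(round(rack_savings_w(0.85, 0.90)))  # ~1863 W for a 90%-efficient solution
```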

Figure 3. Multiphase VR solutions are capable of delivering high current at high efficiency.

Providing such high current levels at low voltages to server CPUs or DDR memory banks requires a multiphase solution, as shown in Figure 3. Multiphase solutions have been used for many generations of servers, but new solutions using digital control and power management devices can provide the high efficiency required by today's new servers. Compared to an average efficiency of 85%, the new solution can achieve a peak efficiency of 93% or higher and a full load efficiency of more than 90% [4]. As can be seen in Figure 2, more than 1kW of power can be saved per rack simply by implementing this solution.

Figure 4. Efficiency of IR's multiphase solution using the IR3550 PowIRstage and CHiL digital control.

As shown in Figure 4, the IR solution utilizes a combination of digital power technology with dynamic phase control and variable gate drive, along with the high-efficiency PowIRstage solution IR3550, to greatly improve efficiency.

Dynamic phase control is a feature of digital power control ICs: the controller accurately measures the load current and uses user-defined thresholds to turn individual phases on or off to maximize efficiency. In a 4-phase design, if only one phase is needed at low current, the switching losses of the other three phases are wasted, so turning them off improves efficiency; as the current rises, those phases are turned back on. This technique is often called phase shedding, but dynamic phase control is considerably more sophisticated. IR sheds phases based on the average load current, which yields higher efficiency during phase shedding. However, the CPU in a server can raise its current demand very quickly, often by more than 100A; if the controller used the same averaging technique to add phases, the system would be likely to fail, because a single phase might momentarily carry more than 100A of current, so phases must be added immediately based on the instantaneous current. In addition, the internal digital control loop automatically maintains loop stability no matter how many phases are active, something previous analog technology could not achieve.
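A minimal sketch of this add-fast/shed-slow policy follows; the 40A per-phase budget and the smoothing factor are illustrative assumptions (a real controller's thresholds are user-programmable), not values from any IR part:

```python
import math

PHASE_LIMIT_A = 40.0   # assumed safe current budget per phase (illustrative)
MAX_PHASES = 4
ALPHA = 0.05           # smoothing factor for the shedding average (assumed)

class DynamicPhaseControl:
    def __init__(self):
        self.avg_a = 0.0
        self.active = MAX_PHASES   # start with all phases on (safe default)

    def update(self, inst_current_a: float) -> int:
        self.avg_a += ALPHA * (inst_current_a - self.avg_a)
        # Add phases on the *instantaneous* current: a fast CPU load step
        # must never leave one phase carrying the whole load.
        need_now = min(MAX_PHASES,
                       max(1, math.ceil(inst_current_a / PHASE_LIMIT_A)))
        if need_now > self.active:
            self.active = need_now
        else:
            # Shed phases only on the slow average, for light-load efficiency.
            need_avg = min(MAX_PHASES,
                           max(1, math.ceil(self.avg_a / PHASE_LIMIT_A)))
            self.active = max(need_now, need_avg)
        return self.active

ctrl = DynamicPhaseControl()
print(ctrl.update(10.0))    # light load: sheds down to 1 phase
print(ctrl.update(150.0))   # fast load step: all 4 phases turn on immediately
```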

Figure 5. When the gate drive value (VGD) changes, RDSon changes, but the switching loss decreases.

The effect of variable gate drive is shown in Figure 5. If the current in a given phase is low, the gate drive voltage can be reduced to cut that phase's gate drive losses. The trade-off is a slightly higher RDS(on) value, and therefore higher conduction losses for that phase; but if the right values are chosen for the selected MOSFET, the overall power loss is reduced. If the phase current is high, the gate drive voltage is increased to lower RDS(on) and conduction losses. IR's CHiL® digital control technology allows these parameters to be optimized for the MOSFET, the number of phases, and the current level per phase, achieving the maximum efficiency gain in server VR solutions.
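The trade-off can be illustrated with a toy loss model; the switching frequency, gate-charge figure, and RDS(on) values below are assumptions chosen for illustration, not data for any real MOSFET:

```python
F_SW = 500e3            # switching frequency, Hz (assumed)
Q_G_PER_V = 10e-9       # gate charge per volt of drive, C/V (assumed)

# Assumed RDS(on) at two gate drive voltages for an illustrative MOSFET.
RDS_ON = {5.0: 1.5e-3, 7.0: 1.2e-3}   # ohms at V_GD = 5 V and 7 V

def phase_loss_w(i_phase_a: float, v_gd: float) -> float:
    """Gate-drive loss plus conduction loss for one phase."""
    gate_loss = Q_G_PER_V * v_gd * v_gd * F_SW   # Qg scales with V_GD here
    cond_loss = i_phase_a ** 2 * RDS_ON[v_gd]
    return gate_loss + cond_loss

# At low phase current the lower drive wins; at high current the higher
# drive wins, because conduction loss dominates.
for i in (5.0, 30.0):
    best = min(RDS_ON, key=lambda v: phase_loss_w(i, v))
    print(f"{i:>4.0f} A -> drive at {best} V")
```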
