Analysis of power supply issues in data center design

Publisher: 陈书记 | Last updated: 2012-02-05 | Source: 21ic

1. Too much electricity

When systems engineers design the power supply for servers across a company, they work the way civil engineers design a tunnel: it has to accommodate the maximum amount of traffic. Even though most of the traffic in the tunnel is just commuter cars, the engineers still make sure it can handle large trucks or tanker trucks.

The same is true for servers. Power requirements are usually specified for the maximum system configuration and load. Vendors could specify different power supplies for different configurations, but doing so costs more money, so they tend to fit the maximum-rated power supply in every model regardless of the power actually used.

Take the servers shown in the figure as an example. The two servers come from the same series, WasteTech's Escalente line. The Escalente 5000SUX on the left requires only 60W even when fully equipped with hardware, while the 6000SUX on the right consumes 540W in its highest configuration. Yet the two servers use the same power supply. That supply gives the 6000SUX all the power it needs, but it delivers far more than the lower-power server can use, and the excess is wasted.

Figure 1: Servers with different power requirements are often equipped with the same power supply, resulting in inefficient energy use.

But the problem isn't limited to the same family of servers. Oftentimes, vendors choose a single power supply that works for all devices. Again, the problem is waste. While it's cheaper for vendors to use the same power supply for all devices, you're ultimately responsible for the inefficiency of the power supply. Worse still, the environment pays the price, too, because each device's inefficient use of power creates more carbon emissions.

The good news is that many companies like Dell offer different power configurations, allowing consumers to choose the right power supply for each server based on its operating conditions. This will significantly reduce energy consumption, especially when this configuration method is used throughout the server's entire service life.
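To put rough numbers on the waste, the short Python sketch below compares a lightly loaded server running on a supply sized for the series maximum against one running on a right-sized unit. The 60W load comes from the example above; the efficiency figures and round-the-clock operation are illustrative assumptions, not vendor data.

# Rough estimate of the energy wasted by an oversized power supply.
# The 60 W load comes from the article's example; the efficiency figures
# below are illustrative assumptions, not measured data.

HOURS_PER_YEAR = 24 * 365

def ac_input(dc_load_w: float, efficiency: float) -> float:
    """AC power drawn from the wall for a given DC load and supply efficiency."""
    return dc_load_w / efficiency

oversized_eff = 0.65   # assumed efficiency of a large supply at a 60 W load
rightsized_eff = 0.85  # assumed efficiency of a supply sized for roughly 60-100 W

load_w = 60.0  # DC load of the lightly configured server (from the example)

waste_oversized = (ac_input(load_w, oversized_eff) - load_w) * HOURS_PER_YEAR / 1000
waste_rightsized = (ac_input(load_w, rightsized_eff) - load_w) * HOURS_PER_YEAR / 1000

print(f"Oversized supply:   ~{waste_oversized:.0f} kWh per year lost as heat")
print(f"Right-sized supply: ~{waste_rightsized:.0f} kWh per year lost as heat")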

2. Efficiency

The efficiency of a power supply is calculated by dividing the DC power it delivers by the AC power it draws. A supply that produces the same DC output from less AC input wastes less energy. The formula is as follows:

Efficiency = (output DC) / (input AC)

For example, if a power supply takes in 300W and outputs 200W, the calculation is simple: just divide 200 by 300.

200/300=66%

The higher the efficiency, the better. In this example, 66% is not good; generally speaking, a high-quality power supply is 75% to 85% efficient. The remaining 34% in this example (100W) is not simply lost: it is converted into heat, which you then have to spend money to remove.
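The calculation is easy to automate. The small Python helper below restates the formula and the 300W in, 200W out example; the heat figure is simply input minus output.

def psu_efficiency(dc_out_w: float, ac_in_w: float) -> float:
    """Efficiency = DC output divided by AC input."""
    return dc_out_w / ac_in_w

def heat_loss_w(dc_out_w: float, ac_in_w: float) -> float:
    """Power that ends up as heat instead of useful DC output."""
    return ac_in_w - dc_out_w

# The example above: 300 W of AC in, 200 W of DC out.
print(f"Efficiency: {psu_efficiency(200, 300):.1%}")   # about 66.7%
print(f"Heat: {heat_loss_w(200, 300):.0f} W")          # 100 W the cooling system must remove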

When power supply efficiency and system cooling are both taken into account, the power that actually goes into useful computing may be less than 50% of the total power used to drive the system. That means more than half of the energy consumed becomes a bottomless pit that drains the return on investment (ROI), and this added operating cost brings no benefit to customers.
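To see how the fraction can fall below 50%, here is a back-of-the-envelope sketch. The 75% supply efficiency sits in the range quoted above; the cooling overhead figure is an assumption chosen only to illustrate the arithmetic.

# Back-of-the-envelope check of the "less than 50%" claim.
# Assumptions: a 75% efficient supply, and cooling that draws an extra
# 0.6 W at the facility level for every 1 W a server draws (illustrative only).

server_ac_w = 400.0      # AC power drawn by one server (assumed)
psu_efficiency = 0.75    # fraction of that AC that reaches the components
cooling_overhead = 0.6   # extra facility watts per server watt (assumed)

useful_dc_w = server_ac_w * psu_efficiency
total_facility_w = server_ac_w * (1 + cooling_overhead)

print(f"Useful power fraction: {useful_dc_w / total_facility_w:.0%}")  # 0.75 / 1.6, about 47%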

Machines of different models and uses can use different types of power supplies.

3. Load

In terms of efficiency, servers are least efficient when they are idle. That's not to say they use more electricity; they just use it less efficiently. It is like a car idling in the driveway: the engine burns fuel without taking you anywhere, whereas once you pull out and eventually reach your destination, the proportion of energy wasted is smaller than when the car just sat idling. Now think about a server that is minimally configured but fitted with a large power supply: that supply can deliver far more power than is ever put to use.
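One way to picture this is to interpolate an efficiency-versus-load curve. The sketch below uses assumed curve points, loosely patterned on typical published supply curves rather than any figures from this article, to show how a large supply serving a small load lands on the worst part of the curve.

# Sketch of load-dependent power supply efficiency. The curve points are
# assumed for illustration, not taken from the article or a datasheet.

CURVE = [(0.05, 0.60), (0.20, 0.80), (0.50, 0.87), (1.00, 0.84)]  # (load fraction, efficiency)

def efficiency_at(load_fraction: float) -> float:
    """Linearly interpolate efficiency between the assumed curve points."""
    pts = sorted(CURVE)
    if load_fraction <= pts[0][0]:
        return pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if load_fraction <= x1:
            return y0 + (y1 - y0) * (load_fraction - x0) / (x1 - x0)
    return pts[-1][1]

rated_w = 800.0  # a large supply on a lightly loaded server (assumed)
for dc_load_w in (40.0, 200.0, 400.0):
    frac = dc_load_w / rated_w
    print(f"{dc_load_w:>5.0f} W load ({frac:.0%} of rating): about {efficiency_at(frac):.0%} efficient")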

Manufacturers are trying to address this problem, but as we have seen, it requires more upfront investment, especially when it means using expensive recyclable materials.

4. Redundancy

Data centers often have redundant power supplies for obvious reasons: if one power supply fails, there are others to keep the servers running. In fact, in large data centers, the power supplies are on different power grids, so if the AC power fails on one grid, the other power supplies and servers will continue to work.

Redundant power supplies are good for keeping equipment up and running, but not for energy efficiency. For example, if a server needs 200W to run but has a single 800W power supply, it uses only 25% of that supply's capacity. Add a redundant supply and the load is split between the two units, so each delivers only 100W. The supplies then drop from 25% load and 83% efficiency to 12.5% load and 65% efficiency.
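The example can be worked through directly. The sketch below uses the 200W load, the 800W supplies, and the 83% and 65% efficiency figures given above to show how much extra heat the redundant configuration produces for the same useful work.

# Working through the redundancy example. The 200 W load, 800 W supplies,
# and the 83% / 65% efficiency figures are the ones given in the text.

server_dc_w = 200.0
psu_rated_w = 800.0

def show_case(n_supplies: int, efficiency: float) -> None:
    load_per_psu_w = server_dc_w / n_supplies
    load_fraction = load_per_psu_w / psu_rated_w
    ac_total_w = server_dc_w / efficiency    # total AC drawn from the wall
    heat_w = ac_total_w - server_dc_w        # lost as heat
    print(f"{n_supplies} supply/supplies: {load_fraction:.1%} load each, "
          f"{efficiency:.0%} efficient, {ac_total_w:.0f} W AC in, {heat_w:.0f} W of heat")

show_case(1, 0.83)  # single supply: 25% load
show_case(2, 0.65)  # redundant pair: 12.5% load each, more heat for the same work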

The problems in data centers all boil down to a question of balance. Which is more important, system availability or reducing energy usage and costs?
