
48 V: the new standard for high-density, power-efficient data centers

Data centers are leading the shift from 12-V to 48-V power distribution, cutting energy losses by a startling 30%

ROBERT GENDRON
Vice President, Vicor
www.vicorpower.com

Recent announcements by Google at the OpenPOWER Summit and the Open Compute Project (OCP) U.S. Summit promote 48-V server and distribution infrastructure as the new standard for data center power. The evolution from legacy 12-V server racks to 48-V racks is expected to reduce energy losses by over 30%, highlighting the clear trend toward high-density data center racks that improve energy conservation. This development has forced a rethink of data center energy use and power delivery from site entry to processor.

Data load trends

Global IP traffic, which is expected to reach 72 exabytes/month when 2015 figures are tallied, is on track for a five-year compound annual growth rate (CAGR) of 23% from 2014 to 2019 (see Fig. 1 and Reference 1). By comparison, this is more than four times the projected CAGR for global economic output over the same period (see Reference 2).


Fig. 1: Global IP traffic for 2014 and estimates for 2015 through 2019. Data source: Cisco Systems. Graphic courtesy JAS Technical Media; used by permission.  

Although mobile traffic accounts for the smallest fraction of the global total, its growth rate is 2 ½ times that of both the total and the fixed-IP fraction and 4 ⅓ times that of the managed-IP portion. If current forecasts hold, mobile traffic should exceed fixed IP by 2020, driven largely by video-content demand and the expanding availability of high-speed 4G/LTE mobile networks. Also noteworthy is that mobile-device traffic offloaded to Wi-Fi networks does not appear in the mobile-traffic statistics, but rather as a (growing) portion of the fixed-IP fraction (see Reference 3).

Internet of Things (IoT) deployments have yet to appear in traffic projections. However, Gartner forecasts the IoT installed base will exceed 20 billion nodes by 2020 (see Reference 4). Even if IoT applications mostly involve relatively short data transfers, the sheer number of transactions may register in server-load statistics disproportionately to their traffic volume.

Data center trends

To accommodate this growing traffic demand, data center operators aren’t just replicating existing server arrays. Instead, they find economic advantage in embracing technologies that increase IT functional density and, as a direct result, technologies that deliver greater power and cooling densities.

For example, a new Intel chip set provides sufficient density to deliver 50 kW to 60 kW of compute power in a standard 40-U to 50-U rack footprint (see Reference 5 ). This high level of compute density challenges traditional power sourcing, distribution, conversion, and backup schemes.

At the site level, many new large-scale data centers either supplement utility power with on-site solar- or wind-generation facilities or operate, in part, with large energy contracts from nearby renewable energy providers. In fact, site selection criteria for large data centers now include assessment of nearby renewable energy resources. In addition to solar and wind, these can include geo-thermal or hydropower — the preferred power source for energy-intensive industries in the past, such as chemical processing and aluminum refining.

Within the site, power distribution may be three-phase ac or, increasingly, 380 Vdc (HVDC), which eliminates phase-balance issues and simplifies connections to renewable energy and backup sources. HVDC power distribution also eliminates power-factor-correction (PFC) stages at each server’s supply and improves distribution efficiency. For example, Nippon Telegraph and Telephone (NTT) reported on a 4-MW HVDC rectifier system formed of eight 500-kW modules that improves the efficiency of HV power distribution by 20% (see Reference 6).
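The benefit of a shorter conversion chain can be sketched numerically. In this hedged illustration, the per-stage efficiencies are assumed round numbers chosen only to show the arithmetic; they are not the NTT figures cited above.

```python
# Sketch: fewer conversion stages means the per-stage losses compound less.
# All stage efficiencies below are assumed illustrative values.
def chain_efficiency(stage_effs):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    eff = 1.0
    for s in stage_effs:
        eff *= s
    return eff

# Legacy ac path: UPS double conversion, then a per-server PFC front end
ac_path = chain_efficiency([0.94, 0.95])
# 380 Vdc path: one shared rectifier feeding servers directly
hvdc_path = chain_efficiency([0.97])
```

With these assumed numbers, the ac path delivers roughly 89% of input power to the server, while the HVDC path delivers about 97%; the exact gap depends entirely on the real stage efficiencies.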

Transitions from site-wide to rack-level facilities

High-density server racks stress the limits of some traditional site-wide services. For example, at these densities, traditional floating-floor forced-air cooling is less practical than well-known but previously less-deployed alternatives, such as water-cooled rear-door heat exchangers (RDHXs) (see Reference 7). RDHXs are gaining popularity for their high capacity, efficiency, and scalability, and for their ability to monitor and modulate cooling performance for individual racks in response to the contained servers’ workload.

HVDC power distribution within the site, coupled with increased per-rack dissipation, invites replacing centralized backup power with a distributed model. For example, many new designs locate backup battery stacks within each rack. As with RDHX cooling, this transition from a site-wide facility to a rack-level feature further establishes the rack as a self-contained unit of compute, which enhances scalability, increases reliability, and improves energy efficiency. This arrangement also eliminates centralized UPS units and their conversion losses while confining any fault in the backup power system to, at most, an individual rack.

Server rack power trends

The strong trend in large electronic system-level power distribution crosses several sectors, including telecom, data centers, industrial, aero, and lighting: 48 V is replacing 12 V as the preferred low-voltage power distribution standard for new designs. Google’s recent donation of a 48-V data center rack specification to the Open Compute Project affirms this trend and speaks to the clear efficiency benefits that Google has achieved with 48 V.

As per-rack dissipation has increased, current draws have come to exceed the capacity of 12-V in-rack distribution feeds. A 2013 NTT study shows that the practical limit for 12-V lines is about 2.5 kW/m. The same study suggests that 48-V feeds can supply up to 30 kW/m. Although the motivations to move to 48-V system-level power distribution are similar across the several sectors previously mentioned, data centers are perhaps under the greatest pressure to do so. The rate of conversion from 12-V to 48-V in-rack power, by way of either new installation or retrofit, appears to have passed an inflection point and continues to accelerate.
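The pressure on 12-V feeds comes straight from Ohm's law: for the same power, a bus at one-fourth the voltage must carry four times the current. A minimal sketch, using an assumed 15-kW rack load rather than any figure from the NTT study:

```python
# Sketch: feed current a rack bus must carry at 12 V vs 48 V.
# The 15 kW load is an assumed example, not data from the cited study.
def feed_current(power_w: float, bus_v: float) -> float:
    """Return the dc bus current (A) for a given load power (W)."""
    return power_w / bus_v

i_12 = feed_current(15_000, 12.0)  # 1250 A -- an impractical copper budget
i_48 = feed_current(15_000, 48.0)  # 312.5 A -- one-fourth the current
```

At 12 V the bus must carry over a kiloamp, which is what drives the per-meter power limits the NTT study reports.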

The emergence of 48-V in-cabinet power is strategic for several reasons. 48 V is the highest nominal distribution voltage that can meet Safety Extra-Low Voltage (SELV) standard requirements while leaving sufficient margin for over-voltage protection (OVP) circuits. Systems powered at SELV potentials provide multiple savings over systems requiring higher distribution voltages: they are space-efficient, avoiding the creepage and clearance spacing that safety standards require between devices and PCB traces connecting directly to the supply and those servicing lower-voltage circuits. They use compact and comparatively inexpensive connectors and avoid the insulating panels necessary in non-SELV installations to keep technicians from accidentally touching energized hazardous potentials. They also eliminate the training and certification requirements that maintenance and operating personnel must meet for access to non-SELV circuits.

Two key system-level benefits for 48-V in-rack distribution are in copper utilization and distribution losses: for a given power level and bus cross-section, 48-V systems reduce distribution-bus losses by 94% compared to 12-V designs. Alternatively, for a given cross-section, a 48-V distribution bus can deliver four times the power of a 12-V system with the same bus loss. In practice, designs tend to split the savings between lower I²R losses and lower cabling costs. In some applications, such as those in the transportation sector, the lower weight corresponding to reduced cabling requirements brings additional operational savings.
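The 94% figure follows from the loss scaling: with the same delivered power and the same copper cross-section (hence the same bus resistance R), current scales as 1/V and loss as I²R. A short check, using an assumed 10-kW load and 1-mΩ bus resistance:

```python
# Check of the 94% claim: quadrupling the bus voltage quarters the current,
# so I^2*R loss falls by a factor of 16, i.e. a 93.75% (~94%) reduction.
# The 10 kW load and 1 mOhm bus resistance are assumed round numbers.
def bus_loss_w(power_w: float, bus_v: float, r_ohm: float) -> float:
    """I^2 * R dissipation in the distribution bus."""
    i = power_w / bus_v
    return i * i * r_ohm

loss_12 = bus_loss_w(10_000, 12.0, 0.001)
loss_48 = bus_loss_w(10_000, 48.0, 0.001)
reduction = 1.0 - loss_48 / loss_12  # 0.9375, i.e. roughly 94%
```

The same factor of 16 read the other way gives the alternative stated above: at equal bus loss, the 48-V bus carries four times the power.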

Server power trends

Power dissipation on server motherboards has increased 60% in roughly four years, despite significant efforts across device, process, and IC-design engineering disciplines to improve the energy efficiency of computing resources. Similar to the case of 12-V power distribution within the rack, high processor-current demand is driving the traditional multi-phase power converter — the go-to architecture for the last three decades — to its practical limits.

Even as processor supply currents have continued to rise, the server motherboard has become increasingly crowded near the CPU, constricting layouts and further complicating thermal designs. For example, advanced processors feature four memory channels split between the so-called east and west edges of the processor socket assembly. As a result, the region on the server motherboard south of the processor, historically set aside for the processor’s power converter and associated routing, is now a narrower corridor.

The region, bordered on both sides by high-speed traces servicing memory DIMMs, is now a common area for layout faults due to interference between the power and memory subsystems. Even board designers equipped with parasitic extraction software have run into difficulty with layouts in this area. Challenges accrue from the speed of the signals involved; the high trace density; and, with multi-phase converters, emissions from the large number of passive components in the area.

Additionally, there appears to be little opportunity to increase multi-phase converters’ output current more than incrementally except by adding phases. Within the available layout space, designs can only do this with de-optimized layouts, which exhibit poorer load regulation and greater energy losses, adding to an already challenging thermal environment. Similarly, further improvements in controllers, power MOSFETs, and passive components will likely only modestly improve the topology’s power density. Given the space constraints, particularly in applications exploiting four-channel memory, this approach is no longer as practical as alternative power-subsystem architectures.

For example, Vicor’s Factorized Power architecture for 48-V direct-to-load applications uses a simple combination of a PRM (pre-regulator module) and a VTM (voltage-transformation module). This two-chip, single-stage conversion from 48 V directly to the CPU provides efficiencies equal to those of traditional multi-phase designs converting only from 12 V to the CPU. This arrangement also forms a more flexible and, ultimately, more compact processor supply that solves many of the technical issues that challenge multi-phase designs (see Fig. 2).
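The system-level payoff is that the single 48-V-to-CPU stage replaces a two-stage chain (a 48-V-to-12-V intermediate bus converter followed by the 12-V multi-phase VR). A hedged sketch with assumed round-number stage efficiencies, not Vicor-published data:

```python
# Sketch: end-to-end efficiency of the legacy two-stage chain vs a
# single-stage 48 V direct-to-load path. All efficiencies are assumed
# illustrative values, not measured or published figures.
ibc_eff = 0.96       # 48 V -> 12 V intermediate bus converter (assumed)
mp_vr_eff = 0.90     # 12 V multi-phase VR to the CPU (assumed)
direct_eff = 0.92    # single-stage PRM + VTM path (assumed)

legacy = ibc_eff * mp_vr_eff  # stage losses compound: ~0.864
```

Even when the single stage is no more efficient than the legacy VR stage alone, eliminating the intermediate conversion removes one set of compounding losses.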


Fig. 2: A PRM and VTM form a direct 48-V-to-processor power subsystem that eliminates bulk capacitance and per-phase inductors. Source: Vicor.    

The VTM, a highly integrated 3D-packaged device, implements a SAC (sine-amplitude converter) topology, so its emissions are low and narrow-band compared to those of multi-phase switches and their associated inductors. It also provides greater power density than multi-phase designs, with the single VTM replacing six multi-phase switch stages. The VTM fits in a small footprint, well within the layout constraints of advanced processors supporting four-channel memory, without encroaching on the memory subsystem’s layout areas (see Fig. 3). The Factorized Power-based processor supply architecture allows the PRM to be placed remotely from the VTM, for example, nearer to the board edge and away from other heat sources and sensitive nodes (see Fig. 4). Removing the PRM from the thermal and electromagnetic environments adjacent to the processor and memory subsystem helps mitigate those limiting factors on server performance while also freeing up valuable real estate immediately around the CPU.


Fig. 3: Only the VTM resides adjacent to the processor socket, moving much of the power subsystem’s dissipation away from the processor. Source: Vicor.


Fig. 4: The ability to locate the PRM remotely from the VTM reduces the power-subsystem’s contribution to the thermal and EMI environments adjacent to the processor and memory subsystem. Source: Vicor.    

High-density servers have pushed traditional support technologies to the limits of their scalability. Compared to traditional 12-V systems, 48-V power feeds reduce capital and operating costs and improve scalability at the rack level. On the server motherboard, a power scheme like the VTM-based 48-V direct-to-processor approach is a compelling alternative to the traditional multi-phase converter, enabling high-efficiency, high-density designs while also providing flexibility in how designers distribute heat and EMI sources to minimize their effects on processor and memory subsystems. Additionally, VTM-based processor power schemes remain scalable in dense computing environments where multi-phase designs are quickly reaching their limits.

References:

  1. Cisco Visual Networking Index: Forecast and Methodology, 2014-2019, White paper, Cisco Systems, 2015.
  2. World Economic Outlook: Recovery Strengthens, Remains Uneven, International Monetary Fund, Apr 2014.
  3. Andrae, Anders and Peter Corcoran, Emerging Trends in Electricity Consumption for Consumer ICT, National University of Ireland, Galway, 2013.
  4. Gartner Says 6.4 Billion Connected “Things” Will Be in Use in 2016, Up 30 Percent From 2015, Gartner, 10 Nov 2015.
  5. Seaton, Ian, contributing author, Top Data Center Trends and Predictions to Watch for in 2016, Upside Technologies, 9 Dec 2015.
  6. Tanaka, Toru, HVDC power supply system implementation in NTT Group and next generation power supply, NTT Energy and Environment Systems Laboratories, Nippon Telegraph and Telephone Corporation, Feb 2015.
  7. Data Center Rack Cooling with Rear-Door Heat Exchanger – Technology Case-Study Bulletin, Federal Energy Management Program, US Department of Energy, June 2010.  
