Infineon Technologies AG recently unveiled a roadmap for energy-efficient power supply units (PSUs) that specifically addresses the current and future energy requirements of AI server and data-center applications. The addition of new 8-kW and 12-kW PSUs will help further increase energy efficiency in future AI data centers. Infineon claims its 12-kW technology is the industry's first PSU to achieve the higher energy efficiency, power density and reliability necessary for future data centers.
The training of large language models (LLMs) with billions, and possibly trillions, of parameters represents a major challenge for data centers globally. Future AC/DC server power supplies will require power ratings above the 5.5-kW level now specified by Open Compute.
A recent Goldman Sachs research report puts it into perspective, predicting that AI is poised to drive a 160% increase in data-center power demand. According to the report, “On average a ChatGPT query needs nearly 10 times as much electricity to process as a Google search. In that difference lies a coming sea change in how the U.S., Europe and the world at large will consume power—and how much that will cost.”
According to Matthias Kasper, lead principal engineer, power operated systems at Infineon Technologies Austria: “The projected data-center consumption in 2030 will be 7% of the global electricity consumption, comparable to India’s current electricity consumption. This trend is driven by training AI models that need a vast amount of energy, for example, to train the GPT-3 LLM. The need for 8-kW and 12-kW power supplies is directly linked to the fact that the power consumption of AI training doubles every three to four months.”
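Kasper's growth figure compounds quickly, which is worth making concrete. The sketch below is a back-of-envelope illustration using only the doubling cadence quoted above; the one-year horizon is chosen for illustration and is not from the article.

```python
# Illustrative arithmetic for the quoted claim that AI-training power
# consumption doubles every three to four months. The doubling periods
# come from the article; the 12-month horizon is an assumption.

def growth_factor(months: float, doubling_period_months: float) -> float:
    """Power multiplier after `months`, given one doubling per period."""
    return 2.0 ** (months / doubling_period_months)

# Over one year, a 3-month cadence means 2^4 = 16x the power draw,
# while a 4-month cadence still means 2^3 = 8x.
print(growth_factor(12, 3))  # 16.0
print(growth_factor(12, 4))  # 8.0
```

Even at the slower end of the quoted range, demand grows by nearly an order of magnitude per year, which is why power ratings per PSU must climb in step.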
The need for higher power ratings, from 800 W to 5.5 kW and beyond, is driven by the growing power requirements of graphics processing units (GPUs), which currently draw up to 1 kW per chip and will require 2 kW and beyond by 2030. This will lead to higher overall energy demand for data centers.
The technology behind the roadmap
At the heart of the PSUs is the integration of three semiconductor materials—silicon (Si), silicon carbide (SiC) and gallium nitride (GaN)—into a single module. “The superior device characteristics of WBG [wide band-gap] semiconductors—SiC and GaN—allow us to operate at higher switching frequencies for reducing the size of the power supplies and pushing more power into the same form factor,” Kasper explained.
Infineon’s new PSUs also contribute to efforts to limit the CO2 footprint of AI data centers despite rapidly growing energy requirements. This is made possible by the high level of efficiency that minimizes power losses.
“It can be shown that a holistic approach, considering the best combination of topology, control concept and semiconductor technology, is required to push the performance boundaries of existing solutions,” he added.
Claiming unprecedented PSU performance classes, Infineon enables cloud data-center and AI server operators to reduce both system power consumption and the energy spent on cooling. The accompanying reduction in CO2 emissions comes with lower lifetime operating costs.
The new generation of PSUs achieves an efficiency of 97.5% while meeting the most stringent performance requirements. The new 8-kW PSU supports AI racks with an output of 300 kW and more. Power density increases to 100 W/in³, compared with 32 W/in³ in current 3-kW PSUs, yielding additional benefits in system size and cost.
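The power-density figures imply a striking volume comparison, sketched below. The densities and power ratings are from the article; the volume arithmetic is an illustration, not a product specification.

```python
# Volume implied by the article's power-density figures: 100 W/in^3 for
# the new 8-kW PSU versus 32 W/in^3 for current 3-kW units. Ratings and
# densities are from the article; the comparison is illustrative only.

def psu_volume_in3(power_w: float, density_w_per_in3: float) -> float:
    """Enclosure volume implied by a power rating and a power density."""
    return power_w / density_w_per_in3

new_8kw = psu_volume_in3(8000, 100)     # 80.0 in^3
current_3kw = psu_volume_in3(3000, 32)  # 93.75 in^3

print(new_8kw, current_3kw)
```

In other words, the 8-kW unit delivers roughly 2.7 times the power in a slightly smaller volume than today's 3-kW PSU, which is where the system-size and cost savings come from.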
Given that the training of AI models is hardware-dependent, and any server downtime means restarting training from the last checkpoint, power delivery is critical. Two separate AC feeds to the servers provide N+N redundancy, so that a failure of a power supply or a dropped AC connection will not interrupt training.
The 8-kW PSUs will be available in the first quarter of 2025. For the 12-kW PSUs, mechanical demonstrators will be available in October 2024, with the first measurement results beginning in 2025.