By Marc Cram, Server Technology
Artificial intelligence: It’s becoming more ubiquitous than the cloud. AI is no longer confined to data centers, nor to the heads of MIT Python programmers. It can be found at our local supermarkets in the form of self-directed robots roaming the aisles, embedded in airport security cameras using convolutional neural networks (CNNs), on the fireplace mantel in the form of Amazon’s Alexa, and even in the back pocket of our Lucky jeans as Apple’s Siri virtual assistant recommends good sushi within 10 miles.
It has invaded our lives so quickly because AI software can learn from experience or training and apply that knowledge to future scenarios. In other words, it can perform human-like tasks while adapting to changing environments over time. Regardless of what the AI interface is — a supermarket robot with googly eyes or a sultry Australian voice pointing us to a good bar — the workload needs to be processed on some type of CPU- or GPU-based system, such as Nvidia’s DGX.
Because there are so many different types of AI, there is no one best-fit piece of hardware to process the workloads. As Intel’s Naveen Rao puts it, “Customers are discovering that there is no single ‘best’ piece of hardware to run the wide variety of AI applications because there is no single type of AI.” Regardless of what piece of hardware is selected for an AI application, there is most certainly one common denominator: Every one of the devices processing workloads needs to have power.
Data for dollars
For IT managers, AI is having a considerable impact on calculating the power needed for servers. The average power consumption for a server rack is typically 7 kW; AI applications, however, can use more than 30 kW per rack. The increase is driven by AI’s heavier processor utilization: applications running on GPU-based servers can draw roughly twice as much power per chip.
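The back-of-the-envelope math is straightforward. A minimal sketch in Python, using assumed per-server figures (not vendor specs), shows how a rack crosses from the typical 7 kW into AI territory:

```python
def rack_power_kw(servers_per_rack, watts_per_server):
    """Total rack draw in kilowatts."""
    return servers_per_rack * watts_per_server / 1000.0

# Illustrative figures only (assumed, not vendor specs): a 1U/2U
# CPU server at roughly 350 W versus a GPU training server at 3,000 W.
cpu_rack = rack_power_kw(20, 350)   # 20 CPU servers -> 7.0 kW
gpu_rack = rack_power_kw(10, 3000)  # 10 GPU servers -> 30.0 kW
```

Numbers like these are why a rack refresh for AI usually forces a conversation about circuit capacity and PDU selection, not just server selection.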
Of course, you can try to move data around if your facility is power-constrained, but this could require a lot more energy than it takes to process the data, and power drain is always linked to the volume of data in motion. Simply put, there is a high cost associated with data transfers.
For example, although there are many cloud providers (AWS, Azure, Google, etc.) to choose from, many companies experience a hidden cost associated with complex data transfer prices. A plausible solution to this dilemma is not to move the data but to process it much closer to the point of origination — creating yet another need for edge computing.
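To see why transfer charges add up, consider a rough estimate. The rate below is an assumption for illustration only; real cloud egress pricing is tiered and varies by provider, region, and destination:

```python
def egress_cost_usd(gigabytes, usd_per_gb):
    """Cost of moving data out of a cloud region at a flat rate."""
    return gigabytes * usd_per_gb

# Assumed flat rate of $0.09/GB for illustration.
monthly_gb_moved = 50 * 1000          # 50 TB of data leaving the cloud
cost = egress_cost_usd(monthly_gb_moved, 0.09)  # thousands of dollars/month
```

A recurring bill on that scale is exactly the pressure that pushes processing toward the edge, close to where the data originates.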
Solving big problems with little networks
Edge computing is poised to become the anchor of yet another ubiquitous and much-anticipated data advancement known as 5G. The 1-ms latency times expected from 5G require the support of many distributed processing zones (aka edge networks). Edge-based servers need to be located closer to individuals streaming 4K (and eventually 8K) video, as well as to applications such as cameras that monitor pedestrian and traffic movements in smart cities — all without sending that data to the cloud.
In addition, edge networks will be required to support autonomous vehicles. At the heart of this elegant tapestry of new data streams will be AI processing information in near-real time.
For the time being, all AI applications run on silicon-based computational hardware, whether that is in a custom chip built into your smartphone, in an FPGA deployed inside an edge computing server, or in a purpose-built AI-focused system such as the DGX from Nvidia that is accessed through a public cloud.
And that hardware all requires power, be it DC (like a battery) or AC (from the grid). It also requires connectivity between physical systems, whether over copper or fiber data connections.
Power loss is AI’s kryptonite
AI seemingly holds unprecedented value for humankind. However, when confronted with a power loss, it breaks down and ceases to function — much like Superman’s reaction to kryptonite. A solution that insulates AI from this energy nemesis and mitigates the risk of breakdown is to ensure a reliable power flow.
Switched PDUs with Per Outlet Power Sensing capabilities will enable edge data centers to maximize uptime for all those AI-focused applications requiring near-real-time processing. In this environment, it’s extremely important to select a reliable PDU to power AI workloads for traffic control signals and repeater stations used by first responders to provide dispatch and emergency digital communications to ambulance, fire, and law enforcement services.
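Per-outlet sensing makes this kind of monitoring scriptable. The sketch below is purely illustrative (the outlet names, readings, and threshold are hypothetical, not a real PDU API), but it shows the idea of flagging an outlet whose draw has dropped:

```python
# Hypothetical per-outlet readings (in watts) from a switched PDU with
# per-outlet power sensing; outlet names and values are illustrative.
readings = {"outlet_1": 210.0, "outlet_2": 0.0, "outlet_3": 185.5}

def flag_low_outlets(readings, low_watts=5.0):
    """Return outlets whose draw has dropped, e.g. a failed or unplugged load."""
    return [name for name, watts in sorted(readings.items()) if watts < low_watts]

alerts = flag_low_outlets(readings)  # flags outlet_2
```

In practice, a management system polls the PDU for readings like these and raises an alarm, or power-cycles the affected outlet remotely, before a technician is dispatched.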
Server Technology’s switched PDUs
Given these mission-critical circumstances, smart cities need smart power devices to keep their 5G services flowing. Remotely monitored and managed data-center PDUs can also help track the temperature and environmental conditions within the cabinet.
Back in data centers and colocation facilities, where folks are swapping out CPU-based servers for the more aggressive AI processing abilities of GPU systems, a scalable rack-mount PDU comes in handy when a denser mix of C13 and C19 outlets is required on the way to building 30-kW racks.
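Planning that outlet mix can be sketched as a simple inventory check. The cord counts below are assumptions for illustration, not wiring guidance:

```python
def outlets_needed(cpu_servers, gpu_servers, cords_per_gpu_server=2):
    """Rough outlet-mix estimate for a mixed rack.

    Assumptions for illustration: each CPU server takes one C13 cord,
    while a GPU server draws enough to warrant C19 outlets, often
    with more than one cord per chassis.
    """
    return {"C13": cpu_servers, "C19": gpu_servers * cords_per_gpu_server}

mix = outlets_needed(cpu_servers=8, gpu_servers=6)  # {'C13': 8, 'C19': 12}
```

A PDU whose outlet complement can grow with that ratio spares the operator a forklift upgrade when the next tray of GPU servers arrives.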
Information’s family tree is rooted in the soil of progress
It all started with the desire to gather information verbally: details to help make decisions. Not until Royal Earl House invented the printing telegraph in 1846 did information require power, in the form of a hand crank. Ever since then, power and information have been intertwined.
Today, Summit, the world’s most powerful supercomputer, located at Oak Ridge National Laboratory, uses almost 30,000 powerful graphics processors to run deep-learning algorithms that help us address problems such as climate change — at upward of a billion billion mixed-precision operations per second. We are truly at the dawn of a new age in which AI will power our lives — but not before we power AI. The choice of scalable and reliable power sources is paramount to ensuring that AI delivers on its lofty promises.
Marc Cram is director of new market development for Server Technology, a brand of Legrand (@Legrand). A technology evangelist, he is driven by a passion to deliver a positive power experience for the data-center owner/operator. He earned a bachelor’s degree in electrical engineering from Rice University and has more than 30 years of experience in the field of electronics. Follow him on LinkedIn or @ServerTechInc on Twitter.