
Data center com links save 65 MWh/year

Fast fiber-optic connections save significant energy

BY STEVE SHARP
Corporate Marketing Manager
Avago Technologies
www.avagotech.com

As the demand for online media and applications in a cloud-computing environment has increased, higher bandwidth is required for communications within data centers, server farms, network switches, telecom switching centers and many other high-performance applications. Parallel optical modules are seeing increased use because of their ability to deliver much higher port density per rack and therefore much higher overall bandwidth. The use of 10-Gbit/s parallel optics can also significantly reduce power consumption in the data center and reduce the cooling needed, which further reduces cost of operation.

Power savings

Parallel optical modules require less power per equivalent 10G port than discrete single-channel modules such as 10G SFP+ optics, because a single ASIC in the parallel module supports 4 or 12 optical lanes. Power per lane for a 12-channel module, such as a pluggable CXP, is only one quarter that of 12 single-channel SFP+ modules. Figure 1 illustrates the power savings when moving from SFP+ to 4-channel QSFP+ or 12-channel CXP modules.
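As a rough illustration of the relationship behind Fig. 1, the short Python sketch below compares power per 10G lane for the three module types. The ~1-W SFP+ figure and the parallel-module totals are assumed values, chosen only to be consistent with the article's one-quarter-the-power-per-lane claim and the 10-kW example that follows; they are not datasheet numbers.

```python
# Sketch of the per-lane power comparison in Fig. 1.
# All wattages below are assumptions for illustration, not datasheet values.

modules = {
    # name: (assumed total module power in watts, number of 10G lanes)
    "SFP+":  (1.0, 1),
    "QSFP+": (1.5, 4),
    "CXP":   (3.0, 12),
}

for name, (power_w, lanes) in modules.items():
    per_lane = power_w / lanes
    print(f"{name:6s} {lanes:2d} lane(s)  {per_lane:.2f} W per 10G lane")
```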


Fig. 1: Power per 10G lane vs. number of lanes.

To further illustrate the power savings, consider a hypothetical large data center with 10,000 servers. If those servers all have 10-Gbit/s links to the switches and routers, they need 10,000 optical ports. With SFP+ modules, the power dissipated in just those modules would be about 10 kW; using 12-channel parallel modules instead reduces that to roughly 2.5 kW. Over a month, the 7.5-kW difference saves more than 5,400 kWh of energy, enough to supply more than five typical single-family homes, and over a year it comes to roughly 65 MWh, the savings cited in the headline. Moving the rack-to-rack connections from single-lane to parallel-lane optics as well would yield significant additional power savings.
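The arithmetic behind this example is easy to reproduce. The sketch below assumes ~1 W per SFP+ port and one quarter of that per parallel-optics lane, as in the article; the ~900 kWh/month household figure is an assumed typical value used only to recover the "five homes" comparison.

```python
# Back-of-the-envelope check of the 10,000-server example above.
# Per-port wattage and household consumption are assumptions, not measured data.

PORTS = 10_000
SFP_PLUS_W_PER_PORT = 1.0                  # assumed SFP+ module power
PARALLEL_W_PER_PORT = SFP_PLUS_W_PER_PORT / 4  # one quarter per lane, per the article

sfp_total_kw = PORTS * SFP_PLUS_W_PER_PORT / 1000        # ~10 kW
parallel_total_kw = PORTS * PARALLEL_W_PER_PORT / 1000   # ~2.5 kW
savings_kw = sfp_total_kw - parallel_total_kw            # ~7.5 kW

monthly_kwh = savings_kw * 24 * 30                       # ~5,400 kWh per month
annual_mwh = savings_kw * 24 * 365 / 1000                # ~65.7 MWh per year

print(f"SFP+ total:      {sfp_total_kw:.1f} kW")
print(f"Parallel total:  {parallel_total_kw:.1f} kW")
print(f"Monthly savings: {monthly_kwh:,.0f} kWh")
print(f"Annual savings:  {annual_mwh:.1f} MWh")
print(f"Homes supplied:  {monthly_kwh / 900:.1f} (at ~900 kWh/month each)")
```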

Cooling efficiency

In the same way that host ICs can be positioned for optimal thermal management, embedded optical modules provide flexibility in placement that eases thermal engineering. A typical layout for a high-capacity 1RU top-of-rack switch with edge-mounted optics is shown in Fig. 2.


Fig. 2: High-density data center switch showing air cooling.

Cool air drawn in through the rear of the box is heated as it flows over switch ICs and other circuit components before passing over the optical modules mounted on the front panel, where it exhausts through perforations in the faceplate. This preheating of the air by high-power-dissipation ICs presents significant thermal challenges for the edge-mounted optics and can limit system density.

Embedded optical modules, mounted on the PCB, can be positioned so that they are not subjected to preheated air, and because MTP adapters consume less faceplate area than edge-mounted optics, more of the faceplate is left open for air exhaust. Higher air velocity through the switch and correspondingly improved fan efficiency are further potential benefits.


Fig. 3: PCB-mounted MiniPOD modules can be optimally located.

To understand and quantify the practical benefits of an embedded-optics implementation, a thermal simulation comparison was made between a system populated with low-power Avago CXP modules on the front panel and one populated with mid-board-mounted Avago MiniPOD modules, similar to the layout illustrated in Fig. 3. Both implementations offered the same optical communication capacity and dissipated the same amount of power, but the PCB-mounted embedded modules were optimally located near the air inlet. In these simulations the MiniPOD modules operated up to 13°C cooler than their front-panel-mounted CXP equivalents. That thermal margin can translate into better device reliability or reduced cooling cost, since less cooling is required.

Parallel optical products for data centers

Examples of parallel optical products for data center applications include the recently introduced AFBR-79EIDZ iSR4 QSFP+ 4-channel modules from Avago Technologies, along with the 12-channel AFBR-83PDZ CXP pluggable and AFBR-81uVxyZ/AFBR-82uVxyZ embedded MiniPOD modules. The QSFP+ iSR4 modules integrate four 10G lanes in each direction, and the MiniPOD modules deliver the highest front-panel density in the industry. For users needing a pluggable solution, the CXP transceivers cost half as much per 10-Gbit/s lane as standard pluggable SFP+ solutions. Both the CXP and MiniPOD modules operate at 25% of the power per 10-Gbit/s lane of SFP+ modules. ■
