
Cost, performance, power efficiency for next-gen PoE

Careful design is necessary to minimize both the energy a system wastes and its cost

BY DANIEL FELDMAN
Microsemi, Analog Mixed Signal Group
Santa Clara, CA
http://www.microsemi.com

The industry is rapidly migrating from standard-power IEEE 802.3af to high-power 802.3at, with significantly different system design implications. The new IEEE 802.3at specification has the potential to dramatically improve the usefulness of power over Ethernet (PoE) by bringing the convenience and enhanced mobility of a universal power/data jack to portable devices such as notebook computers and remote security cameras. However, high-power PoE also requires a keen understanding of the engineering compromises that must be made to optimize cost and functional integration within an extremely tight overall system power budget.

There are special power considerations for IEEE 802.3at, depending on the platform, from midspans and splitters to switches and test equipment. This article will address these considerations from the perspective of a manufacturer whose expertise spans the full range from silicon to a wide variety of end products.

While a high level of integration was of paramount importance in earlier standard-power PoE designs and continues to be important in high-power implementations, the new high-power specification imposes significantly more challenging power-management issues. It requires careful consideration of which functions and what kinds of intelligence to integrate on-chip, using special circuit-design and other techniques to optimize the solution. These issues are unlike those that silicon manufacturers have faced when pursuing traditional integration paths. Perhaps more than any other networking technology, PoE demands close attention to power dissipation, and a unique chip- and system-level perspective on it, relative to the special characteristics of the various types of devices and platforms that must be powered, along with consideration of the total power budget for networks in which hundreds of devices may be powered.

Origins of PoE

Since the end of the 19th century, following the operating principle of Bell's 1875 telephone, power and data have been sent together on the same electrical cable. In the 1970s, communication systems developed with only data in mind, such as Ethernet (IEEE 802.3), had no provisions for power. At the end of the 20th century, with data rates becoming high enough to carry voice reliably in packets and with the evolution of VoIP protocols, it became clear that Ethernet had to adapt to make VoIP as simple to use and as reliable as traditional and digital telephony.

At the same time, wireless LAN protocols became sophisticated enough (again, with enough bandwidth) to replace wired Ethernet in some applications, requiring WLAN access points to be placed in strategic positions where ac power is not necessarily available. These two applications, VoIP and WLAN, prompted the IEEE 802.3 working group to create the IEEE 802.3af task force in 1999 to enable the transmission of power and data on the same CAT3 (or better) Ethernet cable.

802.3af apps and beyond

After four years of work, the IEEE 802.3af task force created the first PoE standard, which allowed 12.95 W to be supplied to powered devices. This was enough to power most of the target applications, including not only VoIP phones and WLAN access points, but also network cameras, embedded thin clients, barcode and RFID readers, access-control applications, and others.

Nevertheless, this low power limit prevented PoE from powering several higher-end devices on the market. These include video phones, multichannel access points, outdoor applications such as fiber-to-the-home optical network terminators, IEEE 802.16 subscriber stations, and even notebooks. To address these, in 2004 the IEEE 802.3 working group created the PoEPlus study group, which in 2005 became the IEEE 802.3at task force, with the goal of providing at least 30 W to devices powered over Ethernet cables. PoE is now geared toward transforming the RJ45 connector into the universal power socket. But is it that simple?

PoE efficiency, worst case

One of the least discussed aspects of PoE is overall system efficiency. To put things into perspective, PoE replaces a local power supply. Power is converted from 100–240 Vac to 44–57 Vdc at the output of the PoE power-sourcing equipment (PSE), delivered over a cable that can be 300 ft long, enters the powered device (PD) at 37 to 57 Vdc, and is then converted again to the several voltages at which the different circuits operate (5, 3.3, 2.5, 1.2, 0.9 V, etc.). See Fig. 1.

Fig. 1. From a system-efficiency point of view, having a voltage as high as possible at the PSE power supply is the best practice.

Thus the overall efficiency of a PoE system can be calculated as the product:

PSE power-supply efficiency
× PoE PSE circuit efficiency
× channel (cable, patch panel, and connectors) efficiency
× PoE PD circuit efficiency
× PD dc/dc efficiency
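As a quick numerical illustration, the chain is simply a product of the stage efficiencies. The following Python sketch uses the 56-V intermediate figures derived later in this article; the ac/dc supply efficiency (92%) and the PD dc/dc efficiency (90%) are assumed values for illustration only:

```python
# Overall PoE system efficiency as a product of stage efficiencies.
# The middle three values come from the 56-V example later in the article;
# the ac/dc (92%) and dc/dc (90%) figures are assumptions for illustration.
stages = {
    "PSE power supply (110 Vac -> 56 Vdc)":      0.92,   # assumed
    "PoE PSE circuit":                           0.993,
    "Channel (cable, patch panel, connectors)":  0.861,
    "PoE PD circuit":                            0.972,
    "PD dc/dc (46.5 V -> 3.3 V)":                0.90,   # assumed
}

overall = 1.0
for name, eff in stages.items():
    overall *= eff

print(f"Overall efficiency: {overall:.1%}")  # ~69% with these assumptions
```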

While at first glance this seems to be a system in which losses are inevitable and little can be done about them, the careful observer will note a major difference between the PSE and the PD. While the designer of the PD must assume a worst-case power conversion from 57 Vdc down to its lowest voltage (such as 3.3 V) for power-dissipation purposes, the PSE designer is free to determine the output voltage. Say, for example, that an IEEE 802.3at powered device requires exactly 29.52 W to operate and is placed at the end of a 300-ft CAT5 channel with a resistance of 12.5 Ω; the PSE circuit resistance is 0.65 Ω, the PD circuit resistance is 0.58 Ω plus a diode bridge, and the source is 110 Vac. The PSE designer has two options: a power supply with a 56-V minimum output or one with a 51-V output. In the 56-V system the current will be 616 mA, while in the 51-V system it will be 720 mA.

56-V system efficiency

= Eff(110→56 V) × 99.3% × 86.1% × 97.2% × Eff(46.54→3.3 V)

= Eff(110→56 V) × 83.1% × Eff(46.54→3.3 V)

51-V system efficiency

= Eff(110→51 V) × 99.1% × 82% × 96.5% × Eff(39.58→3.3 V)

= Eff(110→51 V) × 78.4% × Eff(39.58→3.3 V)

Thus the 56-V system is nearly 5 percentage points more efficient than the 51-V system. Note also the low overall resistance of only 0.65 Ω inside the PSE circuitry.
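For readers who want to reproduce the intermediate terms, the standalone sketch below recomputes the PSE-circuit, channel, and PD-circuit efficiencies from the stated resistances. The roughly 1-V total diode-bridge drop is an assumption chosen to be consistent with the figures above, and small rounding differences of a few tenths of a percent remain:

```python
def stage_efficiencies(v_pse, i, r_pse=0.65, r_cable=12.5, r_pd=0.58, v_bridge=1.0):
    """Approximate per-stage efficiencies of a PoE link.

    v_pse    -- PSE power-supply output voltage (V)
    i        -- loop current (A)
    r_pse    -- PSE circuit series resistance (ohm)
    r_cable  -- channel resistance: cable, patch panel, connectors (ohm)
    r_pd     -- PD circuit series resistance (ohm)
    v_bridge -- assumed total diode-bridge drop in the PD front end (V)
    """
    p_pse_supply = v_pse * i                        # power leaving the PSE supply
    p_into_channel = p_pse_supply - i**2 * r_pse    # after the PSE circuit
    p_at_pd_port = p_into_channel - i**2 * r_cable  # after the cable
    p_after_pd_circuit = p_at_pd_port - i**2 * r_pd - i * v_bridge

    return (p_into_channel / p_pse_supply,          # PSE circuit efficiency
            p_at_pd_port / p_into_channel,          # channel efficiency
            p_after_pd_circuit / p_at_pd_port)      # PD circuit efficiency

for v, i in ((56, 0.616), (51, 0.720)):
    pse, chan, pd = stage_efficiencies(v, i)
    print(f"{v} V @ {i*1000:.0f} mA: PSE {pse:.1%}, channel {chan:.1%}, PD {pd:.1%}")
```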

PoE efficiency, typical case

Another aspect of efficiency has to do with the power dissipated by large PoE power supplies when they are lightly loaded. If an IT manager buys, for example, a 48-port switch with full power per port (800 W), but uses it to power only 20 ports, the power supply may be operating very far from its optimal efficiency point; that is, it will waste a significant amount of quiescent power (on the order of 80 W). At below-maximum loads, the quiescent power is typically about 10% of the supply's rating.

This means that by choosing a supply rated below full power, the IT manager may cut the wasted power in half.
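As a back-of-the-envelope check, the sketch below uses the 10% quiescent figure above and assumes a right-sized supply rated at roughly half the full-power one (the 400-W rating is an assumption for illustration):

```python
# Quiescent-power waste of an oversized PoE supply vs. a right-sized one.
# The 10% quiescent fraction comes from the article; the 400-W alternative
# rating is an assumed half-sized supply for ~20 powered ports.
QUIESCENT_FRACTION = 0.10

full_supply_w = 800      # full power per port, all 48 ports
right_sized_w = 400      # assumed supply for the ports actually in use

waste_full = QUIESCENT_FRACTION * full_supply_w    # ~80 W
waste_small = QUIESCENT_FRACTION * right_sized_w   # ~40 W

print(f"Oversized supply waste:   {waste_full:.0f} W")
print(f"Right-sized supply waste: {waste_small:.0f} W "
      f"({1 - waste_small / waste_full:.0%} reduction)")
```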

It should be clear that use of the available power is maximized when an accurate measurement of power consumption is in place and a robust algorithm is used to dynamically allocate power to the various ports according to their priority.

But that's not all: smaller power supplies are also cheaper. And if a user does want full power on every port, the switch vendor can offer a power-additive solution via an external power supply. This requires smart management of the power supplies available in the system, which can be done, for example, with Microsemi's Emergency Power Management.
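A minimal sketch of such priority-based allocation follows. It is a generic illustration with assumed port data, not Microsemi's Emergency Power Management algorithm: ports are served in priority order with their measured power, and lower-priority ports are denied once the budget is exhausted.

```python
from dataclasses import dataclass

@dataclass
class Port:
    number: int
    priority: int          # lower value = higher priority
    measured_power_w: float

def allocate(budget_w, ports):
    """Grant power to ports in priority order until the budget runs out.

    Returns the lists of granted and denied ports. A real PSE manager
    would re-run this whenever measured consumption or the budget changes.
    """
    granted, denied = [], []
    remaining = budget_w
    for port in sorted(ports, key=lambda p: p.priority):
        if port.measured_power_w <= remaining:
            remaining -= port.measured_power_w
            granted.append(port)
        else:
            denied.append(port)
    return granted, denied

# Example: 20 active ports sharing a 400-W budget (illustrative numbers only).
ports = [Port(n, priority=n % 3, measured_power_w=25.0) for n in range(1, 21)]
granted, denied = allocate(400, ports)
print(f"Granted {len(granted)} ports, denied {len(denied)}")
```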

Cost implications

So, if it is clear from the system-efficiency point of view that having as high a voltage as possible at the PSE power supply is the best practice, what about cost? Fortunately, the cost of a PSE power supply is typically proportional to the difference between its input and output voltages. This means that a power supply supporting 110→56 V will be cheaper than one supporting 110→51 V. On the PD side, the power supply must be designed for the worst case (57 V), so the voltage chosen at the PSE does not affect its cost.

Another advantage is that, thanks to the higher efficiency of working at a higher voltage, a given PD load can be supported with a smaller power supply. The lower heat dissipation in the system also allows the fans to be smaller or to operate at lower speeds, again saving cost. ■

For more on power-over-Ethernet products, visit electronicproducts.com/search1.asp?StartNum=1&slot=0&year=10&stype=K&keyword=poe
