Power-smart hardware meets power-aware software
A power-aware RTOS dynamically balances performance and efficiency
BY STEPHEN OLSEN
Mentor Graphics
Wilsonville, OR
http://www.mentor.com/embedded
As the demand for energy efficiency increases for both mobile and connected devices, system developers are increasingly working with power-smart hardware solutions, complemented by more power-aware code.
At the forefront of power-smart hardware technologies are multicore SoCs capable of running at variable frequency and voltage. There are also CPUs and peripherals that support multiple power modes. We’ve seen inroads in hardware, but what is required to make these developments truly pay off is a new kind of operating system.
Your father’s RTOS is no longer an option in this dawn of power-efficient embedded design. What’s needed is a power-aware RTOS that is more intimately involved with managing the CPU’s various power modes while balancing overall system responsiveness against power consumption.
Hardware technology trends
Batteries now deliver higher capacity in smaller form factors. User interfaces that were once a small set of momentary membrane switches have been replaced by ever-larger touch-panel-controlled LCD displays whose advanced features make devices easier to use, at the cost of increased power consumption. And while LCD technology has advanced greatly, it has driven the need for backlights, which demand still more power.
Modern SoC designs have adopted dynamic voltage and frequency scaling (DVFS). In the simplest scenario, lowering the frequency reduces power; the system can then also lower the voltage to the CPU, which has a multiplying effect on the savings, since dynamic power scales with the square of the voltage. CPU cores also offer varied power states: run, sleep, doze, and snooze.
The lower the CPU’s power state, the longer the CPU takes to wake up. The deepest power-saving levels save the hardware’s register state and put DRAM into self-refresh mode, but such deep savings also manifest as sluggish behavior when the system first wakes up.
Today’s designs often incorporate a power management IC (PMIC) that supports the dynamic-voltage portion of DVFS. When the SoC’s frequency is maximized, the voltage must also be maximized to maintain switching times; at low frequencies, the voltage can be lowered.
Figure 1 shows an example of an optimized and a non-optimized system. If the system is not tuned for power consumption, the processor gets the job done quickly, sooner than is actually required. Decreasing the frequency allows the necessary voltage to be decreased as well, which means deadlines can still be met while power is saved.
Fig. 1. The scheduling of tasks: non-optimized vs. optimized system.
With all of these hardware advancements at our disposal, software is uniquely positioned to control the power consumption of the overall system. Maximizing power savings on an embedded device requires a fairly complex power management system. Software for power management can be broken down into reactive and proactive approaches.
Reactive power management
The most basic approach is reactive power management. The IC is divided into power domains that can be controlled by register, either by enabling low-power states or by gating the clocks of peripheral devices. Reactive power management monitors when a device is being used, detects when it is inactive, and reacts by placing the power domain in a lower power state.
For example, when a device’s user interface is activated – by the user accessing the touch panel – the device is active and it will remain in full power mode. If the panel is inactive, a timer expires and the power domain’s state machine will transition to a lower state. The first timeout event may be to lower the backlight to 50% brightness. A second timeout event could turn it off completely.
Proactive power management
Proactive power management is built on the idea that the system can predict future use. It cannot truly predict the future, but it can profile each task and apply scheduling techniques that estimate what each task will need when the system is in operation, scheduling voltage and frequency accordingly. The profile data can be programmed manually from a known power-use scenario, or gathered dynamically by measuring what the system is doing.
Tasks can be monitored for which APIs they access, which devices they use, and how much time they consume each time they are readied for scheduling. This data can be collected, stored along with a history of each task’s recent execution schedules, and used as a profile of how much processing is needed to get the job done.
Using DVFS, a system developer can achieve large power savings, but it comes at a cost: extra power is consumed in switching between low- and high-power states. Switching from a low to a high frequency means first scaling the voltage to the predetermined level acceptable for the desired CPU frequency. Lowering the frequency can be nearly instantaneous, but scaling the voltage is subject to a slew rate, and it takes some time to settle at the optimal setting, as depicted in Fig. 2.
Fig. 2. With DVFS, the frequency scales quickly and the voltage changes more slowly at a set slew rate.
The power management framework must take into account that it may be better to leave the system in a high-power state than to scale DVFS down and then back up to meet the needs of a new task.
Addressing SMP systems
A symmetric multiprocessing (SMP) scheduler runs a single instance of the task scheduler across all the cores it controls. If there are two cores and two tasks ready to run, both can run simultaneously. If DVFS holds every core at the same frequency, any task can run on any core; in reality, scheduling becomes more complex when each core’s DVFS setting is adjusted separately.
As today’s SoCs gain in SMP complexity, it is not uncommon to find systems supporting four or more symmetric cores. The need to schedule across multiple cores, possibly running at different frequencies, adds the complexity of managing each core’s DVFS settings on top of scheduling tasks on each core. ■