
Software development on virtual platforms

Speeding time to market for low-power devices

BY FRANK SCHIRRMEISTER and FILIP THOEN
Synopsys
Mountain View, CA
http://www.synopsys.com

Recent IBS market research [Global System IC (ASSP/ASIC) Service Management Report, Volume 15, No. 11: Software Trends] indicates that, as of 2008, the effort to develop the software running on 90-nm chip designs has already surpassed the hardware development effort. Looking ahead to 2011, the same report projects that less than 40% of chip development cost will be spent on hardware.

These statistics show that software makes up an increasingly large portion of the overall chip development effort and, as a result, has become the bottleneck. Adding to the challenge, design teams now realize that power envelopes have effectively stopped the traditional evolution of processor performance scaling.

To meet the stringent low-energy-consumption requirements, design teams must now add multiple processors to their designs, which in turn compounds the software development challenge on multiprocessor systems-on-chips (MPSoCs).

Design for low power

Design for low power is just as important as optimizing energy consumption, especially in mobile devices. Power is a measurement at a specific point in time, and its limits define which functions in a design can run in parallel. Energy, on the other hand, is a measurement over time and determines battery life for mobile devices. For example, two blocks that each draw 100 mW cannot run concurrently under a 150-mW power budget, while the block that completes its task sooner consumes less energy and so extends battery life.


Fig. 1. Power consumption in an example wireless mobile design.

Given the trends outlined above, it is important to understand where and when power is consumed within a design. Figure 1 shows an example design of a mobile phone. The data for energy consumption has been derived from individual datasheets in combination with a teardown report.

As the analysis shows, almost 80% of the energy consumption takes place in multimedia, modem, and memory operations. For such functions, designers have several hardware and software implementation choices. Flexibility requirements again drive a trend toward software implementations, which in turn heavily influence memory accesses. As a result, a significant portion of the energy consumption is determined by software.

Traditional design flows

In a traditional flow, an initial design phase is followed by hardware prototypes for validation and software development. While an initial production phase commences after about a year, a first re-spin proceeds in parallel and leads to a second set of prototypes after six more months.

Still, a second re-spin becomes necessary to optimize yield before volume production starts. The semiconductor provider can then finally collect returns on its investment, almost two and a half years after the start of the project.

The key flaw in such a traditional design flow is the late availability of execution vehicles for software development. In addition, hardware prototypes become available nine months into the project, are often one-off developments, and provide very limited control and visibility into the design.

In contrast, virtual platforms offer an ideal solution. They are often spun off from the initial architecture design phase, as early as six to eight weeks after the architecture has been determined. Virtual platforms are fully functional software models of a system-on-chip (SoC), board, I/O, and user interfaces that execute unmodified binary production code at close to real-time speed. Because they are software simulations, they can be easily controlled and offer unparalleled visibility, which is especially important in multicore designs.

Transaction-level virtual platforms

During the past decade, several vendors have introduced proprietary solutions for virtual platform creation and deployment. Some customers, under enough project pressure, have accepted these proprietary offerings, but mainstream adoption has lagged.

Over the past two years, the Open SystemC Initiative (OSCI) has developed the TLM-2.0 API specification, based on key contributions from Synopsys and others starting in early 2007. TLM-2.0 finally provides the appropriate transaction abstractions to enable interoperability of models in SystemC-based virtual platforms without sacrificing the execution speed required for efficient pre-silicon software development. The standard has passed public review, was ratified in June 2008, and will subsequently be contributed to the IEEE. It has also paved the way for the following three key technologies:

1. The introduction of loosely timed (LT) and approximately timed (AT) modeling styles, which allow processors and peripherals to be modeled accurately enough that software developed on a virtual prototype is binary compatible with the real hardware, while omitting enough detail to allow fast simulation. These modeling styles scale in accuracy and allow flexible annotation of timing information (see the target sketch following this list).

2. While older approaches synchronized models at every clock cycle, instruction, or transaction, TLM-2.0 enables “temporal decoupling” of processors. In a multicore system, processors can now run freely and synchronize only at the boundaries of larger, user-defined intervals, such as every 1,000 or 10,000 executed instructions. This allows very fast execution of multicore systems (see the initiator sketch further below).

3. Formerly proprietary back-door accesses, which let processors read and write instruction and data memory directly without generating transactions in the simulation infrastructure, have been standardized as the direct memory interface (DMI). Processor models using DMI can now be integrated into SystemC virtual platforms without degrading execution speed.
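
To make the first and third of these concepts concrete, the following is a minimal sketch of a loosely timed TLM-2.0 target, a simple memory, written against the standard OSCI SystemC/TLM-2.0 headers. The class name, the memory size, and the 10-ns latency are illustrative assumptions, not taken from any particular vendor library; the pattern of annotating the delay argument and granting a DMI pointer is the same one real peripheral and memory models use.

#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>
#include <cstring>

// Illustrative loosely timed (LT) memory target: it annotates an access
// latency onto each incoming transaction and grants back-door (DMI) access.
// Size and latency values are assumptions for this sketch.
struct lt_memory : sc_core::sc_module
{
    tlm_utils::simple_target_socket<lt_memory> socket;

    SC_CTOR(lt_memory) : socket("socket")
    {
        socket.register_b_transport(this, &lt_memory::b_transport);
        socket.register_get_direct_mem_ptr(this, &lt_memory::get_direct_mem_ptr);
        std::memset(mem, 0, sizeof(mem));
    }

    // Blocking transport: the delay argument carries the timing annotation.
    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay)
    {
        unsigned char* ptr = trans.get_data_ptr();
        sc_dt::uint64  adr = trans.get_address();
        unsigned int   len = trans.get_data_length();

        if (adr + len > SIZE) {
            trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
            return;
        }
        if (trans.is_read())
            std::memcpy(ptr, &mem[adr], len);
        else if (trans.is_write())
            std::memcpy(&mem[adr], ptr, len);

        delay += sc_core::sc_time(10, sc_core::SC_NS);  // illustrative access latency
        trans.set_dmi_allowed(true);                    // hint to the initiator: DMI is available
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }

    // Direct memory interface: hand the initiator a raw pointer so that
    // subsequent accesses can bypass the transaction path entirely.
    bool get_direct_mem_ptr(tlm::tlm_generic_payload& trans, tlm::tlm_dmi& dmi)
    {
        dmi.allow_read_write();
        dmi.set_dmi_ptr(mem);
        dmi.set_start_address(0);
        dmi.set_end_address(SIZE - 1);
        dmi.set_read_latency(sc_core::sc_time(10, sc_core::SC_NS));
        dmi.set_write_latency(sc_core::sc_time(10, sc_core::SC_NS));
        return true;
    }

private:
    static const unsigned SIZE = 0x10000;   // 64 Kbyte, illustrative
    unsigned char mem[SIZE];
};

The delay accumulated in b_transport is the timing annotation referred to in item 1, and the DMI pointer of item 3 lets a processor model bypass this path entirely for frequently accessed memory regions.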

Using these three technologies in combination, virtual platforms can now run faster than 50 MIPS and sometimes even reach several hundred MIPS of execution speed. Providing these technologies within an established infrastructure such as SystemC enables a tight connection to the rest of the verification and implementation design flow based on classical register-transfer-level (RTL) languages such as Verilog and VHDL.
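
On the initiator side, temporal decoupling is typically handled with the quantum keeper utility shipped with TLM-2.0. The sketch below uses a simple traffic generator in place of a real processor model and reuses the lt_memory target from the previous sketch; the one-microsecond quantum and the address range are illustrative assumptions.

#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/tlm_quantumkeeper.h>

// Illustrative temporally decoupled initiator: it stands in for a processor
// core and yields to the SystemC kernel only when its local time offset
// exceeds the global quantum.
struct lt_initiator : sc_core::sc_module
{
    tlm_utils::simple_initiator_socket<lt_initiator> socket;
    tlm_utils::tlm_quantumkeeper qk;

    SC_CTOR(lt_initiator) : socket("socket")
    {
        SC_THREAD(run);
        qk.set_global_quantum(sc_core::sc_time(1, sc_core::SC_US)); // illustrative quantum
        qk.reset();
    }

    void run()
    {
        tlm::tlm_generic_payload trans;
        unsigned int data = 0;

        for (sc_dt::uint64 addr = 0; addr < 0x1000; addr += 4) {
            trans.set_command(tlm::TLM_WRITE_COMMAND);
            trans.set_address(addr);
            trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
            trans.set_data_length(4);
            trans.set_streaming_width(4);
            trans.set_byte_enable_ptr(0);
            trans.set_dmi_allowed(false);
            trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);

            sc_core::sc_time delay = qk.get_local_time();
            socket->b_transport(trans, delay);   // target adds its latency to delay
            qk.set(delay);                       // account for it in local time

            if (qk.need_sync())                  // quantum exceeded:
                qk.sync();                       // synchronize with the kernel
            ++data;
        }
    }
};

// Minimal top level binding the initiator to the lt_memory target
// from the previous sketch.
int sc_main(int, char*[])
{
    lt_initiator cpu("cpu");
    lt_memory    mem("mem");
    cpu.socket.bind(mem.socket);
    sc_core::sc_start();
    return 0;
}

With this structure, the initiator returns control to the SystemC kernel roughly once per microsecond of simulated time rather than on every transaction, which is a large part of where the speed of loosely timed simulation comes from.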

Power analysis

The primary use models for virtual platforms are pre-silicon embedded software development and the creation of post-silicon validation tests before silicon is available. Beyond that, virtual platforms are available early enough to support architecture and power analysis. Analogous to the annotation of timing onto SystemC TLM-2.0-compliant models, users can annotate transaction-level models with characterized power information to enable detailed power analysis.
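
The article does not detail the annotation mechanism itself, so the following is purely an illustration of the idea rather than any vendor API: just as a target adds latency to the delay argument of b_transport, it can accrue characterized per-access energy into a counter that a power-analysis view reads out. The helper structure and the nanojoule figures are hypothetical.

#include <systemc>
#include <tlm>

// Hypothetical illustration only: alongside the timing annotation in a
// target's b_transport, each access also accrues characterized energy that
// a power-analysis view can read out later. Names and numbers are assumed.
struct access_energy
{
    double energy_nj = 0.0;   // accumulated energy in nanojoules

    void annotate(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay)
    {
        delay += sc_core::sc_time(10, sc_core::SC_NS);   // timing annotation, as before
        energy_nj += trans.is_read() ? 0.12 : 0.15;      // assumed nJ per read/write access
    }
};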


Fig. 2. Power analysis on a virtual platform of Freescale’s i.MX31 applications processor

Figure 2 shows the top level of a virtual platform of Freescale’s i.MX31 Applications Development System, built from transaction-level models and instrumented with characterized power information. Several control and visualization elements are shown, including a software debugger attached to the ARM processor core, which executes the actual application software while the platform reports the associated power information. Also visible are the power control window with its core voltage display, Windows CE executing on the platform, a console, a register view, and a virtual representation of the keypad.

To enable this type of power analysis, processor models are instrumented to reflect power consumption in the “active,” “dormant,” “inactive,” and “shutdown” states. Peripherals are characterized for the “clock off,” “idle,” “typical,” and “maximum” states. Memory power consumption is characterized for read/write transactions as well as for the “idle” and “clock off” states. Power consumption characterizations are based on budget planning, estimates, or actual measurements.
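
As an illustration of how such state-based characterization could be tracked during simulation, the hypothetical monitor below integrates the characterized power of the current state over the simulated time spent in that state. The class, its interface, and the milliwatt figures in the usage comment are assumptions that merely mirror the state categories described above; they are not part of TLM-2.0 or any Synopsys library.

#include <systemc>
#include <map>
#include <string>

// Hypothetical state-based power monitor: it integrates the characterized
// power of the current state over the simulated time spent in that state.
class power_monitor
{
public:
    explicit power_monitor(const std::map<std::string, double>& table_mw)
        : table_mw_(table_mw), energy_mj_(0.0) {}

    // Called by a model whenever it changes power state, for example
    // "active", "dormant", "inactive", or "shutdown" for a processor.
    void set_state(const std::string& state)
    {
        accumulate();
        state_ = state;
    }

    // Energy consumed so far, in millijoules.
    double energy_mj()
    {
        accumulate();
        return energy_mj_;
    }

private:
    void accumulate()
    {
        sc_core::sc_time now = sc_core::sc_time_stamp();
        if (!state_.empty()) {
            double seconds = (now - last_change_).to_seconds();
            energy_mj_ += table_mw_.at(state_) * seconds;   // mW * s = mJ
        }
        last_change_ = now;
    }

    std::map<std::string, double> table_mw_;   // characterized power per state, in mW
    std::string state_;
    sc_core::sc_time last_change_;
    double energy_mj_;
};

// Assumed usage for a processor model:
//   power_monitor cpu_power({{"active", 250.0}, {"dormant", 40.0},
//                            {"inactive", 5.0}, {"shutdown", 0.1}});
//   cpu_power.set_state("active");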

Users can thus gain visibility into the global system state with individual clock and power domains. They can also assess power tradeoffs and different power schemes, taking into account the actual system software executing on processors. Specifically, power management software can be developed and optimized early.

Outlook

With the recent ratification of TLM-2.0 in June 2008, the industry is leaving the era of proprietary virtual platforms behind to enter a phase of model scalability and model interoperability across simulators. As a key contributor to the standardization effort, Synopsys already offers TLM-2.0-compliant products. Fast virtual platforms can now be integrated into any OSCI-compliant simulator, as well as Synopsys Innovator, using the TLM-2.0-compliant components available in the Synopsys DesignWare System-Level Library. In addition, Synopsys modeling services offer unique TLM-2.0 expertise.

TLM-2.0 virtualization of embedded hardware for pre-silicon software development builds naturally on existing, mature technologies, maintaining the linkage back into verification and implementation flows. Now that standardization has taken place and the first tool offerings are on the market, virtual platforms are finally ready for mainstream adoption. ■
