
Software interface spurs DSP subsystem use

The RMI layer in Windows accelerates the use of DSP subsystems for multimedia applications

BY BRUCE THOMPSON and MARK GROSEN
Spectron Microsystems, Goleta, CA

One of the greatest potential growth areas for the desktop computer environment is digital-signal-processing-chip-based subsystems. These subsystems can handle multiple functions such as fax/modem, audio processing, speech compression and recognition, telephony, and image compression. Because they employ a single fast DSP chip under the control of a preemptive multitasking DSP operating system, they can handle several of these tasks concurrently; one such subsystem can therefore replace multiple single-function add-in cards. The preemptive multitasking DSP operating system is necessary to provide a standard access and control mechanism with hardware independence at the application and DSP software levels.

Establishing hardware independence and a standard way for applications to communicate with DSP subsystems is necessary to spur the market for several reasons:

* Application programs won't sell well if they support only one specific DSP board. Programs must support a range of hardware subsystems, including add-in cards from multiple vendors and motherboard implementations.
* DSP subsystems handle multiple functions and are functionally open-ended.
* Modern computing environments like Windows allow a group of application programs to mix and share resources.
* A single DSP subsystem handling multiple functions needs a hardware-independent resource-management mechanism to ensure that tasks don't conflict and that all tasks are handled in a timely manner. Windows itself cannot do this because it is not a preemptive multitasker.

Fortunately, environments like Windows have already addressed the basic issue of separating application programs from the hardware. Windows programs make calls via application programming interfaces (APIs) to handle functions as simple as writing a character to the display. Microsoft originally defined sets of APIs for the various functions that application programs would need.

APIs isolate hardware

Applications make API calls to executable code modules called dynamic link libraries (DLLs). For example, the Graphics Device Interface (GDI) DLL handles all of the API calls necessary to output text, graphics, and images. Microsoft encourages software vendors to use the same API access mechanism as they develop programs that provide new functions and access new types of peripheral hardware; the vendors essentially create a new API set for each new function. For example, Delrina Technology developed an API set for fax functions when it developed the WinFax program. When a program class, such as fax programs, becomes popular, Microsoft creates or endorses a new set of APIs as a standard. In fact, Microsoft is now working on standardizing an API set for fax functions.

Once established, APIs let any software vendor offer support for any compliant hardware (essentially, any hardware with compliant driver software). In this fashion, APIs for several multimedia-type applications have been established. Established APIs that could provide access to DSP subsystems include the telephone API (TAPI) and the multimedia API. The latter includes support for MIDI (Musical Instrument Digital Interface) music-synthesis applications and for Wave audio recording and playback. TAPI provides control of phone interfaces; for example, TAPI would be used to take a phone line off hook, to dial a number, or to handle the interface to telephone-line switching devices.
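As a rough illustration of that style of call, the fragment below uses the basic TAPI entry points to open a line device and place a call. It is a minimal sketch only: API-version negotiation, error checking, and call cleanup are omitted, the hard-coded version and privilege constants are simplifying assumptions, and the function names DialNumber and LineCallback are invented for the example.

    /* Minimal TAPI sketch: open a line device and dial a number.
       Version negotiation, error handling, and cleanup are omitted. */
    #include <windows.h>
    #include <tapi.h>

    /* TAPI 1.x-style callback; call progress (dialing, connected, idle)
       is reported here asynchronously. */
    void CALLBACK LineCallback(DWORD hDevice, DWORD dwMsg, DWORD dwInstance,
                               DWORD dwParam1, DWORD dwParam2, DWORD dwParam3)
    {
    }

    void DialNumber(HINSTANCE hInst, const char *number)
    {
        HLINEAPP hLineApp;
        HLINE    hLine;
        HCALL    hCall;
        DWORD    numDevs;

        lineInitialize(&hLineApp, hInst, LineCallback, "DialDemo", &numDevs);

        /* API version hard-coded for brevity; a real application would call
           lineNegotiateAPIVersion first. Outgoing calls only, so no
           call-privilege or media-mode monitoring is requested. */
        lineOpen(hLineApp, 0, &hLine, 0x00010003, 0, 0,
                 LINECALLPRIVILEGE_NONE, 0, NULL);

        lineMakeCall(hLine, &hCall, number, 0, NULL);  /* goes off hook and dials */
    }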
While the existing Windows multimedia architecture supports single-function devices like sound cards with no problems, a move to multifunction DSP peripherals requires further hardware abstraction. A look at how application programs access the MIDI and Wave APIs today illustrates why.

Figure 1 depicts the data flow used by an application to access audio capture and playback hardware on a sound card. The user selects a WAV file and clicks on the "play" button in the Windows Sound Recorder application. The application makes a Wave API call, which is handled by the Windows DLL MMSYSTEM. MMSYSTEM processes the API call and passes it to the appropriate hardware-specific driver, such as the driver for a Sound Blaster sound card, and the audio begins playing. Meanwhile, a fax application could be communicating with a dedicated fax card in the same manner.
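In code, that Wave playback path amounts to a handful of MMSYSTEM calls. The sketch below is only illustrative: it assumes the PCM samples have already been read out of the WAV file, hard-codes an 8-bit, 11.025-kHz mono format, and omits error checking, completion waiting, and cleanup (waveOutUnprepareHeader, waveOutClose); the function name PlayPcmBuffer is invented for the example.

    /* Minimal Wave playback sketch (link with winmm.lib). MMSYSTEM routes
       these calls to whatever hardware-specific wave driver is installed
       (a Sound Blaster driver, a DSP-board driver, ...); the application
       never names the hardware. */
    #include <windows.h>
    #include <mmsystem.h>

    void PlayPcmBuffer(LPSTR pcmData, DWORD pcmBytes)
    {
        HWAVEOUT     hWaveOut;
        WAVEFORMATEX fmt = {0};
        WAVEHDR      hdr = {0};

        fmt.wFormatTag      = WAVE_FORMAT_PCM;     /* plain PCM from the .WAV file */
        fmt.nChannels       = 1;
        fmt.nSamplesPerSec  = 11025;
        fmt.wBitsPerSample  = 8;
        fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
        fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

        /* WAVE_MAPPER lets MMSYSTEM pick a suitable output device. */
        waveOutOpen(&hWaveOut, WAVE_MAPPER, &fmt, 0, 0, CALLBACK_NULL);

        hdr.lpData         = pcmData;
        hdr.dwBufferLength = pcmBytes;
        waveOutPrepareHeader(hWaveOut, &hdr, sizeof(hdr));
        waveOutWrite(hWaveOut, &hdr, sizeof(hdr));  /* playback starts here */
    }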
Simply replacing the fixed-function sound card with a DSP-based sound card causes no problem in the existing architecture, as long as the DSP-based card serves only as a sound card. The need to further separate the software and hardware arises when the DSP subsystem is used to handle multiple functions.

Multiple functions spur changes

Microsoft has recognized both the potential for growth in DSP subsystems and the potential for conflicts in the existing Windows environment. Together with Spectron Microsystems, Microsoft has devised a scheme that uses a hardware- and function-independent DSP Resource Manager Interface (RMI) to support multifunction DSP subsystems. The RMI is detailed in a preliminary specification that Microsoft is distributing for review by interested parties. (The table details the APIs defined in the preliminary specification.)

Figure 2 depicts the proposed structure of the Windows multimedia environment with the RMI in place. The RMI serves as a hardware-independent multiplexer to the underlying DSP-board-specific drivers. The RMI doesn't include knowledge of the hardware, or even of what types of functions the hardware supports. Rather, the RMI establishes and manages communications channels between the function-specific API and driver level and the board-specific drivers. Typically the RMI would route multiple channels of command sequences to a single DSP subsystem. The proposed architecture offers added value, however, because the RMI can support multiple DSP subsystems of the same or different types.

As shown in Fig. 2, the Wave and fax example described above can be handled by the Windows DSP architecture. The API calls from the different applications and for different functions are routed through the RMI to the DSP-board driver. The DSP driver, in turn, passes the commands to a driver hosted by the multitasking operating system executing on the DSP board.

DSP handles concurrent tasks

To understand how the DSP subsystem handles the concurrent tasks, consider the example of Wave and fax tasks depicted in Figs. 3 and 4. Figure 3 shows the state-transition diagram of a task under the multitasking SPOX operating system. Figure 4 shows how the Wave and fax tasks are handled concurrently by the operating system. During actual operation, an advanced DSP chip can handle both example tasks, and more, without interrupting or delaying the individual functions. The DSP subsystem therefore provides applications with the illusion that each has complete control of the DSP resources.

Conflicts occur only when the actual hardware resources on the board run out. For example, a board that includes only a single DAC and speaker connection couldn't support simultaneous Wave playback tasks. Conversely, the subsystem could support a playback task while capturing, compressing, and storing audio from another source.

Data flow is hardware independent

While the proposed software architecture isolates the application programs and RMI APIs from the hardware, it also encourages hardware independence at the Windows DSP driver and DSP-board software levels. Further separating hardware and software allows board and algorithm developers to reuse software across different DSP boards.

To understand this second level of hardware abstraction, look back at the table of RMI APIs and at how they are used in the data-flow diagram in Fig. 5. When an application needs access to the DSP resource, it makes a function-specific call to open that resource; for example, the program makes an open call to the Wave API in MMSYSTEM. Via the underlying device-independent, but function-specific, Wave driver, MMSYSTEM passes the open call through the hardware- and function-independent RMI dspOpenTask() API. The open call establishes the communication link depicted in Fig. 5. The link supports a single control channel, implemented with a mailbox metaphor and serviced with the dspGetMessage() and dspPostMessage() APIs. Simultaneously, the link supports one or more data channels controlled by the dspAddBuffer() and dspWriteBuffer() APIs.

This communication architecture results in increased device independence at the Windows DSP driver, SPOX host driver, and SPOX application levels. For example, the Windows DSP driver must know the DSP board's address for communications, but it doesn't require knowledge of the board's capabilities. The SPOX host driver needs similarly minimal hardware knowledge; it primarily must support the data channels to the high-level DSP application drivers, such as the fax and Wave drivers on the DSP board. The fax and Wave drivers on the DSP board, in turn, only need to know how to make calls to the low-level DSP algorithms.
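That calling sequence reads more concretely as a short C sketch. It is purely illustrative: only the RMI function names come from the preliminary specification; the handle type, prototypes, channel numbering, message code, and the WaveDriverPlay function are assumptions made here for the sake of the example.

    /* Illustrative only: the dsp* names are from the preliminary spec,
       but the handle type, prototypes, and constants are assumptions. */

    typedef void *DSPTASK;                     /* assumed opaque task handle */

    /* Assumed prototypes, shown only to make the calling sequence concrete. */
    DSPTASK dspOpenTask(const char *taskName);
    int     dspPostMessage(DSPTASK task, unsigned msg, void *arg);            /* control (mailbox) */
    int     dspGetMessage(DSPTASK task, unsigned *msg, void *arg);            /* control (mailbox) */
    int     dspAddBuffer(DSPTASK task, int channel, void *buf, unsigned len); /* data channel */

    #define MSG_START_PLAYBACK 1               /* made-up message code */

    /* How a function-specific Wave driver might hand audio to the DSP board. */
    void WaveDriverPlay(void *pcmBuf, unsigned pcmBytes)
    {
        DSPTASK wave = dspOpenTask("wave.playback");  /* establishes the link of Fig. 5 */

        dspAddBuffer(wave, 0, pcmBuf, pcmBytes);      /* queue data on a data channel   */
        dspPostMessage(wave, MSG_START_PLAYBACK, 0);  /* post a control-channel message */
    }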
The efficiencies of independence

To the equipment maker or software developer, this layering means that the Windows DSP driver, the SPOX host driver, and the high-level DSP drivers can be written in C and easily ported to any SPOX-based DSP environment, including those based on the Analog Devices 2100, Motorola 56000, and Texas Instruments 320C30/40/50 families. The actual DSP fax and Wave algorithms, which are likely coded in assembly language, are specific to a particular DSP board. But those algorithms, which are often supplied by the silicon vendor or third parties, can be used by any equipment maker developing around SPOX and a specific processor.

The ease of portability also offers advantages to the user: it should make DSP technology proliferate quickly, ramping production volumes and lowering prices.

Architecture ensures extensibility

One final, and extremely important, reason exists for the device- and function-independent DSP RMI. The use of digital signal processing is in its infancy; audio, music, telephony, and fax applications are just the beginning. The DSP RMI must therefore be defined so that innovators at the application software, DSP software, and DSP hardware levels can add functions. For example, no speech-recognition or text-to-speech APIs exist today, but DSP chips are prime candidates to host these applications. To this end, the RMI is defined to ensure this extensibility.

CAPTIONS:

Fig. 1. Windows applications now use API calls to access device drivers for different pieces of hardware.

Fig. 2. The proposed DSP RMI would allow separate applications to make API calls to the same piece of hardware.

Fig. 3. Each task running in the SPOX operating system has four possible states.

Fig. 4. The play and fax tasks are switched between running and blocked as long as both need cycles. High-priority tasks take precedence.

Fig. 5. When a link has been established, the Windows driver communicates with the SPOX task through the DSP RMI APIs.

OVERLINE: Software interface for DSP subsystems
