
A common-sense approach to risk assessment

How everyday principles can be applied to safety-critical device development

BY P.J. TANZILLO
National Instruments, Austin, TX
http://www.ni.com

Defined by regulatory bodies like the FDA as “a possible source of danger or a condition which could result in human injury” (USPHS DHHS FDA CDRH: Guidance for Industry, FDA Reviewers and Compliance on Off-The-Shelf Software Use in Medical Devices, September 9, 1999), hazards are classified by level of concern based on the extent of the potential injury; this article uses the categories minor, moderate, and major. The other factor is probability: the likelihood that a hazardous situation will actually come to pass. When determining the overall risk of a system or component, you must account for both factors.
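
To make the combination of the two factors concrete, here is a minimal sketch in C of a qualitative risk matrix. The severity categories match those above, while the probability bands and the accept/review/reject thresholds are illustrative assumptions, not values from any particular standard.

```c
#include <stdio.h>

/* Illustrative severity and probability categories; a real project
   takes these from its own risk-management plan. */
typedef enum { SEV_MINOR, SEV_MODERATE, SEV_MAJOR } severity_t;
typedef enum { PROB_LOW, PROB_MEDIUM, PROB_HIGH } probability_t;
typedef enum { RISK_ACCEPTABLE, RISK_REVIEW, RISK_UNACCEPTABLE } risk_t;

/* A simple 3x3 risk matrix: rows = probability, columns = severity. */
static const risk_t risk_matrix[3][3] = {
    /*               MINOR             MODERATE           MAJOR           */
    /* LOW    */ { RISK_ACCEPTABLE, RISK_ACCEPTABLE,   RISK_REVIEW       },
    /* MEDIUM */ { RISK_ACCEPTABLE, RISK_REVIEW,       RISK_UNACCEPTABLE },
    /* HIGH   */ { RISK_REVIEW,     RISK_UNACCEPTABLE, RISK_UNACCEPTABLE },
};

risk_t assess_risk(probability_t p, severity_t s)
{
    return risk_matrix[p][s];
}

int main(void)
{
    /* A moderate-severity hazard that is highly probable demands action. */
    printf("%d\n", assess_risk(PROB_HIGH, SEV_MODERATE)); /* RISK_UNACCEPTABLE */
    return 0;
}
```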

Engineers do not have much control over the hazards of their device; these are determined largely by the device’s use and operating environment. The probability of failure, on the other hand, is primarily in the engineer’s hands: almost every decision made during system design, from start to finish, has an impact on it.

Requirements gathering

Nearly half of all project costs are attributed to rework resulting from inadequate requirements, and it is estimated that more than 75% of all software bugs trace back to inadequate requirements. Though most development processes have one or more explicit phases dedicated to requirements gathering, it is clear that the techniques used in these phases often fall short.

Interviews are the most common method of requirements gathering because they are easy to conduct and yield results quickly. They are an important first step, but they are almost never sufficient, and they are only useful when the right questions are asked of the right people. Each group provides different information – doctors can help you understand expected ranges of measurements, patients can provide feedback on form, fit, and function, and internal stakeholders can give feedback on factors like cost of goods. The problem with interviews is that people typically don’t know what they want until they are presented with a concrete set of options.

Modeling is a way to represent the reality of the end device by providing a user experience similar to the intended end behavior. Models can take many forms – a wooden model of a mechanical system, for example, or a software model that approximates the user experience. The intention of a model is to facilitate understanding and generate feedback long before anything is actually built – to solidify the details of the device without taking on the expense of producing a physical unit.

Fig. 1. A functional prototype can be used to prove feasibility and to gather more specific feedback on a design.

Functional prototyping is a follow-on to modeling in which a working version of the device is built using off-the-shelf tools so that users can try it out in its normal operating environment. The prototype is typically developed with as few custom parts as possible to reduce development cost and time; its purpose is to facilitate understanding and generate feedback that cannot be derived from a model, such as performance, discomfort, operating environment, and usability. In many cases, addressing that feedback will require redesigning elements of the device, so minimizing the development effort put into a functional prototype is always recommended.

System architecture

Another way to minimize patient risk in a medical device is to choose a system architecture with various layers of redundant protection for the most hazardous elements.

When designing complex digital or mixed-signal hardware, an ASIC is a common choice. It provides the reliability of a hardware circuit made from discrete elements without the complexities of manufacturing and assembly. However, the fabrication costs of an ASIC can be prohibitively high, so unless volume production is a certainty, ASICs are often excluded from new designs in favor of FPGAs.

An FPGA provides the reliability of an ASIC with the configurability of a software-based device. Though its individual unit cost is much higher than that of an ASIC, overall production costs are lower for most designs. In addition, since an FPGA can be configured and reconfigured time after time, it is often a better choice for new designs, where requirements and implementations are more likely to change.
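
The trade-off is easy to quantify: the ASIC pays a large one-time fabrication (NRE) charge in exchange for a lower unit cost, so the break-even volume falls out of a one-line comparison. The dollar figures in this sketch are invented placeholders, not real quotes.

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical cost figures, for illustration only. */
    double asic_nre  = 500000.0; /* one-time mask/fabrication (NRE) cost */
    double asic_unit = 5.0;      /* per-unit cost at volume              */
    double fpga_unit = 55.0;     /* per-unit cost, no NRE                */

    /* The ASIC wins once asic_nre + asic_unit*n < fpga_unit*n, i.e.: */
    double breakeven = asic_nre / (fpga_unit - asic_unit);
    printf("ASIC becomes cheaper above %.0f units\n", breakeven); /* 10000 */
    return 0;
}
```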

For software-based systems, the most reliable choice is a simple 8-bit microcontroller. These devices often implement simple tasks like updating a display and monitoring buttons. They are at lower risk of failure, but the scope of what they can accomplish is far smaller than that of more feature-rich processors.
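
In practice, firmware at this level often reduces to a single polled “superloop.” A minimal sketch, with the hardware-access functions stubbed out as hypothetical placeholders for a real part’s port registers:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical hardware hooks, stubbed so the sketch compiles anywhere;
   real firmware would read and write the part's port registers. */
uint8_t read_buttons(void)        { return 0; }          /* bitmask of pressed buttons */
void    display_update(uint8_t v) { printf("display: %u\n", v); }

int main(void)
{
    uint8_t last = 0xFF;            /* force one initial display refresh */
    for (;;) {                      /* the whole application: one superloop */
        uint8_t buttons = read_buttons();
        if (buttons != last) {      /* redraw only when the state changes */
            display_update(buttons);
            last = buttons;
        }
        /* Nothing else competes for the CPU, which is why this class
           of firmware is comparatively easy to reason about and test. */
    }
}
```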

More complex systems require more powerful processors with more memory. Most often, these systems are built on 32-bit processors running real-time operating systems that include components like TCP/IP stacks and file systems. With added features comes more complex middleware, and with additional complexity comes a greater risk of failure. Most such operating systems provide watchdog timers and other failure-mitigation techniques to detect a software failure and recover gracefully.
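
The watchdog pattern itself is straightforward: the application must periodically “kick” a hardware countdown timer, and if the kicks ever stop (a hang or a deadlock), the timer expires and resets the system. A sketch of the idea in C; the register addresses and names are hypothetical, since every SoC and RTOS exposes its watchdog differently.

```c
#include <stdint.h>

/* Hypothetical memory-mapped watchdog registers; placeholders only. */
#define WDT_LOAD (*(volatile uint32_t *)0x40001000u) /* countdown reload value */
#define WDT_KICK (*(volatile uint32_t *)0x40001004u) /* write to restart timer */

void watchdog_init(uint32_t timeout_ticks)
{
    WDT_LOAD = timeout_ticks;   /* a reset fires if this ever reaches zero */
}

void control_loop_task(void)
{
    for (;;) {
        /* ... read sensors, run the control algorithm, drive outputs ... */
        WDT_KICK = 1;           /* prove the loop is still alive */
        /* If this task hangs or deadlocks, no kick arrives, the timer
           expires, and the hardware resets the system to a safe state. */
    }
}
```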

The most complex systems, those that require computationally intense algorithms or extremely rich user interfaces, demand desktop computing capabilities. Since most desktop-class operating systems are not tested and validated to the level of reliability necessary to control a medical device, designers should layer additional elements into the system to monitor for failure and minimize patient risk.

Fig. 2. A common way to address risk in medical devices is by choosing architecture with several layers of protection.

Figure 2 shows an example architecture that provides three layers of protection and thereby lowers patient risk. A rich user interface on a touch-panel display runs a desktop operating system; it is connected to a 32-bit processor running an RTOS, which provides failure checking and a second layer of reliability. A third layer is added by placing an FPGA in the signal path to ensure that no signals go outside of the safe and acceptable operating range.
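
The FPGA’s role in that third layer amounts to a hard range limit on the signal path. Expressed in C for readability (on the real device this logic would live in the FPGA fabric and run on every sample), with hypothetical limits:

```c
#include <stdint.h>

/* Hypothetical safe operating window for a 16-bit DAC output. */
#define OUT_MIN  1000u
#define OUT_MAX 50000u

/* Clamp every commanded output sample into the safe range, regardless
   of what the software layers above requested. Because this check sits
   in the FPGA in the Fig. 2 architecture, a fault in the desktop UI or
   the RTOS layer still cannot drive the output out of range. */
static inline uint16_t clamp_output(uint32_t requested)
{
    if (requested < OUT_MIN) return OUT_MIN;
    if (requested > OUT_MAX) return OUT_MAX;
    return (uint16_t)requested;
}
```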

Verification and validation

Verification and validation is clearly part of quality management, but best practices can also be derived here to help reduce the risk of failure and focus resources in the most efficient way possible.

Testing should focus on the highest-risk areas. Along with hazard level, high-risk elements can be located in a number of ways. Complexity analysis can help you determine which components are statistically most likely to fail; coupled with test coverage tools, it lets you ensure that you are testing all of the paths through the most complex hardware and software. In addition, certain situations can be identified as high risk for failure, and their test plans should therefore be the most rigorous. Among these are user interface interactions, driver data transfers, and software data conversions.
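
Data conversions, for example, tend to fail at range boundaries, which is exactly where targeted tests should concentrate. A sketch of such a boundary test; the conversion function and its limits are invented for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical conversion: raw 12-bit ADC counts to tenths of a degree C,
   with 0..4095 counts mapping to -50.0..+150.0 C. */
static int16_t adc_to_tenths_c(uint16_t raw)
{
    return (int16_t)(-500 + ((int32_t)raw * 2000) / 4095);
}

int main(void)
{
    /* Exercise exactly the boundaries where conversions tend to break. */
    assert(adc_to_tenths_c(0)    == -500);  /* bottom of range */
    assert(adc_to_tenths_c(4095) ==  1500); /* top of range    */
    assert(adc_to_tenths_c(2048) > -500 &&
           adc_to_tenths_c(2048) <  1500);  /* midpoint sanity */
    return 0;
}
```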

In the end, risk assessment is not a foreign concept. We all make decisions every day that balance the hazards of a situation against their probability. The same principles apply to device development, where common sense and good judgment are the most important factors in developing safe, reliable, and effective devices. ■
