
Five top practices for better test software

Techniques for increasing performance, implementing a process model, developing modules, and easing validation/deployment

BY SANTIAGO DELGADO
National Instruments, Austin, TX
http://www.ni.com

With a history that spans more than two decades, automated test software development has matured enough for developers to identify numerous best practices. Engineers from around the world have become experts and written extensively on different topics that impact test software developers. The following article consolidates these experts’ top five best practices on increasing software performance, implementing a process model, developing test modules, simplifying software deployment, and facilitating software validation.

Increasing performance

To increase performance, developers can optimize multiple components in a test software architecture, such as the test executive engine, the process model, or code modules. Some components provide a better opportunity for improvement than others.

Aaron Gelfand, senior systems engineer, and Daniel Elizalde, product development manager, at VI Technology, found that a quick way to increase performance involved taking advantage of compilation settings. For example, NI LabVIEW software developers run their code using the LabVIEW Run-Time Engine instead of the LabVIEW development environment to reduce execution time. In other programming languages, such as LabWindows/CVI, developers use compiler optimization settings and generate release instead of debug versions of their compiled code to increase performance.
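
To see the same idea in a text-based language, consider the minimal C# sketch below (the class, methods, and values are hypothetical). Calls to a method marked with the Conditional("DEBUG") attribute are stripped from release builds entirely, so diagnostic logging costs nothing in production; building a release configuration (for example, dotnet build -c Release) also enables the compiler's optimizations.

    using System;
    using System.Diagnostics;

    public static class DutTest
    {
        public static double MeasureSupplyVoltage()
        {
            double volts = ReadDmm();                  // placeholder for a real driver call
            LogDebug($"Raw DMM reading: {volts}");     // compiled out of release builds
            return volts;
        }

        // The compiler removes every call to this method when the DEBUG
        // symbol is undefined, so release builds pay no logging cost.
        [Conditional("DEBUG")]
        private static void LogDebug(string message) => Console.WriteLine(message);

        private static double ReadDmm() => 3.3;        // hypothetical measurement
    }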

Some opportunities for performance improvement involve both software and hardware. For example, on initializing a session with a device, some instrument drivers transfer large amounts of data to verify the configuration of the instrument. Gelfand and Elizalde noted that to minimize the performance impact of reinitialization, developers share a single reference to the instrument across multiple tests in a sequence. In addition, test engineers get more use out of their existing instruments by implementing parallel testing. By testing multiple units in parallel, instruments that would otherwise remain idle perform measurements and reduce the average test time of each unit.
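
As a rough C# sketch of both ideas, assuming a hypothetical Dmm driver wrapper: the station opens one instrument session at startup and shares it across test sockets that run in parallel, with a lock serializing access to the shared instrument.

    using System;
    using System.Threading.Tasks;

    public class ParallelTestStation
    {
        // Opened once at station startup and shared by every test,
        // instead of reinitializing the instrument per measurement.
        private readonly Dmm _dmm = new Dmm("GPIB0::22::INSTR");
        private readonly object _dmmLock = new object();

        public void TestBatch(string[] uutSerials)
        {
            // Test all sockets in parallel so the instrument stays busy
            // while individual UUTs wait on setup or cooldown steps.
            Parallel.ForEach(uutSerials, serial =>
            {
                double volts;
                lock (_dmmLock)        // serialize access to the shared session
                {
                    volts = _dmm.Read();
                }
                Console.WriteLine($"{serial}: {volts:F3} V");
            });
        }
    }

    // Hypothetical stand-in for a real instrument driver session.
    public class Dmm
    {
        public Dmm(string resourceName) { /* open the session once here */ }
        public double Read() => 3.3;
    }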

Using a process model

To increase software reusability, developers must abstract tasks that are common in multiple cases. Mathieu Daigle, a software engineer at Averna, has seen how test software tasks such as logging results and generating reports are common across all units under test (UUTs). Instead of implementing these tasks for each UUT in a sequence, developers abstract them into one common process model.

In a perfect world, process model functionality would not need any customization, but in practice, some UUTs require custom functionality. Daigle proposes callbacks as a way to customize process models based on different test sequences.

How a callback is implemented in the process model depends on the functionality it provides. If a callback should not execute by default, the developer omits its implementation from the process model, and a UUT sequence overrides the callback only when that UUT needs the functionality. In contrast, if the callback needs default functionality but has a high probability of customization, the developer should implement the default behavior in the process model and allow a UUT sequence to override it.
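
In a text-based language, the pattern might look like the following C# sketch (the class and method names are illustrative). The process model declares both kinds of callback as virtual methods, and a UUT sequence overrides only the ones it needs.

    using System;

    public class ProcessModel
    {
        public void RunUut(string serial)
        {
            PreUut(serial);
            // ... execute the UUT's test sequence here ...
            LogResult(serial, passed: true);
        }

        // No default behavior: the model deliberately does nothing, and
        // a UUT sequence overrides this only when it needs extra setup.
        protected virtual void PreUut(string serial) { }

        // Default behavior with a high probability of customization:
        // implemented in the model, replaceable by a UUT sequence.
        protected virtual void LogResult(string serial, bool passed) =>
            Console.WriteLine($"{serial}: {(passed ? "PASS" : "FAIL")}");
    }

    // A UUT that needs a custom report format overrides only LogResult.
    public class RfModuleSequence : ProcessModel
    {
        protected override void LogResult(string serial, bool passed) =>
            Console.WriteLine($"<result uut=\"{serial}\" status=\"{passed}\" />");
    }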

Module reuse, maintenance

The less code a test engineer needs to write, the better. Whenever possible, developers should create their tests as reusable modules to reduce development time. Ray Farmer, software consultant at Nomad Technical Services, recommends leaving certain functionality to the test executive to increase reusability. Test modules should initialize and communicate with instrumentation, perform analysis, and capture measurement values. Test executives, on the other hand, should sequence test modules, evaluate measurements against limits, and report results. By limiting test modules to measurement and analysis, developers can reuse the modules in different types of tests (see Fig. 1).

Fig. 1. Implementing certain functionality in test modules and test executives increases code reusability.
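
A minimal C# sketch of this division of labor, using hypothetical names: the module returns a raw measurement and never decides pass or fail, while the executive owns the limit, the verdict, and the report. Because the limit lives in the executive, the same module can serve design verification and production tests with different limits.

    using System;

    // Test module: instrument communication and analysis only.
    public static class RippleTest
    {
        public static double MeasureRippleMv() => 4.2;   // placeholder for driver calls and analysis
    }

    // Test executive: sequencing, limit evaluation, and reporting.
    public static class Executive
    {
        public static void Main()
        {
            double ripple = RippleTest.MeasureRippleMv();
            bool pass = ripple <= 5.0;   // the limit belongs to the executive, not the module
            Console.WriteLine($"Ripple: {ripple} mV -> {(pass ? "PASS" : "FAIL")}");
        }
    }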

A second way to increase the reusability and maintainability of test modules is to share instrument references across multiple tests. Instead of opening and closing references to instruments in every test, developers should open a reference once, save it as a global resource in the test executive, and use the resource in multiple steps. Because references are opened and closed only once, individual code modules do not need to implement and maintain this functionality.
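
One way to express this in C#, again with illustrative names: the executive registers each session once in a global resource table, and individual steps look sessions up rather than opening their own.

    using System;
    using System.Collections.Generic;

    // Resource table the test executive fills once at startup.
    public static class StationGlobals
    {
        private static readonly Dictionary<string, object> Sessions =
            new Dictionary<string, object>();

        public static void Register(string name, object session) =>
            Sessions[name] = session;

        public static T Get<T>(string name) => (T)Sessions[name];
    }

    // Usage: the executive opens a reference once at startup...
    //     StationGlobals.Register("dmm", OpenDmmSession());
    // ...and every step reuses it without any open/close logic of its own:
    //     var dmm = StationGlobals.Get<IDmmSession>("dmm");
    // (OpenDmmSession and IDmmSession are hypothetical driver types.)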

Facilitating deployment

The first step to deploying test software is to understand and collect the software components that make up the test system. Roberto Piacentini and Hjalmar Perez, from the National Instruments Test Frameworks group, found that test software components are best organized into five main categories: test code, common components such as process models, configuration files, user interfaces, and engines or drivers (see Fig. 2).

Fig. 2. Most automated test system software is made up of five main components.

Documenting files under these categories helps developers better understand the requirements for correct deployment of each file. For example, while developers must install drivers and engines on each production machine, they can copy code modules and process model files to a folder on the production machine or a shared network drive.

Development systems sometimes hide code dependencies that need to be replicated in production systems, causing run-time issues during deployment. For example, LabVIEW VIs commonly rely on subVIs in the vi.lib folder, which is not evident until developers deploy the system to production and discover that some of the system's VIs are missing. Other languages, such as C#, rely on runtimes, which developers must also deploy. An effective deployment strategy exposes and packages all file dependencies as part of the test system's deployment distribution.
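
In .NET, for example, reflection offers a starting point for exposing these dependencies. The sketch below lists the assemblies the running executable directly references; transitive dependencies would still need to be walked, and native driver DLLs tracked separately.

    using System;
    using System.Reflection;

    public static class DependencyLister
    {
        public static void Main()
        {
            // Enumerate the assemblies this executable was compiled against,
            // as a first pass at a deployment packing list.
            Assembly app = Assembly.GetExecutingAssembly();
            foreach (AssemblyName dep in app.GetReferencedAssemblies())
                Console.WriteLine($"{dep.Name}, version {dep.Version}");
        }
    }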

Easing validation

Validation can be time-consuming for test software in highly regulated industries such as medical device manufacturing and aerospace. The first step in reducing the burden is to obtain a clear and exhaustive list of requirements. Unfortunately, thorough requirements for test systems are rare in practice and, in many cases, must be elicited by the test engineer.

Joe Spinozzi, director of operations at Cyth Systems, found that a holistic approach to eliciting requirements was the best method for generating requirements documents. This process involves understanding the functionality of previous systems and opportunities for improvement, as well as the needs of the test system's users, such as design engineers, test engineers, and operators, through in-depth interviews.

After engineers complete test system development, validating changes proves to be as challenging as validating the original system. To reduce the effort of validating a change in a system's software component, developers should decrease the interaction between software components as much as possible. Code modules should be as simple as possible and focus on instrument control and measurement. By implementing code modules with these design principles, a change requires revalidating only the module, not the whole system.
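
As a closing C# sketch (the names are hypothetical), a narrow interface keeps the executive's only interaction with a module to a single measurement call, so a change inside the module's implementation stays inside the module.

    using System;

    // The narrow contract between executive and module: one measurement
    // call, no limits, no reporting, no shared state.
    public interface IMeasurement
    {
        double Measure();
    }

    public class CurrentDrawTest : IMeasurement
    {
        // Instrument control and measurement only; changes here stay here.
        public double Measure() => 0.120;   // placeholder for driver calls
    }

    public static class Station
    {
        public static void Main()
        {
            IMeasurement test = new CurrentDrawTest();
            Console.WriteLine($"Current draw: {test.Measure():F3} A");
        }
    }

Swapping in a new CurrentDrawTest implementation leaves the executive untouched, so only the module needs revalidation. ■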
