
In-the-loop testing aids embedded system validation

Software complexity demands early system analysis via Model-Based Design

BY GORAN BEGIC
The MathWorks, Natick, MA
http://www.mathworks.com

Embedded systems software can be a significant competitive advantage. In a vehicle, software can make an already comfortable ride even more appealing than that of a competing product; it can also reduce cabin noise or cut fuel consumption. Some of this functionality could be implemented in hardware instead, but that would increase manufacturing cost and product price. Software also enables reuse and can be updated more frequently to satisfy user needs.

These and many other incentives drive the complexity of embedded software applications. Complex embedded software functions are difficult to test and verify, and the verification and validation overhead, the cost of fixing defects, and the consequences of defects that are not detected in time threaten to wipe out all the benefits that software features provide.

The embedded software industry has recognized and embraced development based on graphical models as a way to deal with this increased complexity. Coupled with simulation, graphical models of product functionality are an opportunity to improve verification and validation processes.

Developing and testing with models

Modeling is a step in design and development that happens after collecting high-level requirements and before any implementation. Models also allow testing and verification to be done continuously, in parallel with system design and implementation.


Fig. 1. Verification with models overview

In the early design stages, one can develop purely behavioral models to clarify and define detailed low-level requirements. Such models may capture the basic architecture of the solution, but they are independent of the target platform. These target-independent behavioral models are used for design verification and early requirements validation. A model used to capture key requirements, demonstrate correct behavior in simulation, and establish traceability to high-level requirements is often referred to as an executable specification.

Further development of the executable specification and the addition of implementation details lead to a model that represents the final implementation. Such a model is often optimized for code generation: it honors the data types, the target architecture, and even the required coding style. Each change requires a verification process that ensures the modification introduced in the model for production code generation did not alter its behavior. In this article, we refer to the testing that establishes correct behavior of the implementation model and of the generated code as code verification. While the article focuses on software, the same basic concept applies to hardware implementations.

Distributing the verification effort between design verification and code verification allows an early start to the verification process, more focused testing, and a shorter time to fix the problems detected during testing. In the remainder of the article, we discuss testing methods for design verification and code verification:

• Model testing in simulation

• Software-in-the-loop testing

• Processor-in-the-loop testing

• Hardware-in-the-loop testing

Model testing

Unlike “static” paper designs, an executable specification can be evaluated in simulation. Typically this is done by varying model parameters or input signals and reviewing the outputs, or responses, that the model produces in simulation. The behavior of the software functionality captured in simulation must meet the expectations specified in the requirements.

Signal inputs and model outputs are represented as time-series data (sequences of data points in time). Typically such test vectors are derived from requirements, but test data can also come from measurements taken on existing systems, or from a model of the physical system that the embedded software interacts with.
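
For illustration, a test vector can be represented in code as an ordered sequence of time-stamped samples. The following C sketch is hypothetical and not tied to any particular tool; the signal, requirement, and values are invented:

    #include <stdio.h>

    /* One sample of a time series: a time stamp and a signal value. */
    typedef struct {
        double t;      /* time in seconds        */
        double value;  /* signal value at time t */
    } Sample;

    int main(void) {
        /* Hypothetical test vector derived from a requirement such as
           "set the target speed to 30 m/s at t = 1 s". */
        const Sample set_speed[] = {
            {  0.0,  0.0 },
            {  1.0, 30.0 },
            { 10.0, 30.0 },
        };
        for (size_t i = 0; i < sizeof set_speed / sizeof set_speed[0]; ++i)
            printf("t=%5.2f s  value=%6.2f\n",
                   set_speed[i].t, set_speed[i].value);
        return 0;
    }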

Testing methods where the inputs of the software model are created with the help of other models are sometimes referred to as “model-in-the-loop” (MIL) testing. The term “in the loop” comes from control-system applications, where there is a feedback loop between the embedded controller model and the model of the physical system it controls. An example would be a model of cruise-control software that controls a model of the vehicle engine. Such a system can also include a model of the vehicle dynamics, the test track, and so on. Test data in this case could be a series of test vectors representing the driver’s input in the cockpit of this virtual vehicle.
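
To make the feedback loop concrete, the following C sketch co-simulates a hypothetical proportional cruise-control law against a crude first-order vehicle model. All names, gains, and dynamics are invented placeholders, not the behavior of any real product:

    #include <stdio.h>

    #define DT 0.01 /* simulation step in seconds */

    /* Hypothetical plant: first-order vehicle speed model with drag. */
    static double vehicle_step(double speed, double throttle) {
        double accel = 2.0 * throttle - 0.1 * speed; /* toy dynamics */
        return speed + accel * DT;
    }

    /* Hypothetical controller model: proportional cruise control. */
    static double controller_step(double set_speed, double speed) {
        double throttle = 0.5 * (set_speed - speed);
        if (throttle < 0.0) throttle = 0.0; /* no braking via throttle */
        if (throttle > 1.0) throttle = 1.0; /* actuator saturation     */
        return throttle;
    }

    int main(void) {
        double speed = 0.0, set_speed = 30.0;
        /* Controller and plant run "in the loop": each model's output
           becomes the other model's input on the next step. */
        for (int k = 0; k < 3000; ++k) {
            double throttle = controller_step(set_speed, speed);
            speed = vehicle_step(speed, throttle);
            if (k % 500 == 0)
                printf("t=%5.2f s  speed=%6.2f m/s\n", k * DT, speed);
        }
        return 0;
    }

In a real MIL setup the plant, driver, and environment models would be far richer, but the structure is the same: the controller is exercised by models rather than by physical hardware.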

A model that captures not just the functional design of the software, but also the environment around the controller, links to high-level requirements and other documentation, test vectors, and expected results becomes an “executable specification.” The results of model testing in simulation are used to verify that the software behavior is correct and to validate the requirements that served as the starting point of development. The information collected through simulation becomes the benchmark for code verification.

Software-in-the-Loop

Testing the software algorithm, where functions of generated or handwritten code are evaluated in co-simulation on the host machine, is called “software-in-the-loop” (SIL) testing. As in model testing in simulation, input test vectors can come from requirements or from other models in the executable specification. SIL tests typically reuse the test data and model infrastructure used for model testing in simulation.

This type of verification is particularly useful when software components consist of a mix of generated code (for example, updates to meet new requirements) and handwritten code (for example, existing drivers and data adapters) that may be necessary for execution on the target platform.
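
As a minimal sketch of the idea, a SIL harness can replay an input vector through the compiled step function on the host and compare its outputs with results recorded from model simulation. Here generated_step() is a hypothetical stand-in for a code generator's output, and the tolerance is an invented project choice:

    #include <math.h>
    #include <stdio.h>

    /* Hypothetical stand-in for a generated step function under test;
       in a real project this would come from the code generator. */
    static double generated_step(double input) {
        return 0.5 * input; /* placeholder behavior */
    }

    int main(void) {
        /* Inputs and expected outputs recorded from model simulation. */
        const double inputs[]   = { 0.0, 10.0, 20.0, 30.0 };
        const double expected[] = { 0.0,  5.0, 10.0, 15.0 };
        const double tol = 1e-9; /* equivalence tolerance */
        int failures = 0;

        for (int i = 0; i < 4; ++i) {
            double y = generated_step(inputs[i]);
            if (fabs(y - expected[i]) > tol) {
                printf("FAIL at sample %d: got %g, expected %g\n",
                       i, y, expected[i]);
                ++failures;
            }
        }
        printf(failures ? "SIL check failed\n" : "SIL check passed\n");
        return failures;
    }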


Fig. 2. Platform-independent testing in simulation

SIL testing is also used to verify the re-implementation of existing algorithms as graphical models. Some legacy code, while correct, may be difficult and expensive to maintain, and it can make sense to re-implement and verify it in a graphical environment. In this case, models and simulation serve as the test framework for comparing the outputs of the new model implementation against those of the legacy code.

Processor-in-the-Loop

A good starting point for verifying compiled object code on the target platform is to ensure functional equivalence between the code running on the target processor and the model behavior captured in simulation.

Conceptually, processor-in-the-loop (PIL) testing is similar to SIL. The key difference is that during PIL, the code executes on the target processor or on an instruction-set simulator, and the data passed between the harness model and the deployed object code travels over real I/O such as CAN or serial devices. The model harness used for model testing and SIL is reused as the test execution framework for the processor board: tests run as a co-simulation between the existing executable specification and the embedded algorithm compiled and deployed on the target processor board.
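
A host-side PIL harness commonly streams each input sample to the board and reads back the computed output for comparison against the simulation benchmark. The POSIX C sketch below assumes a hypothetical serial device path and a trivial one-value-in, one-value-out framing with matching byte order on both ends; real PIL tooling automates this exchange:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Hypothetical serial device connected to the target board. */
        int fd = open("/dev/ttyUSB0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        const double inputs[] = { 0.0, 10.0, 20.0, 30.0 };
        for (int i = 0; i < 4; ++i) {
            double out = 0.0;
            /* Send one input sample; the target runs one step of the
               deployed object code and replies with its output. */
            if (write(fd, &inputs[i], sizeof inputs[i]) != (ssize_t)sizeof inputs[i] ||
                read(fd, &out, sizeof out) != (ssize_t)sizeof out) {
                perror("serial I/O");
                break;
            }
            printf("step %d: in=%g  out=%g\n", i, inputs[i], out);
        }
        close(fd);
        return 0;
    }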


Fig. 3. Platform-dependent testing

Besides tests for functional equivalence, PIL also lets us step through the assembly-level instructions in a debugger and analyze code that was compiled, linked, and deployed just as it will run on the real system. With PIL we can also review the order in which code functions execute, verify calls to OS functions or other libraries required for execution on the target, and monitor the memory footprint during verification scenarios. In some projects, PIL is an opportunity to compare algorithm behavior on processor boards that meet the same spec but come from different vendors.

Hardware-in-the-Loop

None of the methods mentioned so far can verify the real-time aspects of the design: the overhead of simulation and of communication with the target board does not allow real-time testing of the algorithm.
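
Operationally, “real time” means that each frame of the test must complete within a fixed wall-clock deadline. The POSIX C sketch below shows only the pacing skeleton of such a fixed-step executive; the I/O and plant-model steps are placeholder comments, and the 1-ms rate is an invented example:

    #define _POSIX_C_SOURCE 200112L
    #include <stdio.h>
    #include <time.h>

    #define STEP_NS 1000000L /* 1-ms frame, i.e., a 1-kHz update rate */

    int main(void) {
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);

        for (int k = 0; k < 1000; ++k) {
            /* 1. Read controller outputs from the I/O hardware (placeholder). */
            /* 2. Advance the plant model by one fixed step (placeholder).     */
            /* 3. Drive the controller's inputs via signal conditioning.       */

            /* Sleep until the next 1-ms boundary so every frame takes the
               same wall-clock time, a guarantee that simulation on a host
               machine cannot provide. */
            next.tv_nsec += STEP_NS;
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec += 1;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
        puts("HIL frame loop finished");
        return 0;
    }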

To reuse the test data created with the methods described earlier, and to continue using their results as a guideline and benchmark for real-time testing, we can generate code for the environment model that surrounds the software model and deploy it on a real-time target. Such a hardware-in-the-loop (HIL) configuration reduces the risk of testing on the actual, and often expensive, device (when one is available). This type of verification requires sophisticated signal conditioning and power electronics to properly stimulate the inputs and receive the outputs of the target hardware. HIL testing is typically done in the lab as a final test before system integration and field testing.

Each of the described in-the-loop testing methods addresses a specific group of problems that occur in the development of embedded systems, and each brings its own set of benefits. The extent of verification and validation rigor, and the application of the described methods, may vary from project to project. ■
