

Choosing among test automation solutions

An increasingly important part of the design process, test automation
offers many choices to the designer

BY VIC KULKARNI, CrossCheck Technology, Inc., San Jose, CA

Today's electronic systems are typically two to three orders of magnitude
greater in performance and functionality than systems of a decade ago. In
addition, quality standards continue to rise along with the demand to get
products to market faster. As a result, test must be considered throughout
the design process, especially for complex application-specific ICs, and
not as a point solution or an afterthought (see Fig. 1). System designers
have many test development and diagnostics alternatives from which to
choose. However, while any solution might appear to offer the right sort
of features and functionality to solve the test problem, designers must
choose carefully. Since test development can occupy as much as 50% of the
total design cycle, the wrong choice could adversely impact a product's
time-to-market.

Test: then and now

Traditionally, designers have used whichever test methodology they felt would provide the desired test coverage. With such ad hoc testability methods, designers determined a specific test and diagnostic strategy for a given circuit as the need arose. Partitioning has been one of the most popular ad hoc testing methods. It facilitates testing by disconnecting one portion of a network from another. While this technique may have been effective at one time, it is inappropriate for today's large, complex designs.

Major breakthroughs in test methodologies have come in the form of new test solutions that automate the way in which large ASICs and systems are tested. One of the most important developments was software that automatically generates test vectors and patterns and uses them to thoroughly test and analyze a system for defects. However, even with these tools, obtaining adequate fault coverage without compromising quality is difficult within normal design-time constraints. In addition, automatic test program/pattern generation (ATPG) tools require that design-for-test (DFT) structures be added to the design to increase the controllability and observability of deeply buried logic.

In many high-volume applications, where silicon area is at a premium, designers do not want to add DFT structures, yet still demand high-quality test coverage. For these requirements, new techniques continue to emerge, such as quiescent-current (IDDQ) testing, which targets cost-sensitive designs.

Choosing a solution

No single solution exists for all test needs–that is, one that can optimize multiple test requirements simultaneously. Hence, the designer must decide what is important in the system design, the end application, and the customer's environment before starting the project. In addition, the designer must decide how important it is for the test solution to be compatible with established design flows, and how it works with other EDA tools and industry standards such as Verilog, EDIF, VHDL, and the IEEE 1149.1 boundary-scan standard. Typically, a designer must consider several issues:

* How important is design transparency–that is, automation of test generation without modifying the netlist?
* Is the designer creating a synchronous or an asynchronous design?
* How will the test solution impact:
  – Circuit performance
  – Silicon area and cost
  – Test program time per unit on the manufacturing floor?
* What is the desired quality of the design after test in terms of manufacturing-defect coverage?
* Does the test methodology support diverse design styles, such as synthesis, schematic capture, and the presence of RAMs or embedded functions (megacells)?

Test alternatives

Designers will find that, despite the breadth of products offered by vendors to meet their test-automation requirements, all test solutions fall into five main categories:

1. Scan. Partitions the chip into combinational blocks and storage elements. Ideal for synchronous designs. Test solution suppliers now offer products that fall into the category of full- and partial-scan software.
2. IDDQ. The measurement of quiescent current to detect defects. Most available IDDQ products require DFT structures, although non-DFT-based products do exist.
3. Embedded test. Designer-transparent ATPG and diagnostics for rapid time-to-market gate arrays. This solution employs on-chip test electronics incorporated into the base arrays provided by the IC supplier.
4. Built-in self-test (BIST). Test-pattern generation and test-compression circuitry are built onto the chip.
5. Boundary scan. Isolates individual ICs for in-situ test, and tests board-level interconnects.

Scan-based ATPG

Scan is a test automation solution for synchronous designs. It is a proven DFT technique that provides high fault coverage, automated test-pattern generation, support for failure diagnosis, and design flexibility. Scan design partitions the circuit into combinational blocks and storage elements (see Fig. 2). The storage elements are designed so they can be configured to form one or more serial shift registers (or scan chains) during test mode. This technique involves adding scan capability to each storage element and connecting the chain appropriately. The scannable storage elements function as serial shift registers during the scan test operation; during normal functional mode, they operate as regular flip-flops or latches.

The partitioning process creates a circuit with increased observability and controllability of the internal logic. To detect a stuck-at fault, it must be possible to sensitize the fault and propagate its effects to a scan element or a primary output pin. To ensure high fault coverage and reliable scan test operation, designers must ensure synchronous operation, avoid unstable circuit states, and verify correct operation of the scan chains during testing.

A full-scan approach is the best method for achieving 100% stuck-at fault coverage in minimum time, and is typically employed for testing large, complex designs. However, designers may choose to trade off silicon area against fault coverage using a partial-scan approach. ATPG tools that offer partial-scan capability include sequential algorithms that generate tests to propagate through non-scan logic. Scan ATPG products usually interface with existing design systems through standards such as EDIF, Verilog, and VHDL. In addition, they include test synthesis tools that insert scan circuitry into existing designs to assist in the migration toward a fully automated test solution.
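The shift-capture-shift cycle described above can be sketched in software. The following is a minimal, illustrative model only (class and method names are hypothetical, not from any real ATPG tool): a pattern is shifted serially into the chain for controllability, one functional clock captures the combinational response, and the response is shifted back out for observability.

```python
# Minimal sketch of a scan-chain test cycle (illustrative model only;
# all names here are hypothetical, not from any real scan tool).

class ScanFlipFlop:
    """A storage element that behaves as a scan cell in test mode."""
    def __init__(self):
        self.q = 0

class ScanChain:
    """Storage elements stitched into one serial shift register in test mode."""
    def __init__(self, n):
        self.ffs = [ScanFlipFlop() for _ in range(n)]

    def shift_in(self, pattern):
        # In test mode, each clock shifts one bit deeper into the chain.
        for bit in pattern:
            for i in range(len(self.ffs) - 1, 0, -1):
                self.ffs[i].q = self.ffs[i - 1].q
            self.ffs[0].q = bit

    def capture(self, comb_logic):
        # One functional clock: the flip-flops capture the combinational outputs.
        new_state = comb_logic([ff.q for ff in self.ffs])
        for ff, v in zip(self.ffs, new_state):
            ff.q = v

    def shift_out(self):
        # Shift the captured response out for comparison with the expected value.
        out = []
        for _ in self.ffs:
            out.append(self.ffs[-1].q)
            for i in range(len(self.ffs) - 1, 0, -1):
                self.ffs[i].q = self.ffs[i - 1].q
        return out

# Example: a 3-bit chain feeding a toy combinational block (bitwise inversion).
chain = ScanChain(3)
chain.shift_in([1, 0, 1])                    # controllability: set internal state
chain.capture(lambda s: [1 - b for b in s])  # one functional clock
response = chain.shift_out()                 # observability: read the response
```

Comparing `response` against the fault-free expected value is what the ATPG-generated test program does for each pattern; a mismatch localizes a fault to the logic feeding the differing scan cells.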

Quiescent-current testing

Recently, IDDQ, or quiescent-current, testing has emerged as a viable supplement to functional testing. It allows users to increase the quality of their functional vector set because it can uncover many defects that are undetectable using stuck-at methods. In IDDQ testing, designers apply a series of input stimuli to the CMOS device under test while monitoring the current that passes through the device's VDD supply terminal. Because defects often appear as abnormal supply currents, many kinds of manufacturing defects can be detected with this method. Examples of defects that cause a quiescent current to flow in a circuit are shown in Fig. 3.

IDDQ is an ideal test methodology for cost-sensitive ICs. It can be used with other test methodologies, such as scan, or as a standalone application. Used standalone, it is not limited to synchronous designs or to stuck-at-only fault models, and it is ideal for designs without DFT structures such as scan cells or embedded-grid technology. Typical applications are high-volume parts where silicon area is at a premium. Specific benefits of IDDQ testing include detection of more defect types–for example, metal bridging, gate-oxide shorts, and open polysilicon–than stuck-at testing; few design-style restrictions; no area, power, or performance overhead; and higher defect coverage without DFT overhead.

In a defect-free MOS transistor, no current should flow between gate and source, gate and drain, or gate and bulk. A defect that causes IDDQ to exceed an empirically determined limit is identified as a leakage fault. The logic state of the circuit under test determines whether defects are detected. In a conventional stuck-at fault simulation approach, a defect can be detected only if it can be propagated to, and observed at, the primary outputs. With the IDDQ approach, observation is at the power supply terminal, which in turn is connected to each of the macrocells in the circuit. Thus, an increase in IDDQ indicates the presence of a defect in a macrocell for a given test vector. This allows rapid fault simulation and grading of the selected vectors, since only fault sensitization, and not propagation, is necessary.

CrossCheck's CurrenTest is an example of IDDQ test development software that does not use DFT, thus imposing no area, performance, or power penalty (see Fig. 4). The software automatically analyzes user-provided functional vectors for invalid circuit states that may cause IDDQ to flow, then selects a few of the remaining valid vectors for IDDQ measurements. The selected vectors are then fault-graded using transistor- and net-level defect models.
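The pass/fail decision at the heart of IDDQ testing can be sketched in a few lines. This is a hedged illustration, not any vendor's implementation: the limit value and all names are assumptions, and real testers measure current with precision instrumentation rather than receiving a list of floats.

```python
# Sketch of an IDDQ screening decision (all names and values hypothetical).
# A device fails if its quiescent current exceeds an empirically determined
# limit on any of the selected measurement vectors.

IDDQ_LIMIT_UA = 5.0   # assumed empirical limit, in microamps

def screen_device(measurements_ua):
    """Return the indices of vectors whose measured IDDQ exceeds the limit."""
    return [i for i, iddq in enumerate(measurements_ua) if iddq > IDDQ_LIMIT_UA]

# Example: measured quiescent currents for four selected vectors on one device.
measurements = [0.8, 1.1, 42.0, 0.9]   # vector 2 shows a leakage signature
failing = screen_device(measurements)  # nonempty list means the device is rejected
```

Note that only sensitization is needed: the defective vector is flagged by the supply current alone, with no requirement that the defect's effect reach an output pin.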

Embedded test

Embedded-test approaches render the test-development task transparent to the designer. A typical embedded-test approach incorporates massive observability via thousands of test points in the base array. This eliminates the need to propagate the effects of faults to primary outputs, reducing vector length and test time. ATPG is effected through controllability elements provided in the ASIC manufacturer's library. The elements are addressed and loaded using a grid embedded in the base array, and are thus quasi-independently controllable, allowing ATPG for asynchronous as well as synchronous circuits.

With embedded-test technology, the ASIC manufacturer undertakes test development; no DFT or other netlist modifications are required. The technology also has negligible impact on circuit performance, area impact similar to scan, detection of real manufacturing defects, and automated debug. With embedded test, the designer completes the design using the ASIC vendor library and a conventional design flow. Other functions, such as simulation, timing analysis, and logic partitioning, are completed in the usual manner. Upon completion, the designer hands the netlist to the ASIC vendor, which then places and routes the design netlist and generates a test program.

During test generation, the circuit is placed in a predetermined state by applying test patterns through the primary inputs of the design and controlling the internal storage elements. Probe lines are then activated in sequence by the test controller. An activated probe line turns on a row of test transistors, allowing transfer of the internal logic values from the macrocells onto the sense lines. The values are then received by the data register and compressed to form a signature, which is brought out to the outside world through a test-data-out pin.

In the control mode, the CCL structures provide write access to all of the internal storage elements of the design. The test transistors, which are employed to observe node values, are also used to inject values back into the circuit nodes during the write mode. Thus the output of each CCL flip-flop can be controlled to the desired value of 0 or 1, independent of the nature of the design. Since all gate outputs can be observed, fault propagation is not necessary, and the embedded-grid approach reduces the problem of sequential test generation to one of controllability. The CCL further reduces the sequential controllability problem to a combinational one, since all internal flip-flops can be controlled. Combinational test-generation algorithms are then applied, and the task of test development becomes transparent to the designer.
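The signature compression step described above can be sketched with a serial signature analyzer: the stream of observed node values is folded into a linear-feedback shift register (LFSR), and the final register state is the signature. The register width and feedback taps below are illustrative choices, not taken from the article or from any CrossCheck product.

```python
# Sketch of signature compression: observed node values are folded into an
# LFSR-based signature register. Width and tap positions are assumed, purely
# illustrative choices.

def compress_signature(bits, width=8, taps=(7, 5, 4, 3)):
    """Fold a bit stream into an LFSR signature (serial signature analysis)."""
    state = [0] * width
    for b in bits:
        feedback = b
        for t in taps:
            feedback ^= state[t]      # XOR the tapped stages into the input bit
        state = [feedback] + state[:-1]   # shift the feedback bit into the register
    return state

# A single faulty node value yields a different signature than the fault-free run.
good = compress_signature([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
bad  = compress_signature([1, 0, 1, 1, 1, 0, 1, 0, 1, 1])  # one flipped bit
```

Because the compression is linear, a single-bit difference in the observed stream always produces a different final state here, which is why comparing one compact signature against a golden value can stand in for comparing every observed node value individually.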

Built-in self-test (BIST)

Even when scan-based DFT or embedded-test techniques are used, test generation can still be time consuming for very complex circuits. In addition, expensive automated test equipment (ATE) is required to apply the test patterns created by these approaches. This has led to the increasing use of built-in self-test (BIST), in which test-pattern generation and test-response compression circuitry is built into the chip so the IC can test itself. BIST has been shown to be a cost-effective test solution for ASICs when life-cycle costs are considered. BIST is usually used in circuits requiring at-speed self-test within the system environment. It is also used extensively for testing embedded megafunctions such as RAM and ROM, which are generally excluded from the realm of other DFT methods.

In the most well-known BIST approach, registers in the circuit are implemented as built-in logic-block observers: linear-feedback shift registers (LFSRs) that can be used both as on-chip pseudo-random test-pattern generators and as response compressors. However, these carry a silicon area overhead of 25% to 30% or more. Centralized BIST approaches are based on the scan-path concept and use a global LFSR to generate pseudo-random vectors that are applied to the core logic via either an internal or a boundary-scan path. The responses of the core logic are captured in the scan path and compressed in another global LFSR. The fault coverage achieved tends to be low unless other DFT techniques are also used. In the circular BIST approach, area overhead is minimized by employing a single LFSR, which both generates and compresses patterns, along with an internal scan chain. To keep performance and area overhead to a minimum, not all internal registers are made scannable; thus, the fault coverage tends to be lower than expected.
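The pseudo-random pattern generation side of BIST can be illustrated with a small LFSR. The sketch below models a 4-bit maximal-length Fibonacci LFSR (the width and feedback polynomial, x^4 + x^3 + 1, are assumed for illustration); such a register cycles through all 15 nonzero states, supplying the core logic with pseudo-random patterns and no ATE involvement.

```python
# Sketch of on-chip pseudo-random pattern generation for BIST.
# A 4-bit maximal-length Fibonacci LFSR (polynomial x^4 + x^3 + 1, an assumed
# illustrative choice) cycles through every nonzero 4-bit state.

def lfsr_patterns(seed=0b1000, count=15):
    """Generate `count` 4-bit pseudo-random patterns from a Fibonacci LFSR."""
    state = seed
    patterns = []
    for _ in range(count):
        patterns.append(state)
        fb = ((state >> 3) ^ (state >> 2)) & 1   # XOR of taps at bits 3 and 2
        state = ((state << 1) | fb) & 0b1111     # shift left, feedback into bit 0
    return patterns

pats = lfsr_patterns()
# A maximal-length 4-bit LFSR visits all 15 nonzero states exactly once.
```

In a built-in logic-block observer, this same register structure doubles as a response compressor, which is what makes the approach attractive despite its area overhead.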

Boundary scan

With boundary scan, a ring of scan cells is placed around the periphery of a chip, near the chip's I/O drivers, to make the chip's I/O signals both drivable and observable. This boundary-scan ring (see Fig. 5) allows the chip to be tested when mounted on the pc board using the same ATPG patterns that were used for wafer-probe chip test. It also allows the chip's internal logic to be disconnected from the chip I/O so the board wiring can be tested by driving data into the boundary scan on one chip and observing the results with the boundary scan on all receiving chips.

Boundary scan is an ideal solution for testing circuits at the board level. For this method to be truly effective, a large percentage of the components on the board must have boundary-scan schemes. Several manufacturers have developed their own boundary-scan and test-access methods for chips; most comply with the IEEE 1149.1 standard. The standard defines a four- or five-signal test access port, requirements for boundary scan, an instruction register with a standard subset of instructions, a bypass register, and an optional device-identification register (see Fig. 6). It also provides for optional user-defined scannable registers. These registers, plus the instruction register, constitute a convenient means of BIST implementation and control.
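The board-wiring test described above reduces to a simple comparison once the boundary-scan plumbing is in place: drive a known pattern from one chip's output cells, capture at the receiving chips' input cells, and flag any net where the two disagree. The sketch below illustrates that comparison; the chip and net names are hypothetical.

```python
# Sketch of a board-level interconnect test via boundary scan.
# Net names and values are hypothetical, for illustration only.

def interconnect_test(driven, observed):
    """Compare driven vs. observed values per net; return the mismatching nets."""
    return [net for net in driven if observed.get(net) != driven[net]]

# Driver chip shifts this pattern into its boundary-scan output cells:
driven = {"net_A": 1, "net_B": 0, "net_C": 1}
# Receiver chip captures these values in its boundary-scan input cells:
observed = {"net_A": 1, "net_B": 0, "net_C": 0}   # net_C reads back low
faulty_nets = interconnect_test(driven, observed)  # nets with opens or shorts
```

In practice, a sequence of such patterns (e.g., walking ones) is used so that shorts between nets, as well as opens, produce a distinguishing mismatch.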


