
Chapter 11

Overview of Test Bench Requirements

 

The descriptions presented so far in this manual have assumed that the coverage tool is used to analyze a single results file produced by a single test bench. This suggests a very neat, tidy and well-controlled environment. In reality things are very different, with perhaps hundreds or thousands of test benches being used to exercise different parts of the total design, which means that a verification engineer is faced with managing a huge volume of data.

This chapter therefore concentrates on explaining how a verification engineer can identify the most productive and useful test benches to include in the various test suites.

Basic Test Bench Construction

In its most basic form a test bench is simply a mechanism that generates a certain stimulus and applies it to a design or DUT (device under test). The response produced by the device under test is subsequently checked or measured against some pre-defined criteria, and a decision is made as to whether that particular test was successful or not. In essence, then, a test bench contains a stimulus generator, which is connected to the device under test, and a response analyzer that can take various forms, as described later in this chapter. Figure 11-1 shows a typical test bench incorporating a device under test, stimulus generator and results analyzer.

Figure 11-1
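
As a minimal sketch of this structure, a Verilog test bench might take the following form. The DUT name `counter' and its ports are hypothetical examples, not taken from this manual.

    // Minimal test bench skeleton: stimulus generator, DUT and checker.
    module testbench;
      reg        clk, rst;
      wire [3:0] count;

      // Device under test (hypothetical module name and ports)
      counter dut (.clk(clk), .rst(rst), .count(count));

      // Stimulus generator
      initial begin
        clk = 0; rst = 1;
        #20  rst = 0;          // release reset after 20 time units
        #200 $finish;
      end
      always #5 clk = ~clk;    // free-running clock, period 10

      // Response analyzer (simple monitor)
      always @(posedge clk)
        $display("time=%0t count=%b", $time, count);
    endmodule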

The stimulus generator is responsible for generating the correct sequence and phasing of the input waveforms for the device under test. This may take the form of producing a simple waveform that has a repetitive pattern or makes changes at well-defined times. Free-running clocks and data signals can be created using behavioral coding techniques in the Verilog or VHDL languages. Alternatively, a detailed list describing when each change takes place, which could be a process block or an external data file, can be constructed and used to drive the device under test. This simple technique can be repeated, by using multiple process blocks, to generate a set of synchronized waveforms: for example, a free-running clock signal, a reset signal, and some data and control signals that operate in a synchronous fashion.
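
The sketch below illustrates these techniques in Verilog: a free-running clock from a behavioral always block, a reset pulse synchronized to the clock, and a data signal driven from an external stimulus file. The file name stimulus.dat and the vector width are illustrative assumptions.

    module stimulus_gen (output reg clk, output reg rst, output reg [7:0] data);
      reg [7:0] vectors [0:15];     // stimulus list read from an external file
      integer i;

      always #5 clk = ~clk;         // free-running clock, period 10

      initial begin                 // reset process, synchronized to the clock
        clk = 0; rst = 1;
        repeat (2) @(posedge clk);
        rst = 0;
      end

      initial begin                 // data process: drive one vector per cycle
        $readmemh("stimulus.dat", vectors);  // hypothetical file name
        data = 0;
        @(negedge rst);
        for (i = 0; i < 16; i = i + 1)
          @(posedge clk) data = vectors[i];
      end
    endmodule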

Complex signals that have variations in their edge-to-edge timing relationships involve considerably more effort on the part of the verification engineer to model accurately. Quite often the solution is to include some form of random element that modifies the strict timing of the waveforms and hence introduces a degree of uncertainty into the rising and falling edges of the signals.
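
One way to introduce such uncertainty, sketched below, is to offset each edge by a small pseudo-random delay. The nominal 20-unit period and the +/-2-unit jitter range are arbitrary assumptions for illustration.

    module jitter_gen (output reg strobe);
      integer jitter;

      initial strobe = 0;
      always begin
        jitter = $random % 3;            // pseudo-random value in -2..+2
        #(20 + jitter) strobe = ~strobe; // nominal period plus jitter
      end
    endmodule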

The other part of the problem is checking whether the results obtained when the design is simulated with the appropriate test bench are correct (i.e. results analysis). Although there are various ways for the results to be analyzed, probably the three most common methods are: visual inspection of the responses, visual inspection of the waveforms, and using a self-checking test bench. The first two methods are very labor intensive, as they involve inspecting ASCII printouts or studying graphical information to identify when certain signals made specific changes and whether those changes happened at the correct point in the simulation. These methods are also error prone, as they rely on the skills and experience of the verification engineer to detect obscure problems and understand complex interactions between signals.

The third method is much more formal and is based on analyzing the behavior of the design before simulation and predicting what state a signal should be in at a certain time. The test bench is then built to check that what was predicted matches what actually happened. Although this method potentially gives the verification engineer a high degree of confidence in the quality of the final design, it involves a considerable amount of effort to develop a realistic self-checking test bench. A better and more productive solution is to use a coverage analysis tool to help 'tune' the test bench to achieve maximum coverage of the device under test.
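
A self-checking test bench built on this principle compares each actual output against a predicted value and reports any mismatch. The following is a simplified sketch; the adder DUT and its ports are hypothetical.

    module self_check_tb;
      reg  [3:0] a, b;
      wire [4:0] sum;
      integer errors;

      adder dut (.a(a), .b(b), .sum(sum));   // hypothetical DUT

      initial begin
        errors = 0;
        // Each check applies a stimulus and compares against the prediction
        a = 4'd3;  b = 4'd4;  #10;
        if (sum !== 5'd7)  begin errors = errors + 1; $display("FAIL: 3+4");  end
        a = 4'd15; b = 4'd1; #10;
        if (sum !== 5'd16) begin errors = errors + 1; $display("FAIL: 15+1"); end
        if (errors == 0) $display("PASS: all checks matched prediction");
        $finish;
      end
    endmodule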

Coverage Directed Test Benches

Figure 11-2 illustrates how a coverage analysis tool can be used to complement the verification engineer's testing strategy by showing the parts of the device under test where there is inadequate coverage. The verification engineer can then make adjustments to the stimulus generator in the test bench, or modifications to the HDL code in the appropriate module or sub-unit, to improve the amount of coverage.

Figure 11-2

Test Bench for a Single Module or Single Unit

Throughout this manual it has been stressed that working at the single module or single unit level is beneficial as:

• The size of the design is smaller and simulation time is shorter.
• The functionality of the design is normally easier to understand.
• The number of inputs/outputs is normally smaller and therefore more manageable.
• The boundaries are better defined and accessible.

The strategy of partitioning the overall design into smaller modules or units and spending time validating each block can be highly productive because, once a block has been fully validated, it can be used as a system component with little or no further detailed verification effort. For example, the coverage measurements described in Chapter 6 could be used to validate a unit at the white box (or open box) level to prove that the control paths are correct. When the unit is then combined at the sub-system level with other units that have also been fully validated, they can all be treated as black boxes, which means that the verification engineer can concentrate on just checking the data paths and functional boundaries between the units.

Designing a test bench for a small single unit should be fairly straightforward, as the number and complexity of the signals that have to be generated are generally limited. As the unit forms just one part of the sub-system, its functionality should also be easy to comprehend and model in the test bench. This means that for a simple unit it should, in theory, be possible to develop a single comprehensive test bench.

Figure 11-3

In reality, examination of the coverage results will probably show that the initial comprehensive test bench does not actually cover the whole unit, and extra tests need to be developed, as illustrated in Figure 11-3, to achieve full coverage. These extra tests may be incorporated into the original 'comprehensive' test bench or developed as stand-alone test benches to satisfy the 'corner' cases. In either case the amount of effort is not unreasonable, and one person or a small verification team can easily manage the final number of tests.

If the same strategy is applied to sub-system or system testing, the amount of productive work that can be achieved drops, as it becomes impossible or too time-consuming to create one comprehensive test bench. What normally happens is that a whole series, or large suite, of test benches is developed by one or more people to exercise particular sections of the overall design.

Dealing with Multiple Test Benches

It was mentioned earlier in this chapter that in reality the verification team might develop hundreds or even thousands of test benches during the testing phase of a typical project. This occurs at the module, sub-system and system levels.

Figure 11-4 shows how various test benches have been used to check different parts of the overall design.

Figure 11-4

As can be seen in Figure 11-4, some of the tests check unique parts of the design, while other tests simply extend how much of a particular area of the design has been covered. These are known as unique or incremental tests. Some tests check areas of the design that have already been checked by previous tests and are known as sub-set, or possibly redundant, tests. There are also occasions when one test maps over an area that has been tested in exactly the same way by a previous test. This is known as a duplicate test and, again, could be redundant.

Projects that have large verification teams need effective inter-person communication channels and project management techniques in place in order to avoid creating multiple sub-set or duplicate tests. It is sometimes impossible, however, to avoid creating tests that map over the same part of the design, because quite often the logic needs to be placed into a certain state before an action can take place (e.g. a finite state machine needs to be in a particular state before data can be clocked into a certain register). So two different tests may need to cycle the logic through exactly the same initial sequence in order to check two divergent actions that happen later in time.
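
In such cases the common start-up sequence can be factored into a single task that both tests call before they diverge, as in the sketch below. The FSM interface and signal names are hypothetical illustrations.

    module fsm_tests;
      reg clk, rst, load, shift;
      // DUT instantiation omitted for brevity (hypothetical FSM/register design)

      always #5 clk = ~clk;

      // Shared sequence: cycle the FSM into the state where the
      // register can accept data. Both tests start from here.
      task init_sequence;
        begin
          rst = 1; load = 0; shift = 0;
          repeat (3) @(posedge clk);
          rst = 0;
          repeat (2) @(posedge clk);   // walk the FSM to the required state
        end
      endtask

      initial begin
        clk = 0;
        init_sequence;                         // identical preamble...
        load = 1;  @(posedge clk); load = 0;   // ...then test 1: load path
        init_sequence;                         // identical preamble again...
        shift = 1; @(posedge clk); shift = 0;  // ...then test 2: shift path
        $finish;
      end
    endmodule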

Improving Your Testing Strategy

As mentioned earlier in this chapter, there are a number of techniques that can be used to construct an effective test bench. For example: writing a set of unique test vectors that cause the DUT to cycle through a set of known states; generating an exhaustive set of test vectors that check all the input combinations; and injecting error conditions on the inputs and checking that they are detected. Other methods include writing self-checking test benches and applying (pseudo) random patterns to the inputs. Whichever method is used, the final question that needs to be answered is: "How effective was the test bench?"
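
As a sketch of the exhaustive approach, all input combinations of a small device can be generated with a simple loop. The two 2-bit inputs and the DUT name small_alu are assumptions for illustration; the technique is only practical when the input space is small.

    module exhaustive_tb;
      reg  [1:0] a, b;
      wire [2:0] y;
      integer i;

      small_alu dut (.a(a), .b(b), .y(y));   // hypothetical DUT

      initial begin
        // Walk through all 16 combinations of the two 2-bit inputs
        for (i = 0; i < 16; i = i + 1) begin
          {a, b} = i[3:0];
          #10 $display("a=%b b=%b y=%b", a, b, y);
        end
        $finish;
      end
    endmodule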

The next chapter describes how to determine the effectiveness of each test bench and how to manage and optimize a test suite that contains a large number of test benches.

Although the in-depth design of test benches is outside the scope of this manual, the reader may find the following publication useful in this respect.

Writing Testbenches: Functional Verification of HDL Models

Author: Janick Bergeron

Publisher: Kluwer Academic Publishers

ISBN: 0-7923-7766-4

