Combined coverage methodology speeds verification
By Sharon Rosenberg, EEdesign
March 14, 2003 (8:42 p.m. EST)
Engineers are creating designs larger than ever before. As gate counts exceed one million, verification methodologies have failed to adjust, turning functional verification into the main bottleneck in the design process. The magnitude and complexity of recent designs introduce new challenges, including reducing the time it takes to do verification and ensuring complete verification.
Historically, engineers have used a few coverage metrics -- such as toggle coverage, bug rates and code coverage alone -- to evaluate the verification process. However, these metrics have limited capabilities for ensuring that their design functionality has been completely covered.
What engineers need is a more complete coverage methodology -- one that encapsulates a combined coverage metric: functional coverage, code coverage and assertion coverage. These measurements are highly complementary. For instance, you can achieve one hundred percent functional coverage of your design, and still not exercise some parts of the HDL code. Conversely, you can have one hundred percent code coverage, but miss testing corner-case functionality of the design.
Missing assertion coverage is an indication that "error" behavior has not been sufficiently exercised. A methodology based on all types of coverage should be used from early stages in the verification process, and is necessary to ensure complete coverage of the design.
Types of coverage
Code coverage reflects how thoroughly the HDL code was exercised. A code coverage tool traces the code execution, usually by instrumenting or modifying the HDL code, and tells you which lines of the RTL design have been executed, and which have not.
The set of features provided by code coverage tools usually includes line/block coverage, arc coverage for state machines, expression coverage, event coverage and toggle coverage. More recent code coverage tools also include capabilities that automatically extract finite state machines from the design and simplify the task of ensuring complete states and transition coverage.
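The mechanics can be illustrated with a small Python sketch (a toy, not how commercial HDL coverage tools are built): `sys.settrace` records which lines of a function actually execute, so untested branches show up as missing line numbers. The `decode` function is an invented stand-in for a design model.

```python
import sys

def trace_lines(func, *args):
    """Run func and return the set of line numbers it executed."""
    executed = set()

    def tracer(frame, event, arg):
        # record 'line' events only for the function under trace
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def decode(x):           # stand-in for a small DUT model
    if x > 0:
        return "pos"     # executed only when x > 0
    return "nonpos"      # executed only when x <= 0

covered = trace_lines(decode, 5)           # exercises the x > 0 path only
fully = covered | trace_lines(decode, -1)  # a second test fills the hole
```

Real tools work on RTL rather than software, but the principle is the same: instrument, run, and diff what executed against what exists.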
Code coverage is a necessity. Most people would agree it is unacceptable to synthesize code that is either dead or unverified. Nevertheless, code coverage is not enough. Most functional scenarios cannot be mapped into lines of code. For example, code coverage cannot indicate whether we've been through all the legal combinations of states in two orthogonal state machines.
Another example might be whether we have tried all the possible inputs while the design under test (DUT) was in all the different internal states. Also, code coverage does not look at sequences of events, such as what else happened before, during, or after a line of code has been executed. Thus, code coverage does not ensure completeness and does not fulfill most of the requirements that allow expediting the verification task.
In general, code coverage tools have three inherent limitations: they overlook non-implemented features, they cannot measure the interaction between multiple modules, and they cannot measure simultaneous events or sequences of events.
On the other hand, functional coverage perceives the design from a user's or a system point of view. Functional coverage tells you which functions have been tested, and which have not. For instance, have you covered all of your typical scenarios? Error cases? Corner cases? Protocols? Functional coverage also allows questions such as, "OK, I've covered every state in my state machine, but did I ever have an interrupt at the same time? When the input buffer was full, did I have all types of packets injected? Did I ever inject two erroneous packets in a row?"
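The idea behind such cross queries can be sketched in Python (the packet types, buffer states, and class name here are invented for illustration): the coverage model enumerates every interesting combination up front, the testbench samples what it observes, and the difference is the list of holes.

```python
from collections import Counter
from itertools import product

# Hypothetical functional cross-coverage model: every combination of
# packet type and buffer state observed during simulation is one bucket.
PACKET_TYPES = ["data", "control", "error"]
BUFFER_STATES = ["empty", "partial", "full"]

class CrossCoverage:
    def __init__(self, axis_a, axis_b):
        self.goal = set(product(axis_a, axis_b))
        self.hits = Counter()

    def sample(self, a, b):
        self.hits[(a, b)] += 1       # called from the testbench monitor

    def holes(self):
        return self.goal - set(self.hits)

    def grade(self):
        return len(set(self.hits) & self.goal) / len(self.goal)

cov = CrossCoverage(PACKET_TYPES, BUFFER_STATES)
cov.sample("data", "empty")
cov.sample("error", "full")
# cov.holes() now lists the 7 uncovered combinations, e.g. ("control", "partial")
```

Note that line coverage of this model would be 100 percent after a single sample, while the functional model correctly reports seven holes.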
The bulk of low-level details may be hidden from the report reviewer. Functional coverage elevates the discussion to specific transactions or sequences of transactions without overwhelming the verification engineer with bit vectors and signal names. This level of abstraction enables natural translation from coverage results to test plan items. Figure 1, below, provides an example of functional coverage in an environment that creates many simulation scenarios.
Figure 1 -- Functional coverage serves multiple simulation scenarios
Functional coverage provides an excellent indication of how we're meeting the goals set by the test plan. However, it may not correlate exactly to the actual RTL implementation, which may have diverged over time. For example, code coverage results can find a "hole" in the test plan -- functionality that is implemented in the RTL, but never targeted by the test plan. Therefore, code coverage and functional coverage are complementary.
Table 1 illustrates how functional coverage and code coverage correlate to each other, and how the combination of both provides a much more reliable indication of complete coverage.
Table 1 -- Correlation between functional coverage and code coverage
Assertions are representations of properties used during simulation as protocol checkers. They can also be leveraged as properties to be proven with formal and semi-formal tools. When they are used in simulation, there is the potential that a test, or a suite of tests, may never reach the condition that the assertion is monitoring.
More importantly, since many assertions can be triggered by complex conditions, it is important to know what triggered the assertion and whether or not an error was reported. Assertion coverage gives a clear picture of which potential error behaviors have been covered and which properties and conditions have not yet been exercised. While error behavior is important, assertion coverage adds incremental value and is not a sufficient metric for verification on its own.
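A minimal sketch of the distinction, in Python with an invented protocol rule: an assertion that never fires is not a passing check but a coverage hole, so the counters track attempts and failures separately.

```python
class Assertion:
    """Toy assertion with coverage counters (illustrative, not a real
    SVA/PSL engine): 'attempted' counts how often the triggering
    condition occurred; 'failed' counts reported errors."""
    def __init__(self, name, trigger, check):
        self.name, self.trigger, self.check = name, trigger, check
        self.attempted = 0
        self.failed = 0

    def sample(self, state):
        if self.trigger(state):
            self.attempted += 1
            if not self.check(state):
                self.failed += 1

    def covered(self):
        # an assertion that never fired tells us the scenario was
        # never exercised -- a coverage hole, not a passing check
        return self.attempted > 0

# hypothetical protocol rule: a grant must only follow a request
a = Assertion("grant_implies_req",
              trigger=lambda s: s["grant"],
              check=lambda s: s["req"])
a.sample({"grant": False, "req": False})
print(a.covered())   # False: the grant condition was never exercised
```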
As all types of coverage are complementary in nature, a tool or methodology that combines approaches is extremely beneficial. As mentioned earlier, this combined methodology will provide a complete overview of the verification progress and a clearer correlation between the functional coverage definitions and the actual design implementation.
The requirements for coverage can be categorized into two groups: demands for the data gathering and analysis engine, and requirements for the surrounding testbench that will allow efficient usage of the accumulated information.
Coverage engine requirements
Expressiveness of queries
The expressiveness to define any query is clearly a major requirement. It should be easy to specify a query and get the needed information, and easy to filter out irrelevant details. Specifically, a functional coverage tool should handle high-level queries such as: have I exercised all types of packets while handling an interrupt?
Coverage results should be readable and intuitive to obtain. Both a textual user interface and a graphical user interface (GUI) should be provided. Usually, engineers use the GUI, since it provides an easy means to review, query and print the coverage database. The textual interface is useful when forwarding the results to other automated tools or manipulating the data into custom reports.
Efficient coverage analysis
When coverage results are less than satisfying, it should be easy to deduce the appropriate adjustments and generate tests to improve the coverage results.
The ability to accumulate and analyze coverage reports from multiple simulation runs is crucial. Test suites today comprise a large number of tests. This ability to analyze cumulative coverage allows you to:
- Get an overall picture of the entire verification environment, and measure your recent progress. Using these measurements, you can objectively predict the tape-out date.
- Avoid test redundancy. By measuring the amount of coverage added by each test, redundant tests can be identified and removed.
Timing of analysis
The coverage tool should allow the engineer to analyze the coverage information both between simulations and during a test run. The first approach requires the ability to save the collected information to be reviewed later on. The second approach requires a run-time interface to the coverage database that allows it to be used during simulation.
Various coverage holes may be prioritized and the overall progress can be better represented by setting individual goals for each concern (how many times must I cover each scenario?), and by setting weights to distinguish the relative importance of different coverage scenarios.
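One simple way to combine per-scenario goals and weights into a single grade is sketched below (the formula and scenario numbers are illustrative, not a specific tool's algorithm): each scenario contributes the fraction of its goal achieved, capped at one, weighted by its relative importance.

```python
def weighted_grade(items):
    """items: list of (hits, goal, weight) per coverage scenario.
    Each scenario contributes min(hits/goal, 1.0), weighted by its
    relative importance, to a single overall grade."""
    total_w = sum(w for _, _, w in items)
    return sum(min(hits / goal, 1.0) * w for hits, goal, w in items) / total_w

scenarios = [
    (10, 10, 5),   # common case: goal met, high weight
    (1,  4,  2),   # corner case: at 25% of its goal
    (0,  2,  1),   # error injection: not yet covered at all
]
grade = weighted_grade(scenarios)
# (5*1.0 + 2*0.25 + 1*0.0) / 8 = 0.6875
```

Capping each term at 1.0 prevents over-hitting an easy scenario from masking an untouched one.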
Optimizing the test suite
Ranking capabilities should allow you to create a subset of tests that verify the DUT to a significant degree, with a minimum amount of resources. Running this subset of tests instead of your entire test suite drastically reduces the total number of cycles needed for verification.
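Ranking of this kind is essentially a set-cover problem, commonly approximated greedily; the sketch below (with invented test names and coverage items) repeatedly picks the test that adds the most new coverage and flags the rest as redundant.

```python
def rank_tests(coverage_by_test):
    """Greedy ranking: repeatedly pick the test that adds the most
    not-yet-covered items; tests that add nothing are redundant."""
    ranked, covered = [], set()
    remaining = dict(coverage_by_test)
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        gain = remaining.pop(best) - covered
        if not gain:
            break                      # everything left is redundant
        ranked.append(best)
        covered |= gain
    return ranked, covered

# hypothetical per-test coverage results from a regression run
suite = {
    "smoke":      {"reset", "idle"},
    "burst_rw":   {"reset", "burst", "wrap"},
    "err_inject": {"parity_err", "timeout"},
    "idle_only":  {"idle"},            # adds nothing once smoke runs
}
ranked, covered = rank_tests(suite)
# ranked == ["burst_rw", "err_inject", "smoke"]; idle_only is dropped
```

Three of the four tests reach the same total coverage as the full suite, which is exactly the cycle saving the ranking is after.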
Intellectual property reuse and design complexity have turned our verification environment into a mix of design representations and verification languages. Having the flexibility to use the same methodology on all types of designs is critical.
In addition to coverage tool requirements, the surrounding verification environment determines how fully the collected data can be used: some environments allow extensive use of it, while others enable merely partial utilization of the coverage results. For a more comprehensive coverage environment, the methodology should allow:
- Self Checking: A self-checking verification environment notifies you once a malfunction has occurred.
- Test Generation Control: Being able to easily target coverage holes using automated test generation adds great efficiency to the use of coverage metrics.
- Repeatability: It is important that, as the design and the testbench evolve, the current test suite with the current design still achieves all of the specified functional coverage, as well as complete code and assertion coverage. Test generation engines that have "random stability" minimize the disruption of small changes to the overall coverage.
The recommended approach to coverage
The following flow incorporates all coverage metrics. These guidelines provide a more complete metric and methodology that can be examined through the various phases, while steering the verification process towards a rapid completion.
Phase one: Test plan
A good test plan should list all the interesting test cases to verify the design. Specifically, it should include all configuration attributes, all variations of every data item, interesting sequences for every DUT input port, all corner cases to be tested, all error conditions to be created and all erroneous inputs to be injected.
An encompassing test plan is a good start to ensure complete verification. Experience and creativity can be used to identify areas that are prone to bugs. It is advisable to consult people with extensive verification background and the designer to get internal insights into the design.
Note that no test plan can cover every possible bug, which highlights the importance of directed-random test generation. However, a good test plan is still essential to an efficient verification strategy and becomes the basis for a functional coverage model.
Phase two: Functional coverage specification
Define what should be covered. Decide on the interesting data fields/registers. Define separate buckets for legal values, illegal values, and boundary values, such as corner cases. Examine both interfaces and internal states. Choose the state registers and state transitions of important state machines. Identify interesting interactions between some of the above states or data, such as the state of one state machine relative to another, or to a value of a signal. The functional coverage specification is, in essence, an executable form of your test plan.
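A bucket specification of this kind might look like the following Python sketch (the field name, ranges, and bucket names are invented): legal, illegal, and boundary values each get their own bucket, so corner cases are tracked separately from typical traffic.

```python
# Hypothetical coverage specification for an 8-bit "length" field.
# Buckets mirror the test-plan categories: boundary values get their
# own buckets so corner cases are tracked apart from typical ones.
LENGTH_BUCKETS = {
    "min_legal": lambda v: v == 1,           # boundary value
    "typical":   lambda v: 2 <= v <= 254,    # legal range
    "max_legal": lambda v: v == 255,         # boundary value
    "illegal":   lambda v: v == 0,           # must trigger error handling
}

def bucket_of(value):
    for name, match in LENGTH_BUCKETS.items():
        if match(value):
            return name
    return "unbucketed"

hits = {name: 0 for name in LENGTH_BUCKETS}
for observed in (1, 17, 255):                # values seen in simulation
    hits[bucket_of(observed)] += 1

holes = [name for name, n in hits.items() if n == 0]
# holes == ["illegal"]: the error-injection bucket has not been covered
```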
Phase three: Build the testbench
Build your environment parameterized in a way that each test can direct it to a specific area of concern. This enables you to use the coverage results and translate coverage holes into new tests. At this point your functional coverage and assertion coverage code should be written. Make sure that the verification strategy you have chosen is suitable for the entire verification needs.
Phase four: Writing tests and simulation
Write tests and run them. Try to enhance your test suite by using the iterative process of analyzing coverage reports and adding additional tests to fill the uncovered areas. From time to time, update and optimize your regression suite using the ranking capabilities. There is no need to frequently run tests that have only marginal contribution to the verification process.
Note that from the beginning, the best tests are directed-random tests. In other words, your tests should be targeted at a specific area, but anything that need not be specified should be randomized. By changing the random seed, each test can become thousands of tests, each testing the same target from different paths and randomizing data. This is the most efficient way to increase coverage and find bugs!
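The seed mechanics can be sketched in Python (an invented packet generator, not a real test-generation engine): the directed part is fixed, everything unspecified is drawn from a seeded random stream, so the same seed reproduces a failure exactly while a new seed explores a new path.

```python
import random

def gen_packets(seed, count=5):
    """Directed-random sketch: the target (packet kind mix) is fixed,
    while everything unspecified (here, the length) is randomized."""
    rng = random.Random(seed)   # per-test stream gives "random stability"
    pkts = []
    for _ in range(count):
        pkts.append({
            "kind":   rng.choice(["data", "control"]),  # directed constraint
            "length": rng.randint(1, 255),              # free to randomize
        })
    return pkts

# same seed -> identical stimulus, so a failure is reproducible;
# a new seed turns the same test into a different walk through the space
assert gen_packets(42) == gen_packets(42)
assert gen_packets(42) != gen_packets(43)
```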
Phase five: Code coverage integration
Once your RTL code is mature enough, add in code coverage. Start with block coverage. Unreachable code should be carefully analyzed; it may save time to ask the implementer to identify the code's functionality. Dead code should be removed. In cases of reachable non-exercised logic, identify the untested scenario and write tests or constrain the test generator to fill coverage holes.
At the same time identify the untested functionality. Once identified, review your test plan and make sure it is not an overlooked area in the test plan. Update the test plan and functional and assertion coverage code as necessary.
Phase six: Thorough coverage
At this point you have hopefully achieved one hundred percent block coverage. Use other code coverage metrics (expression, event, state machine, arc and toggle coverage) to achieve as high coverage as possible within your allotted time.
Phase seven: Regression testing
Throughout the process, regression suites that maximize code, functional and assertion coverage should be created. Regressions can be created per functional area or to fit timeslots, such as overnight or weekly regressions that may run 60 hours, potentially on multiple servers. It is critical to leverage compute and simulation resources to maximize coverage and find bugs faster.
Coverage is key in the verification of today's complex designs. Knowing what has been verified, how it relates to what needs to be verified, and where the holes are, adds precision, efficiency and predictability to the verification process. Functional, code and assertion coverage, being complementary in nature, are all needed to provide a reliable and thorough coverage metric. When combined, they facilitate a coverage-driven verification methodology, to find more bugs, faster. This in turn saves significant human and machine resources, shortens time-to-market and eventually contributes to a mature and timely tape-out decision and a high-quality product.
Sharon Rosenberg is a Senior Consulting Engineer at Verisity Design, Inc. Since joining the company, he has held various positions at Verisity, including research and development, field work with Verisity's customers, verification methodology development and technical marketing.
Copyright © 2003 CMP Media, LLC