| Abstract |
This article describes some of the fundamental components of driving a coverage-driven verification project to a successful and predictable conclusion. The article starts with a description of simple and clear indicators.
The article then describes the definition, review and coding of functional coverage in order to use the indicators efficiently. The third part of the article describes the use and benefits of a verification management solution when running a coverage-driven project.
They say a picture is worth a thousand words, and that's what this section is about; defining and presenting a picture of the verification project to both the verification team and the project management, in order to focus efforts and achieve goals. The picture we want to present is an objective and quantified view of verification progress. This picture, based on global and local indicator charts, will reflect what is going on in the project at a granularity that will ensure that progress problems are observed and corrected early.
The global indicator we use (Figure 1.1) is a measure of overall verification progress. This will tell management where we are, where we should be and where we plan to be going forward. Progress on this chart is the functional coverage achieved on a weekly basis factored with the failure-rate of the tests that are run to achieve the coverage.
Figure 1.1 — Example of a global indicator chart
The data for the global indicator is based on the functional coverage results observed in simulation across all the features together. Prior to tracking progress and presenting the expected progress graph, the key stakeholders agree on each feature's relative weight in the overall indicator.
Each feature's weight is decided based on a function of the complexity and priority of the feature. Using this method, the project stakeholders develop both an acceptance of the reliability of the indicator along with buy-in to the verification process.
Once the expected graph has been published, the weekly measurements provide both a reliable indicator as well as strong incentive to meet the expectation. As the overall number is a function of the coverage collected on multiple features, a delay in a single feature can be compensated for by driving additional features forward. This allows verification managers and their teams to keep the overall progress steady and on track despite frequent setbacks in specific features.
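The global indicator described above can be sketched as a simple calculation. This is an illustrative model only, not the tool's actual formula; the feature names, weights, and pass rate below are hypothetical:

```python
# Hypothetical sketch of the weekly global indicator: weighted
# functional coverage across all features, factored by the pass rate
# of the regression runs that produced the coverage.

def global_indicator(feature_coverage, feature_weights, pass_rate):
    """Return overall progress in [0, 1]."""
    total_weight = sum(feature_weights.values())
    weighted = sum(
        feature_weights[f] * feature_coverage.get(f, 0.0)
        for f in feature_weights
    ) / total_weight
    return weighted * pass_rate

# Illustrative data: weights reflect each feature's complexity/priority.
coverage = {"dma": 0.80, "cache": 0.55, "interrupts": 0.30}
weights  = {"dma": 3, "cache": 5, "interrupts": 2}
print(global_indicator(coverage, weights, pass_rate=0.9))
```

Because a delay in one feature can be offset by gains in another, the weighted sum stays meaningful even when individual features stall.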
The local indicator chart is designed to show progress on a per-feature basis. The specification is divided into 20-40 subcategories, and each category is tracked separately.
As you can see in Figure 1.2, this chart presents the picture at a finer level of granularity, based on an objective measure of the progress in each feature. Since this is an objective measure, produced by the hardware verification language (HVL) tool, the numbers cannot be manipulated and problems are easily discernible.
Figure 1.2 — Example of a per-feature progress chart
When presenting the local indicator chart, each feature not in line with expectation is highlighted and an explanation must be provided. This allows project managers to understand the specific problems causing delays. Once problems are understood, apparent problems can usually be resolved quickly.
The effects of the local/global chart pair are far-reaching. Working together to drive coverage forward, the verification and design teams share a tangible, attainable goal. Since this is the primary project indicator, it ensures problems get the focus needed to resolve them quickly. As the goals steadily progress on all fronts at the same time, a remarkable amount of positive energy and problem-solving ingenuity ensues.
So how do you get there?
In order to track a project based on objective measures, a coverage-driven verification plan is defined. This plan defines all the verification goals up front in terms of "functional coverage points." Each bit of functionality required to be tested in the design is described in terms of events, values and combinations thereof.
This process is accomplished by methodically reviewing the device under test (DUT) specification and associated documents, and is augmented with definitions by the design team of functional coverage of the micro-architecture. The resulting coverage plan is then carefully reviewed to ensure that it encompasses the required functionality. Attaining the coverage described in the plan means all the functional testing requirements for tapeout have been accomplished.
Once the functional coverage points are defined, they are coded into the HVL environment so that subsequent runs can be measured for the coverage they accomplish. Each functional coverage chapter is defined by the group of functional coverage points defined for the feature. The chapters are assigned weights as described in the previous section, and tracking can begin.
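To make the idea of coding coverage points concrete, here is a minimal sketch in Python. This is not a real HVL API; the class, the `burst_len` point, and its bins are invented for illustration. In practice these points would be written in the project's HVL (e.g., as covergroups):

```python
# Illustrative sketch (not a real HVL API): a functional coverage
# point with bins, sampled from values observed in simulation.

class CoveragePoint:
    def __init__(self, name, bins):
        self.name = name
        self.hits = {b: 0 for b in bins}   # one bin per interesting value

    def sample(self, value):
        if value in self.hits:
            self.hits[value] += 1

    def percent(self):
        covered = sum(1 for h in self.hits.values() if h > 0)
        return covered / len(self.hits)

# A "chapter" is simply the group of points defined for one feature.
burst_len = CoveragePoint("burst_len", bins=[1, 4, 8, 16])
for observed in [1, 4, 4, 8]:          # values seen during runs
    burst_len.sample(observed)
print(burst_len.percent())             # 3 of 4 bins hit -> 0.75
```

Each chapter's percentage then feeds the weighted indicators described earlier.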
Once the coverage plan is complete and tracking has begun, the focus shifts to accomplishing the coverage. The reason it is called "coverage driven testing" is that from this point, coverage results drive all activities. Coverage and regression results are carefully analyzed to focus and refocus efforts.
Fixing bugs, releasing constraints, and improving the test environment are all activities aimed at driving coverage forward. Likewise, resolving problems like RTL bugs, script issues, license shortages, and computing resources are all focused on breaking through barriers which slow progress on coverage.
In a coverage-driven project, each verification engineer is focused on achieving the tangible results for his features. Since functional coverage is objective and quantifiable the engineers become result driven. When one of the features gets stuck, the engineer shifts to the next feature. Problems that stall progress are highlighted and resolved quickly.
Verification management tools
Managing a coverage-driven verification project has a significant positive impact on quality, predictability and time-to-money. However, it stretches the conventional scripts and analysis tools beyond their limit, leaving a lot of manual work to be done.
A verification management automation tool should be used to increase the overall productivity of this process. This type of solution should have the following features:
- Running and tracking regressions
- Analyzing coverage
- Optimizing regressions
In a coverage-driven verification program, there is typically a 10X or greater increase in the number of simulations run on a daily basis. The increase usually comes from optimizing existing resources so that they are more fully utilized; tool licenses and computers that have grown accustomed to having nights and weekends off are exercised around the clock. While this increase provides deeper coverage of the design and high-quality bugs, it also generates a large volume of information that needs to be managed. To track completion of test suites, parse and distribute failures, and verify that failures are fixed, the management platform should provide the toolkit necessary to handle this mass of tests and reduce the need for individuals to do so manually.
Each engineer on a verification team using a management automation tool will have a list of failure types which are assigned directly to him. This means that either based on error type or test name, test failures are assigned to individuals automatically by the tool.
Following a nightly regression, all failures will be categorized by failure type, sorted by cycles-to-failure, and distributed to the engineers so they can debug and fix testbench problems or assign the failure to the RTL design team. When a fix is completed, the engineer indicates to the tool that the fix was made. The automated tool then validates each reported fix by rerunning the simulation in the subsequent regression.
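The automatic assignment of failures can be sketched as a small rule table. The rule patterns, engineer names, and test names below are all hypothetical; a real tool would expose this as configuration:

```python
# Hypothetical sketch: routing regression failures to owners by error
# type or by test-name prefix, as a management tool would do
# automatically after a nightly regression. Rules are checked in order,
# so error-type rules listed first take precedence.

ASSIGNMENT_RULES = [
    ("SCOREBOARD_MISMATCH", "dana"),    # error-type rules
    ("DMA_TIMEOUT",         "yossi"),
    ("test_cache_",         "rivka"),   # test-name prefix rule
]

def assign(test_name, error_type, default_owner="triage"):
    for pattern, owner in ASSIGNMENT_RULES:
        if pattern in error_type or test_name.startswith(pattern):
            return owner
    return default_owner

failures = [("test_cache_evict", "DMA_TIMEOUT"),
            ("test_dma_burst",   "SCOREBOARD_MISMATCH")]
for name, err in failures:
    print(name, "->", assign(name, err))
```

Each engineer then starts the day with only the failures that belong to them.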
As the project progresses, the focus shifts to analyzing the coverage, both to focus effort on the coverage holes and to produce the weekly indicators. To increase accuracy, and to save managers and engineers the time and resources required for this type of analysis, the management tool should provide capabilities for all of the following:
- Defining priorities and weights per feature
- Defining perspective views for intermediate milestones and for users interested in specific features
- Identifying the constraints that prevent functionality from being achieved

An analysis tool that lets coverage be prioritized, masked, or viewed from different perspectives opens up a whole new array of possibilities in managing phased projects. For example, if the first spin of the design is just for a demo, only a smaller percentage of the design's functionality needs to be validated to release the RTL. In that case, 100% coverage can be defined by choosing just the coverage points that are needed.
If the subsequent spin is defined to be more sensitive to time than to features, the coverage set can be defined to include only the priority-1 features, leaving the remaining features to the next spin. Overall, the coverage can be monitored by different stakeholders based on their priorities and perspectives, while the verification team can focus on the goal at hand without having to design several separate plans.
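A coverage perspective of this kind is essentially a weighted subset of the full plan, with 100% measured against that subset only. The sketch below is illustrative; the feature names, priorities, and coverage numbers are invented:

```python
# Illustrative sketch: a "perspective" is a filtered view of the full
# coverage plan; progress is measured against the view, not the whole.

full_plan = {   # feature -> (priority, coverage achieved so far)
    "reset":      (1, 1.00),
    "dma":        (1, 0.90),
    "power_mgmt": (2, 0.20),
    "debug_port": (3, 0.00),
}

def perspective(plan, max_priority):
    """Keep only features at or above the given priority cut-off."""
    return {f: cov for f, (pri, cov) in plan.items() if pri <= max_priority}

def progress(view):
    return sum(view.values()) / len(view)

spin1 = perspective(full_plan, max_priority=1)   # demo spin: P1 only
print(progress(spin1))                           # (1.00 + 0.90) / 2
```

The same `full_plan` can back several views at once, so each stakeholder monitors the numbers that matter to them.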
One of the bottlenecks toward the end of a coverage-driven program is the amount of computer time and license resources required to produce the coverage. A management automation tool should provide a means to identify optimal tests for achieving the coverage. This way the cycles that are run can be focused to help reach goals faster and more efficiently.
Functional coverage is the best indicator of the efficiency of a random test. Therefore, running hundreds or thousands of tests that do not contribute to the coverage is unlikely to be the best use of resources. To find the optimal test suite, tests are graded for their efficiency in providing coverage.
Redundant tests can be eliminated and certain tests can be marked for running only once, or for running multiple times. This enables finding the optimal regression for running in random, a regression which covers the broadest feature base in the fewest tests. Running that regression in random is likely to be a significant savings in resources.
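One common way to realize this grading is a greedy selection: repeatedly pick the test that adds the most new coverage until the goal is reached. This is a sketch of that idea, not any particular tool's algorithm, and the test names and coverage-point sets are hypothetical:

```python
# Hypothetical sketch: greedy test grading. Each test is graded by the
# new coverage points it would add; redundant tests fall out of the
# optimized regression.

def optimize_regression(test_coverage, goal):
    """Pick tests until the goal set of coverage points is reached."""
    selected, covered = [], set()
    while covered < goal:
        candidates = [(t, pts - covered)
                      for t, pts in test_coverage.items()
                      if t not in selected]
        if not candidates:
            break                      # no tests left to try
        name, gain = max(candidates, key=lambda c: len(c[1]))
        if not gain:
            break                      # goal unreachable with these tests
        selected.append(name)
        covered |= gain
    return selected

tests = {
    "t1": {"a", "b", "c"},
    "t2": {"b", "c"},          # redundant given t1
    "t3": {"d"},
}
print(optimize_regression(tests, goal={"a", "b", "c", "d"}))  # -> ['t1', 't3']
```

Here `t2` is dropped entirely: everything it covers is already covered by `t1`, which is exactly the kind of redundancy the optimization removes.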
A tool which enables identification of the optimal test suite, based on the coverage perspectives described above, will also allow creation of several smaller regressions which can be used for multiple purposes. A feature-regression is a subset of all the tests that covers an entire feature in a minimum number of runs.
This regression is run before a check-in of files modifying a feature's code. A mini-regression is a subset of highly stable tests that establishes that each of the features is still alive. This is used before any top-level or modeling change. Also, as in the example from the previous section, a spin-1 regression can be defined to retain 100% of spin-1 coverage in case the team decides to re-release the RTL at a later date and wants to maintain a consistent level of quality.
Overall, the use of a management automation tool should automate many of the manual tasks performed by the verification team and managers, as well as open the door to new possibilities in managing verification projects efficiently.
What I've shown here is an example of presenting a simple and clear set of indicators which can be used to drive projects to predictable and healthy execution. In order to measure these indicators, methodical work needs to be done to define, review and code the functional coverage from an early part of the project.
In the later stage of the project, when running, fixing bugs and collecting coverage, several new challenges are presented. I've suggested the definition of a verification management automation solution with the ability to track regressions, analyze coverage and optimize regressions. Using this type of solution should make the project flow smoothly and focus engineers and managers on the verification tasks.
Akiva Michelson is co-founder and chief technical officer of Ace Verification. He has specialized in functional verification of HDL designs for the past eight years. His experience verifying over a dozen projects for Digital Semiconductor (1996-1998) and Intel Corporation (1998-2004) has allowed him to design, collect and improve numerous verification approaches.