Regular browsers of bookstores no doubt have seen the popular series offering "the rules" for love and marriage. Rules are important in many other endeavors, too, including most aspects of engineering. One area that could certainly benefit from better rules is functional verification, which consumes the majority of the effort on most virtual-component (VC) and system-on-chip (SoC) projects. While EDA vendors may provide detailed instructions and methodology guides for their specific tools, there has not been much industry activity in establishing more general rules, guidelines and best practices.
Recognizing this need, a Virtual Socket Interface Alliance development working group, with representatives of more than 20 leading semiconductor, system, VC and EDA suppliers, has developed a specification for the functional verification of virtual components and the SoC designs that use them. The participating companies have included Cadence, Elixent, Fujitsu, Hewlett-Packard, IBM, Infineon, Intel, Mentor, Motorola, Synopsys and Verisity. The specification:
- Discusses best practices for effective functional verification of a virtual component;
- Defines the verification-related deliverables that should pass from the VC provider to the VC integrator (SoC team) along with the VC design itself;
- Provides a set of detailed rules for these deliverables, including acceptable formats and coding guidelines;
- Describes the process of using these deliverables to verify the integrity of the delivered VC;
- Outlines how certain deliverables can be reused during full-chip SoC verification; and
- Includes a glossary of terminology related to VC and SoC functional-verification methods.
The scope of the deliverables is broad, ranging from traditional elements such as testbenches to assertions, formal-verification constraints and other information for emerging verification techniques. After each deliverable is defined, the spec lists the acceptable formats and the applicability for different forms of VCs (RTL vs. hard macro, for example). A detailed set of rules follows, ranging from one or two for some deliverables to more than 90 for testbenches. Each rule includes a detailed justification so that both the VC provider and integrator understand why it is important.
One example of a deliverable is a report of code-coverage metrics. The specification first defines code coverage and notes that common metrics include toggle coverage, statement coverage, branch coverage, condition coverage and finite state machine coverage. There is no standard format for such a report, so Microsoft Word, PDF, HTML and plain text are listed as examples of acceptable formats. It is then recommended that the VC provider use code coverage to help assess the extent to which the VC is exercised during simulation.
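The distinction among these metrics is easy to see in miniature. The sketch below uses Python as a stand-in for HDL coverage tooling; the function names (`scale_if_positive`, `executed_lines`) are purely illustrative, and the line tracer is a toy statement-coverage probe, not a real coverage tool. It shows why statement coverage alone can mislead: a single stimulus can execute every statement while still leaving one branch outcome untested.

```python
import sys

def scale_if_positive(a):
    # Hypothetical stand-in for a small piece of VC behavior.
    if a > 0:
        a = a * 2
    return a

def executed_lines(func, *args):
    """Toy statement-coverage probe: record which source lines of func run."""
    lines = set()
    code = func.__code__
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            lines.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return lines

pos = executed_lines(scale_if_positive, 1)   # takes the "true" branch
neg = executed_lines(scale_if_positive, -1)  # takes the "false" branch

# The positive stimulus alone executes every statement (100% statement
# coverage), yet it never exercises the "false" outcome of the `if` --
# that path appears only with the negative stimulus (branch coverage).
assert pos == pos | neg          # one test reached all statements
assert len(pos - neg) == 1       # but the two stimuli differ at the branch
```

This is exactly why the specification lists several complementary metrics: each one catches gaps the others miss.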
Code-coverage metrics also help the VC integrator judge the quality of the provider's functional-verification process, so the spec recommends that code-coverage reports be among the deliverables provided with any form of VC.
Finally, the specification gives rules related to code coverage. One rule states that only the VC code itself must be instrumented for coverage: the instrumented code must encompass the entire VC and exclude all verification code, including the testbench, stimulus, drivers, monitors and models.
The specification has been released to VSIA members and will be available to the general public in the future. By following the best practices it describes, a VC provider can verify its VCs as thoroughly as possible. By supplying the applicable deliverables and following the associated rules, the provider can greatly assist the integrator in properly using the VC and verifying the entire SoC.
Thomas L. Anderson chairs the Virtual Socket Interface Alliance's Functional Verification Development Working Group.