5.3 Evolutionary migration
When considering the ideal characteristics of a unified methodology, it is critically important to recognize the realities of migrating to it. Migration must be evolutionary, not revolutionary. To the extent that existing resources are optimized, they are optimized for today's fragmented flow and for specific stages of verification, as is the existing infrastructure. Verification engineers will inevitably lack some of the skills the new methodology requires. Perhaps most important is the existing intellectual property (IP) in all of its various forms, particularly models and legacy tests.
A unified verification platform must support all third-party models and other verification IP written in standard languages. Native standard language support ensures high performance. Also important, it should be easy to incrementally update existing IP to take advantage of key unified methodology requirements, in particular transaction-level recording and assertions. Much existing IP is written to operate on transactions and/or to flag protocol errors.
The platform should make it very easy to incrementally update this IP to record that information in its standard transaction and assertion databases. Similarly, it should make it easy to incrementally add reusable verification IP compatible with the unified methodology. On the test side, the platform must also support any existing in-house or third-party test generation tools and related tests, through production-proven links using standard APIs wherever possible.
Of course, a unified platform should enable third-party and in-house tool integration. In large part this is accomplished via standard languages and APIs. Beyond that, the platform should give third-party and in-house tools the ability to interact with its databases and with the standard user interface. A powerful facility for programming coding-style checks can also help engineers migrate to a new methodology, as well as enforce good design and verification coding policy.
Functional verification of complex ICs is in a disastrous state. Exponential increases in digital logic and embedded software, along with critical mixed-signal circuitry, will only exacerbate today's issues. While verification teams and EDA companies have focused on the symptoms -- performance, capacity, languages, coverage, advanced techniques, hardware-software co-verification -- fragmentation is the root cause. By simply mirroring today's fragmented design methodologies, verification teams have wound up with a slow, inefficient overall process. It is time to fundamentally rethink this approach.
The unified methodology described in this paper eliminates the root cause. Utilizing a transaction-level functional virtual prototype (FVP), verification teams create a high-speed executable model useful for embedded software development and initial design-in, many months before the implementation is complete. The FVP provides reference models for block-level testing, serves as a top-level verification environment for each block, and becomes the integration vehicle for the final implementation, even during hardware acceleration. Verification engineers also use the FVP to define application-level and transaction-level coverage, and use its interface monitors to track progress throughout the process. When ready, the team migrates the design into emulation or prototyping for near real-time application-level verification in a realistic environment.
This unified methodology requires a unified verification platform. For efficiency, the platform must be based on a heterogeneous, single-kernel architecture that natively supports Verilog, VHDL, SystemC, Verilog-AMS, VHDL-AMS, PSL/Sugar, and all other viable standard languages. Otherwise, the methodology will remain fragmented. The platform also should support transaction-level abstraction, hardware-based acceleration, and high-speed test generation for maximum performance.
A unified verification platform can provide the incredible speed and efficiency needed to verify nanometer-scale ICs. It's about time.
APPENDIX: CRITICAL FUNCTIONAL VERIFICATION ISSUES
The sections below describe some of the most critical verification issues cited by SoC teams.
Performance and capacity
Performance and capacity are perennial issues dating back to gate-level simulation. Despite substantial improvements in simulation engine performance and exponential increases in workstation performance, software-based verification is slowly losing ground to the growing complexity of IC designs -- especially with increasing time-to-market pressures. Many nanometer ICs will require hardware-assisted verification to be verified thoroughly enough, in a relevant timeframe, to justify committing to a tapeout costing more than $1M.
Language selection is becoming a critical issue. Life was simple when design and verification both used Verilog or VHDL. With nanometer ICs, there are new and newly extended languages for system design, test generation, assertions and modeling, analog/mixed-signal design, and digital design. Language decisions are so critical because they typically have very long-term effects.
Once system, verification, or design IP is created in a given language, it becomes increasingly difficult to switch directions. Proprietary languages, particularly those controlled by a single company, lock users into that vendor and its limitations. Since proprietary languages rarely garner ubiquitous support, they also often mean incompatibilities with partners and customers. When universally supported standard languages are available, they are clearly the best alternative.
Testbench development, coverage, models, verification IP, and reuse
The next four issues -- testbench development, coverage, models and verification IP, and reuse -- all have to do with efficiently creating effective test environments. Verification teams want to be able to create directed, directed-random, and random tests as easily as possible. Doing so requires having a rich set of models and verification IP as a starting point. When a team has to create a new model or verification IP, it ideally wants to do so in a way that allows reuse throughout the project, and ideally on other projects. Lastly, since verification is an exercise in risk management, teams have a keen interest in coverage capabilities that identify the need for specific additional tests, as well as indicate the comprehensiveness of the verification to date.
Advanced verification techniques
Advanced verification techniques include the use of increased abstraction, assertions, formal, and semi-formal techniques to verify designs more completely and in less time. Working at a higher level of abstraction requires much less specification, and the resultant models run much faster because they carry much less detail. Assertions enable designers and verification engineers to specify design intent in a form that increases observability for error detection and coverage analysis, and to serve as inputs for formal and semi-formal verification techniques.
Formal techniques include model checking and static formal analysis, which prove that assertions hold true throughout the entire state space. To date, their practicality has been limited to relatively small units within designs. Semi-formal verification attempts to relax these limitations by using formal techniques to expand the state space explored around selected simulation runs.
Hardware-software and analog/mixed-signal verification
The last two issues listed are a function of design convergence at the nanometer scale. With embedded software engineering costs surpassing the hardware development costs for nanometer ICs, it is not surprising that hardware/software co-verification is becoming increasingly important. The same is true of analog circuitry invading what would otherwise have been pure digital ICs. With today's high-speed communications, verifying the complex interactions between digital and analog interfaces is critical to ensuring first silicon success.
Lavi Lev is executive VP and general manager, IC Solutions, Cadence Design Systems. Prior to Cadence, Lev was senior VP of engineering at MIPS Technologies. He has also led engineering teams at Silicon Graphics, MicroUnity Systems, Sun Microsystems, Intel Corp., and National Semiconductor.
Rahul Razdan is corporate VP and general manager, Systems Verification, Cadence Design Systems. Prior to joining Cadence, he held a variety of positions at Digital Equipment Corp., and was involved in building Alpha microprocessors for key workstation and server products. Previously at IBM, he focused on automatic test pattern generation and functional verification.
Christopher Tice is senior VP and general manager, Verification Acceleration, Cadence Design Systems. He was formerly VP of worldwide support for Quickturn, which he joined in 1993. Previously, Tice was director of application services for PiE Design Systems.