I once overheard a verification manager say, "What did I do to deserve this living hell?" Well, that is the point. It's often what we do or don't do in terms of a methodology that ultimately pulverizes our ivory tower view of the perfect process and converts it into a towering inferno.
Now, before I lead you to verification salvation with my words of wisdom, I want to first encourage you to attend this year's Design Automation Conference, where a team of verification experts (or in some cases, evangelists) will debate this very issue in a panel Wednesday, June 15 entitled "Is methodology the highway out of verification hell?"
If you have ever attended a DAC panel session, you have probably been frustrated by that one panelist who seems to act just like the Energizer Bunny (that is, keeps going, and going, and going). Hence, to ensure I have sufficient time to get my point across, I've decided to take the proactive measure of writing this article — touching on a few important points that I might lack sufficient time to discuss during my presentation.
To begin, let's examine how the constraints on a design project have changed over the past ten years. Until recently, the duration of many design projects spanned many years (for example, processor chips and high-end ASICs). For these high-performance systems, the requirements specification generally remained static throughout the project, with only minor changes or corrections occurring.
What's even more intriguing is recognizing how yesterday's best practices (optimized for high-end, high-performance design) have actually influenced the way we do design in general today. In fact, if you examine many of today's processes, tools, and methodologies, you will learn that they were initially developed to support leading edge designs, often by internal CAD teams.
Fast forward ten years, and we see a very different world. Today, rapidly changing market demands, combined with fierce competition, are placing unprecedented constraints on the design process. In fact, where the process of specification was fairly static in the past, today the process has become extremely dynamic.
I've seen many cases where, to be competitive, a last minute decision is made to add a new feature, only days before tapeout. There are also cases where partially implemented features must be surgically removed from the design in order to meet the tapeout date. Hence, the ability of engineering teams to quickly validate changing requirements and feature sets, often in the middle of the development process, while still hitting tight development schedules with high quality, is key to success in today's world.
Now, this is partially where the problem arises. As I mentioned, many of today's methodologies were designed to support yesterday's high-performance design constraints. Keep in mind that a three to six month design cycle was not a constraint for previous generation designs. And unfortunately, many of the methodologies in use today are not optimized for short design cycles with rapidly changing requirements.
Yesterday's best practices
I can always quickly assess the verification maturity or sophistication of an engineering team by examining the processes and methodologies they have put in place. For example, teams I place lowest on the totem pole of my capability maturity model predominantly depend on Verilog and VHDL testbenches to generate directed tests (note that these methodologies were developed decades ago).
Moving up the totem pole, you will find organizations adopting random stimulus generation, possibly combined with simple coverage techniques. Continuing up the totem pole, you will find assertion-based and coverage-driven verification (CDV), generally leveraging sophisticated functional coverage models.
Note that many of these so-called more advanced techniques have been around since the mid-to-late 1990s, and they were common practice on the high-performance design teams I worked on during this period. Yet even the best of these approaches shares an inherent flaw that each progressively more advanced methodology merely attempts to patch: they all try to validate every possible behavior of a design (that is, to achieve what can be called 100% actual coverage) using dynamic simulation.
I would argue that industry statistics on design success rates continue to shed light on the inadequacies and limitations of traditional verification methodologies. If truth be told, in many other disciplines (such as manufacturing) similar success or failure rates would be unacceptable.
Now don't get me wrong. Simulation has been for decades, and will continue to be, an important component of functional verification. Still, it is important to recognize the inherent limitations of simulation. In fact, there is a common myth among many engineers that the process of simulation scales. The reality is that it does not.
Note that many of today's design blocks are approaching the size and complexity of the full chips for which yesterday's methodologies were originally developed. To dig deeper into this myth, let's examine the inherent limitations of simulation, which can be broken down into three components: verification starts late, stimulus generation, and simulation runtime.
Verification starts late. In traditional simulation methodologies, design concurrency and complexity are typically explored late in a project's design cycle when the full chip is integrated into a system simulation environment. As you have perhaps experienced, at this late design stage there are typically massive design interdependencies in the fully developed RTL (see Figure 1). And these interdependencies are a train wreck waiting to happen, because any late-stage bug can have significant collateral damage across multiple RTL files.
Figure 1 — Simulation explores complexity and concurrency late in the flow
Stimulus generation. Let's take a closer look at the problem of stimulus generation. Generating good simulation stimulus is analogous to generating quality stuck-at-fault vectors for manufacturing test. In fact, if today's designs did not contain scan chains, it would be a hopelessly intractable problem to generate and verify a set of sequential patterns that are targeted at activating and then propagating internal faults to an observable point.
The same problem exists today in simulation. That is, generating sequential input stimulus that can activate a deeply nested block with high coverage from the input of a large chip is an intractable problem. So as you can well imagine, with an increase in design size comes a decrease in simulation coverage quality.
Note that we use coverage techniques (ranging from low-level code coverage to high-level functional coverage) to measure the quality of our input stimulus, and attempt to find holes or missing input stimulus. As I previously stated, organizations that are typically higher on the capability maturity model totem pole generally invest the time and resources required to manually create a functional coverage model to measure the quality of their simulation runs.
However, it is important to understand that the process of creating a functional coverage model is not foolproof. In fact, there is a similarity between the problem of directed test generation and functional coverage model creation. Both are manual processes. Just as it is not possible to manually enumerate all possible scenarios required to create directed test, it is also generally not possible to manually identify all possible functional coverage points in a design.
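To make the manual nature of this process concrete, here is a toy coverage model sketched in Python rather than in a hardware verification language; the bus-style opcode and burst-length bins are invented for illustration. The model can only measure the crosses the engineer thought to enumerate, so any scenario missing from the bins is invisible to it:

```python
from itertools import product

# Toy functional coverage model for a hypothetical bus interface.
# The engineer manually enumerates the "interesting" cross of
# opcode x burst length -- exactly the manual step described above.
OPCODES = ["READ", "WRITE"]
BURSTS = [1, 4, 8]

bins = {cross: 0 for cross in product(OPCODES, BURSTS)}

def record(opcode, burst):
    """Sample one simulated transaction against the coverage model."""
    if (opcode, burst) in bins:
        bins[(opcode, burst)] += 1

# Transactions observed during a (pretend) simulation run.
for txn in [("READ", 1), ("READ", 4), ("WRITE", 1), ("WRITE", 8)]:
    record(*txn)

hit = sum(1 for count in bins.values() if count > 0)
coverage = 100.0 * hit / len(bins)
holes = [cross for cross, count in bins.items() if count == 0]
print(f"functional coverage: {coverage:.1f}%")  # prints 66.7% -- two bins unhit
print("holes:", holes)
```

A real covergroup in SystemVerilog works the same way at heart: every bin is hand-enumerated, so a forgotten corner case (say, a back-to-back read following a write) produces no hole report at all, because it was never modeled.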
Simulation runtime. Another problem with attempting to scale simulation approaches is the inherent limitation of simulation algorithms. To illustrate this point, consider a simple 32-bit comparator. Its two 32-bit operands yield 2^64 distinct input vectors, so achieving 100% actual coverage of this circuit on a fast simulator (that is, one vector per microsecond) would take 584,941 years. Note that this is a simple combinational datapath circuit. Imagine trying to scale simulation to address your multimillion-gate design!
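The arithmetic behind that figure is easy to check. This back-of-the-envelope Python sketch assumes two 32-bit operands (2^64 input vectors) and one vector per microsecond:

```python
# Back-of-the-envelope check of the comparator figure: a 32-bit
# comparator has two 32-bit inputs, so exhausting it takes 2**64 vectors.
vectors = 2 ** 64
vectors_per_second = 1_000_000          # one vector per microsecond
seconds_per_year = 60 * 60 * 24 * 365   # ignoring leap years

years = vectors / vectors_per_second / seconds_per_year
print(f"{years:,.0f} years")            # prints "584,942 years"
```

The exact number of years depends on how you round the year length, but the conclusion is unchanged: exhaustively simulating even a trivial combinational datapath block is hopeless.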
Today's best practices
So, what's the alternative, you ask? Recall that our goal is to create new methodologies that enable the engineering team to quickly validate changing requirements and feature sets, often in the middle of the development process, while still hitting tight development schedules with high quality. Hence, the alternative is to overcome the inherent limitations of yesterday's methodologies.
The methodology I'm proposing moves away from the traditional verify-after-the-fact approach to design to one of prove-as-you-design. This approach enables the engineer to design quality in from the start by formally proving fundamental high-level requirements (derived from the micro-architecture specification) during the development process.
That is, the verification engineer (or designer) incrementally proves that each aspect of the block functionality works as intended in parallel with the RTL code as it evolves. In addition, the methodology I am proposing allows the engineer to quickly validate changing requirements and additional feature sets added late in the design cycle in a non-disruptive manner.
Now, I will be the first to admit that merely attempting to prove an ad-hoc set of assertions contained within a block is insufficient for achieving the breakthrough needed to address today's verification challenges, since an ad-hoc set of implementation assertions does not specify the intended behavior from a micro-architectural level. On the other hand, the key to the methodology I am proposing is to prove high-level requirements, which are equivalent to output checks found in a block-level simulation testbench. Only then can we give up the need for block-level simulation.
One key advantage to the methodology I am proposing is that engineers can find bugs while the design is still fluid and the cost of bug fixes — as well as the risk of collateral damage associated with those bug fixes — is still low. In fact, what I've observed is that engineers can instantly explore massive design concurrency and complexity related to various high-level requirements of the design using functional formal verification (see Figure 2).
Figure 2 — Formal verification explores massive complexity early in the flow
Once formally proved, clean blocks are then integrated into the system-simulation environment. The result is a savings in both simulation time and system-level debugging time, which adds predictability to the process.
Let's compare the formal verification-based methodology with a simulation-based approach. The formal verification benefits include: verification starts early, no stimulus generation, and quick exhaustive verification.
Verification starts early. Unlike simulation-based approaches, which explore complexity and concurrency late in the design flow, formal verification can be used to explore complexity and concurrency at any point in the design phase, including a design phase before simulation testbenches are developed.
No stimulus generation. Formal verification uses mathematical techniques to exhaustively prove that an RTL model satisfies its high-level requirements. Unlike simulation, formal verification does not depend on testbenches, simulation vectors, or any form of input stimulus to verify the design.
This independence from traditional simulation-based methodologies comes about because the formal verification tool automatically compiles the RTL model and the high-level requirements into a mathematical representation, and then algorithmically and exhaustively proves that the implementation is valid with respect to its specification for all possible input sequences. Note that when a high-level requirement proves true using formal verification, there is no legal set of input vectors for which the design could fail. However, if the high-level requirement proves false, then the formal verification tool, unlike simulation, isolates the root cause of the bug, which dramatically improves a project's design productivity.
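To illustrate what "proved for all possible input sequences" means without appealing to any particular tool, here is a minimal explicit-state model checker in Python. The 4-bit saturating counter and its invariant are invented for the example, and real formal engines use far more scalable symbolic techniques (BDDs, SAT) than this brute-force reachability search, but the principle is the same: every reachable state is examined under every input, so no stimulus is ever generated:

```python
from collections import deque

# Tiny "design": a 4-bit saturating up/down counter with an
# enable input. State = counter value; inputs = (enable, up).
def step(count, enable, up):
    if not enable:
        return count
    if up:
        return min(count + 1, 15)   # saturate at 15
    return max(count - 1, 0)        # saturate at 0

# High-level requirement (invariant): the counter never wraps,
# i.e. its value always stays within [0, 15].
def requirement(count):
    return 0 <= count <= 15

# Explicit-state model checking: breadth-first search over every
# state reachable from reset, under every possible input, so the
# property is proved for all input sequences.
def prove(initial=0):
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        if not requirement(state):
            return False, state      # counterexample state found
        for enable in (0, 1):
            for up in (0, 1):
                nxt = step(state, enable, up)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return True, None                # invariant holds in all reachable states

holds, cex = prove()
print("proved" if holds else f"failed in state {cex}")
```

If the requirement were false (say, the counter wrapped past 15), the search would return the offending state, which plays the role of the counterexample a formal tool hands back to isolate the root cause of the bug.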
Quick exhaustive verification. In general, I have found most blocks can be exhaustively verified (that is, 100% actual coverage) in a range of a few days to a few weeks, depending on the block. In fact, it usually takes more time to create a block-level simulation testbench used to check a set of high-level requirements than the time it takes to complete the entire process of formally verifying the same set of high-level requirements. And I am not even talking about the additional time required to simulate the testbench!
ABV levels of maturity
Okay, so having been nicknamed Dr. ABV, and to be fully buzzword compliant, I'm sure you would be disappointed if I didn't include some demystifying text on the role of assertions in our methodology discussion. In Figure 3, I illustrate what I refer to as levels of maturity within an ABV Capability Maturity Model.
I believe that engineering teams are generally interested in improving their verification methodologies. Yet, unlike the levels of maturity within simulation-based methodologies that I previously discussed, the industry lacks a clear understanding of, or direction on, how to effectively incorporate ABV into the verification flow. Hence, I believe that the industry needs a comprehensive framework for evaluating ABV best practices, thus providing a way to measure and improve performance in the application of assertions to achieve 100% actual coverage.
Figure 3 — ABV levels of maturity
Note that, as with Capability Maturity Models applied to other domains of study, I have found that organizations that place on the lower levels of maturity typically have more difficulty achieving predictability in terms of high quality and schedule.
Let's dig a little deeper into the ABV levels of maturity. At level 0 (not performed), the organization currently uses no form of assertions, not even common $display() statements in Verilog. The impact is that they generally experience longer debugging time in simulation.
At level 1 (ad-hoc), you will generally find an organization where a few key members will include some form of assertions in their RTL code. I label this level ad-hoc since there is no project-wide coordinated effort for specifying how to code assertions or use them within the verification methodologies.
At level 2 (planned), you will find organizations that have made a coordinated effort to include ABV as part of their verification process. This approach includes selecting a single assertion language, defining assertion coding and reporting guidelines, and then deciding to use assertions throughout the simulation process. Note that there have been numerous studies supporting the benefit of this approach in reducing simulation debugging time (some studies indicate up to 50%). This benefit is due to improved visibility within the simulation model.
At level 3 (bug-hunting), you will find organizations attempting to overcome one of the inherent limitations of simulation-based methodologies (that is, stimulus generation) by applying formal techniques on the set of embedded assertions contained within the design. Note that I label this as bug-hunting for two reasons: the formal approaches applied here are generally incomplete, and the set of embedded assertions is incomplete and does not represent the design intent from a micro-architectural perspective.
At level 4 (targeted proofs), the organization has identified a few high-level requirements (that is, very high-level assertions that describe micro-architectural intent and not implementation details) and has made a conscious decision that achieving 100% actual coverage on this set of high-level requirements through formal proofs is critical to the success of the project. Note that at this stage it is not necessarily the goal of the organization to eliminate block-level simulation.
At level 5 (fully integrated), the organization has made a conscious decision to overcome the inherent limitations of simulation, thus providing predictability in terms of schedule and quality within the project. This is achieved by replacing block-level simulation with formal verification. Note that in order to take this step, it is necessary to formally prove the same set of checks (or output monitors) that would normally be coded in the block-level testbench. Again, I refer to this type of check as a high-level requirement versus a lower-level embedded implementation assertion.
Of course, the topic of ABV levels of maturity would require a huge discussion in itself — and since I'm starting to resemble the Energizer Bunny, I think I'll defer any additional discussion until sometime in the future. Who knows, perhaps I'll even touch on additional details during this year's "Verification Hell" DAC panel.
Hope to see you at this year's DAC!
Harry Foster serves as chairman of the Accellera Formal Verification Technical Committee. He is author of the books Assertion-Based Design, Principles of Verifiable RTL Design, and Advanced Formal Verification. He researched and developed verification tools and methodologies for over 12 years as a senior member of the CAD technical staff at Hewlett-Packard, and is the original creator of the Accellera Open Verification Library (OVL) assertion monitor standard.