It is a known fact that functional verification takes the lion's share of the design cycle. With so many new techniques available today to alleviate this problem, which techniques should we really use? The answer, it so happens, is not straightforward and is often confusing and costly!
The tools and techniques to be used in a project have to be decided upon early in the design cycle to get the best value from these new verification methods. Companies often end up making costly mistakes by underestimating, or sometimes overestimating, the complexity of the design and the skill set required to run these new tools and techniques.
The higher the abstraction level, the easier it is to design; by the same token, the higher the abstraction level, the easier it is to make a bigger mistake. An architectural flaw can end up hurting the entire chip, as opposed to a misconnected wire at the gate level, which can be fixed by a re-spin.
Verilog, for example, makes it possible to design at a fairly abstract level, but it is very easy to make mistakes if one does not know the nuances of the language. The same argument holds true with the many verification techniques and languages available today.
This article gives the reader an overview of the prevalent verification techniques (formal verification, random, directed, constrained random, assertions, property checking) and languages (SystemC, C/C++, SystemVerilog, OpenVera, e). It also examines the place for the various verification techniques and the time one should use them during a traditional digital ASIC design flow.
1.1.1 Design bottleneck
Time is essential to a company's success. Historically, "product time" — the time it takes for a concept to become a production part — has been mainly a function of design time. Whenever a new technology or process is introduced, design time has been, and continues to be, the primary bottleneck.
Design time is a function of silicon complexity, which in turn gives rise to system complexity and affects time to market, as shown in Figure 1.
Figure 1 — Technology cycle
As the number of transistors in designs increased exponentially, a linear increase in compute power or number of engineers was not adequate to reduce design time. To solve this problem, the EDA industry stepped in to introduce the concept of design abstraction through automation. Language-based solutions such as Verilog and VHDL were introduced.
This design catch-up game is still in place. The latest languages to be introduced and supported by the EDA world are SystemC and SystemVerilog. For the current technology processes, design complexity is well understood and the design bottleneck has been overcome to some extent, thanks to the productivity gains from EDA tools.
Having solved the first round of problems, the focus now is on the effects of those first-order solutions, such as the verification bottleneck.
1.1.2 Verification bottleneck
Design productivity growth continues to remain lower than complexity growth — but this time around, it is verification time, not design time, that poses the challenge. A recent statistic showed that 60-70% of the entire product cycle for a complex logic chip is dedicated to verification tasks. Verification of complex functions that we can build using new design tools poses a challenge to reducing the total product time.
Figure 2 — Design and verification gaps
The verification bottleneck is an effect of raising the design abstraction level, for the following reasons:

1) Designing at a higher abstraction level allows us to build highly complex functions with ease. This increase in design complexity then results in almost doubling the verification effort: we have doubled the functional complexity and hence its verification scope.

2) Using a higher level of abstraction for design, transformation, and eventual mapping to the end product is not without information loss and misinterpretation. For instance, synthesis takes an HDL-level design and transforms it to the gate level. Verification is needed at this level to ensure that the transformation was indeed correct, and that design intent was not lost. Raising the level of abstraction also brings about the question of how the code that describes the design is interpreted during simulation.

Other factors that affect the verification problem are:

1) Increase in functional complexity because of the heterogeneous nature of designs today; for example, the co-existence of hardware and software, analog and digital.

2) The requirement for higher system reliability, which forces verification tasks to ensure that a chip-level function will perform satisfactorily in a system environment, especially since a chip-level defect has a multiplicative effect.

Statistics quoted in a Synopsys white paper show that the verification problem is real and is costing companies a lot of money. For example:

- Chip flaws because of design errors. 82% of designs with re-spins resulting from logic and functional flaws had design errors. This means that corner cases were not covered during the verification process and that bugs remained hidden in the design all the way through tapeout.

- Chip flaws because of specification errors. 47% of designs with re-spins resulting from logic and functional flaws had incorrect or incomplete specifications; 32% had changes in specifications.

- Problems with reused and imported IP. 14% of all chips that failed had bugs in reused components or imported IP.

- Effects of a re-spin. A re-spin can cost a company up to $100,000. In addition, it delays product introduction and adds to expenses due to failing systems that use these defective chips.

What options do companies have for tackling verification bottlenecks?

- Reduce chip complexity. Practically, this is not possible because of customer demand for more functionality.

- Reduce the number of designs. This solution affects a company's long-term goal of being profitable.

- Increase resources. Another alternative is to increase the number of designers or verification engineers. This works to some extent, but does not meet today's demands for verifying exponentially more complex chips with a limited amount of time and money.

- Increase the productivity of designers. This area is open for optimization. Productivity gains have been achieved by improving compute power and by using personal tools such as Microsoft Excel. While these have been of great help in capturing test and verification plans, the majority of the time is still spent coding test cases, running them, and debugging.

- Increase verification productivity. This has obvious potential for productivity gains, and the rest of this article concentrates on it.

To increase verification productivity, the EDA industry came up with a solution similar to the one used to solve the design bottleneck: the concept of abstraction. Higher-level languages such as Verilog and VHDL were introduced to verify chips; these included constructs such as tasks, threading (fork, join), and control structures such as "while." This provided more control to fully exercise the design on all functional corners. However, these constructs were not synthesizable and hence were not used by designers as part of the actual design code.
As complexity continued to grow, new verification languages were created and introduced that could verify complex designs at various levels of abstraction. Along with new verification languages came technologies and tools that supported them.
So what does all this mean for the chip vendors? They have to evaluate new tools. Engineers have to be trained on these new tools and technologies. New tools and resources have to be included in the cost structure of R&D expenses. The company as a whole has to overcome a learning curve in a short time. Risk evaluation for these new tools needs to be performed. And integration and interoperability of new tools with existing technologies needs to be considered.
1.2 Verification versus validation
In addition to the verification problem, chip companies are grappling with validation time. This section describes how "validation" differs from "verification" and sets the stage for the subsequent section on various verification technologies.
Kropf defines "validation" as the "process of gaining confidence in the specification by examining the behavior of the implementation." Recently there was discussion on the subject of "verification vs. validation" in the on-line Verification Guild. Many views were presented regarding the difference. One view was that "validation ensures it is the right design, while verification ensures that the design is right." Another view was "verification means pre-silicon testing (Verilog/VHDL simulations) while validation is post-silicon testing (testing silicon on boards in the lab)."
Whether it is validation or verification, two things need to happen to ensure that the silicon meets the specification:
- The chip specification is interpreted correctly (typically through documentation and sometimes modeling).
- The interpretation is captured and implemented correctly (typically through HDL) and synthesized into silicon and packaged as a chip.

For the purposes of this article we will consider the second step as verification, and the first step as validation. Figure 3 provides an overview of the prevalent design flow that is used in the industry to ensure that the above two steps are met.
Figure 3 — Typical design flow
Depending on the complexity of the function being implemented, some of these steps may be skipped or more steps added. For example, if we know that a certain design is purely hardware-oriented and does not involve drivers or software, one can jump directly from abstraction level 3 to abstraction level 1 (no need for a hardware/software trade-off). An example of this would be a PLL (Phase Locked Loop) design.
It is important to note that equivalence must always be maintained as we step down the levels of abstraction to ensure that the lowest level of abstraction meets the requirements of the system specifications. For example:
1) Equivalence between the chip specification (typically a text document) and its C-model is established when the C-model is put in a system environment and meets all the system requirements as described in the specification. Equivalence of this type is often said to be functional in nature.

2) Equivalence may be established between a C-model of a specification and its HDL implementation by comparing the outputs of the C-model (now the reference, or golden, model) and the HDL implementation for a given application. In the absence of a C-model, an "expected data model" (a behavioral model that has passed the functional equivalence test described above) is used. Equivalence of this type is also considered functional.

3) The HDL implementation (now the reference, or golden, model) and the gate-level description (after synthesis) are established to be equivalent by using a logic equivalence check. At this point the equivalence is logical in nature because the design is in the form of bare logic gates, and functions can be expressed as logical expressions.

1.3 Current verification technologies and trends
Figure 4 shows a snapshot of the various methods and technologies that are available to companies today.
Figure 4 — Verification methodologies
1.3.1 Dynamic functional verification
The most widespread method of functional verification is dynamic in nature. It is called "dynamic" because input patterns/stimulus are generated and applied to the design over a number of clock cycles, and the corresponding results are collected and compared against a reference/golden model for conformance with the specification.
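The stimulus-apply-compare loop described above can be sketched in a few lines. Everything here is an invented stand-in for illustration: the DUT is a tiny 8-bit adder model, and `dut_step`/`golden_step` are hypothetical names, not any real simulator's API.

```python
# Sketch of the dynamic-verification loop: drive stimulus into a DUT
# model cycle by cycle and compare each result against a golden model.

def golden_step(a, b):
    """Reference model: what the spec says the design should do."""
    return (a + b) & 0xFF          # 8-bit adder, wraps on overflow

def dut_step(a, b):
    """Design under test: here it matches the spec, so the run passes."""
    return (a + b) % 256

def run_simulation(stimulus):
    """Apply each (a, b) vector on one 'clock cycle' and check the result."""
    mismatches = []
    for cycle, (a, b) in enumerate(stimulus):
        expected = golden_step(a, b)
        actual = dut_step(a, b)
        if actual != expected:
            mismatches.append((cycle, a, b, expected, actual))
    return mismatches

print(run_simulation([(1, 2), (200, 100), (255, 255)]))  # [] -> no mismatches
```

In a real flow the golden model would be the C-model or expected-data model discussed earlier, and the DUT would be the simulated HDL.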
A simulator is used to compute all the values of all signals and compare the specified expected values with the calculated ones. Currently, industry has provided the choice of two types of simulators:
- Cycle based simulator. The simulator has no notion of what happens within a clock cycle; it evaluates signals in a single shot, once per cycle. This type of simulator is typically faster, since each signal is evaluated only once per cycle.
- Event based simulator. These simulators take events (within a clock cycle or at the clock boundary) and propagate them through the design until a steady state is achieved.

Random/directed functional verification
A major drawback of dynamic simulation is that only the typical behaviors, and not all possible behaviors of a chip, can be verified in a time-bound simulation run. The main reason for this is that chips are tested for the known "test-space" using directed tests. Even testing the known test-space can take a long time. For example, verifying the full test-space of a simple adder that adds two 32-bit operands would take 2^32 x 2^32 = 2^64 clock cycles!
When the logic gets more complex, the verification space increases. This brings about random dynamic simulation, which provides random stimulus to the design in an effort to maximize the functional space that can be covered. The problem with random testing is that for very large and complex designs, it can be an unbounded problem.
To solve this problem, the EDA industry introduced higher-level verification languages such as Open Vera, e, and SVL (SystemC Verification Library). These introduced concepts such as constrained-random stimulus, random stimulus distribution and reactive test benches.
In addition to the introduction of randomization features, new verification languages and tools increased productivity by decreasing the amount of time companies spent on building various test case scenarios for stimulus generation. For example, the test scenarios can be written at the highest level of abstraction and can be "extended" to any lower level of abstraction by using powerful object-oriented constructs.
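The constraint and "extend" ideas above can be illustrated with a small sketch. The scenario classes, field names, and bounds below are invented for illustration; in Vera, e, or SystemVerilog the constraint solving is built into the language, where here it is modeled with simple rejection sampling.

```python
# A minimal sketch of constrained-random stimulus generation with an
# object-oriented "extended" scenario, in the spirit of the languages above.
import random

class PacketScenario:
    """Base scenario: random payload length within legal bounds."""
    MIN_LEN, MAX_LEN = 1, 1500

    def __init__(self, seed=None):
        self.rng = random.Random(seed)

    def constraint(self, length):
        return self.MIN_LEN <= length <= self.MAX_LEN

    def randomize(self):
        # Rejection sampling: draw until the constraint holds.
        while True:
            length = self.rng.randint(0, 2048)
            if self.constraint(length):
                return length

class ShortPacketScenario(PacketScenario):
    """Extended scenario: tighten the constraint to a corner of the space."""
    def constraint(self, length):
        return super().constraint(length) and length <= 64

base = PacketScenario(seed=1)
corner = ShortPacketScenario(seed=1)
assert all(1 <= base.randomize() <= 1500 for _ in range(100))
assert all(1 <= corner.randomize() <= 64 for _ in range(100))
```

The point of the subclass is the productivity claim in the text: a corner-case scenario is written by extending the base scenario's constraints, not by rewriting the test.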
When using dynamic verification, companies typically want an estimate of the functional space covered, captured in quantifiable terms. These include:

- The number of lines of code that were verified (line coverage)
- How many of the logical expressions were tested (expression coverage)
- How many states in an FSM design were reached (FSM coverage)
- The number of ports and registers that were toggled both ways during a simulation run (toggle coverage)
- The number of logical paths in the design code that were covered (path coverage)

This can be achieved by using tools such as code coverage and lint tools.
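Two of the metrics listed above (line coverage and FSM coverage) can be illustrated with a toy example. The FSM, its state names, and the coverage bookkeeping are all invented; a real flow would rely on a coverage tool rather than hand-instrumented sets.

```python
# Toy illustration of line coverage and FSM (state) coverage.
# Each branch of fsm_step records that it executed; each reached state
# is recorded so unreached states show up as a coverage hole.

def fsm_step(state, inp):
    covered_lines.add("fsm_step:entry")
    if state == "IDLE" and inp:
        covered_lines.add("fsm_step:idle->busy")
        return "BUSY"
    if state == "BUSY" and not inp:
        covered_lines.add("fsm_step:busy->idle")
        return "IDLE"
    covered_lines.add("fsm_step:hold")
    return state

covered_lines = set()
visited_states = set()

state = "IDLE"
for inp in [1, 1, 0, 0]:          # a short directed test
    state = fsm_step(state, inp)
    visited_states.add(state)

all_states = {"IDLE", "BUSY", "ERROR"}
fsm_coverage = len(visited_states & all_states) / len(all_states)
print(f"FSM coverage: {fsm_coverage:.0%}")   # ERROR never reached -> 67%
```

The unreached "ERROR" state is exactly the kind of gap a coverage report exposes: the test plan must either add a test that reaches it or justify why it is unreachable.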
Designers use assertions as placeholders to describe assumptions and behavior (including temporal behavior) associated with a design. An assertion fires during dynamic simulation when the design violates the specification or assumption it encodes. Assertions can also be used in a formal/static functional verification environment.
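A temporal assertion of the kind mentioned above can be sketched as a checker that watches a signal trace. The `req`/`gnt` signals and the 2-cycle bound are invented for illustration; in SystemVerilog or PSL this would be a one-line property, and here it is an explicit monitor.

```python
# Sketch of a temporal assertion checked over a simulation trace:
# "a request must be granted within 2 cycles."

def check_grant_latency(trace, max_wait=2):
    """Return the cycles at which the assertion fires (empty = pass)."""
    failures = []
    pending_since = None
    for cycle, (req, gnt) in enumerate(trace):
        if gnt:
            pending_since = None               # grant satisfies the request
        elif req and pending_since is None:
            pending_since = cycle              # start the latency window
        if pending_since is not None and cycle - pending_since > max_wait:
            failures.append(cycle)             # assertion fires
            pending_since = None
    return failures

good = [(1, 0), (0, 1), (1, 0), (0, 0), (0, 1)]   # every req granted in time
bad  = [(1, 0), (0, 0), (0, 0), (0, 0)]           # req never granted
assert check_grant_latency(good) == []
assert check_grant_latency(bad) == [3]
```

The same property, written once, can run as a simulation monitor or be handed to a formal tool, which is the reuse argument made later in this article.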
1.3.2 Hybrid functional verification
Typically in this method, dynamic simulation is performed and the results are captured and used as inputs for static verification. During static verification, logic equations/symbols are propagated through the design as opposed to values (as in dynamic simulation). This technique is less exhaustive than formal verification, but may prove more effective than pure dynamic simulation, as it starts off where dynamic simulation left off.
1.3.3 Static functional verification
In static functional verification there is no input stimulus applied to the design. Instead, the design is mapped onto a graph structure that describes its function using BDDs (Binary Decision Diagrams) or other mathematical representations that specify the design function over all time. These mathematical representations are then verified by proving or disproving properties over them, which is done by searching for contradictions in the mathematical structure while propagating values with and against the signal flow.
Current tools target the static verification market in two ways:
- Using assertions. These are design constraints that are specified and formulated in the model/design itself (using design/verification languages such as SystemVerilog, Open Vera, Verilog, or VHDL).
- Using properties. These allow specification of the properties using a property language (such as PSL/Sugar).

1.3.4 Equivalence verification
In order to make sure that the gate level representation is the same as the HDL implementation, an "equivalence check" is performed by using matching points and comparing the logic between these points. A data structure is generated and compared for output value patterns for the same input pattern. If they are different, then the representations (in this case gate and RTL) are not equivalent. Equivalence checking is sometimes performed between two netlists (gate level) or two RTL implementations when one of the representations has gone through some type of transformation (Figure 3).
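The matching-point comparison described above can be sketched for a single-output combinational function. The majority function and both of its "representations" are invented for illustration, and real equivalence checkers compare BDD/SAT structures rather than brute-forcing the input space.

```python
# Sketch of combinational equivalence checking: take one matching point
# (the output) and compare two representations over every input pattern.
from itertools import product

def rtl_majority(a, b, c):
    """'RTL' view: majority of three bits, written behaviorally."""
    return int(a + b + c >= 2)

def gate_majority(a, b, c):
    """'Gate-level' view: the same function expressed as AND/OR gates."""
    return (a & b) | (b & c) | (a & c)

def equivalent(f, g, n_inputs):
    """Exhaustively compare f and g over all 2**n_inputs patterns."""
    return all(f(*bits) == g(*bits)
               for bits in product((0, 1), repeat=n_inputs))

assert equivalent(rtl_majority, gate_majority, 3)   # representations match
```

Exhaustive comparison is only feasible here because there are 2^3 patterns; the whole point of BDD/SAT-based checkers is to get the same guarantee without enumerating an exponential input space.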
Some practical reasons for the design representations to be different are as follows:
- Synthesis algorithms/heuristics. Depending on the constraints on the synthesis tools (area, time, power), the synthesis tools will optimize the logic to derive appropriate gate level representation. In order to do so, the synthesis tools will use heuristics and logic minimization algorithms.
- Abstraction level. Sometimes the HDL implementation may differ from the designer's intent due to language limitations, or because of the designer's inability to predict how synthesis tools interpret and transform a particular language construct into a gate level representation.
1.3.5 Verification trends

This section describes the trends and forces that are shaping the world of verification.
Design errors could be located at the interfaces of modules or inside the modules that eventually get integrated to form chips. Using dynamic simulation, bugs not found at the module level, for instance, can show up during the subsystem level or system level. Fixing a module level bug during integration could possibly introduce new bugs at that level, causing the turn-around time to increase further.
Using static functional verification, each step of module development is exhaustively verified, ensuring radically better subsystem/system quality and reliability. It has been claimed that using static/formal functional verification, we can find more bugs, faster.
On the flip side, the drawbacks of using static functional verification are:

- Current tools can be run at the module level only.
- It cannot handle large designs (typical capacity is 100K-150K gates).
- It cannot handle highly complex designs.
- Current formal tools sometimes have a problem with being time-unbound (mainly because they tend to be based on a set of heuristics rather than any single algorithm). This means that formal functional verification could, in some cases, take longer than dynamic simulation.
- The design has to be written in synthesizable RTL.

In addition to the above technical challenges, there is the added problem of competing standards for assertion-based formal functional verification (for example, PSL vs. SystemVerilog). Currently, static functional verification is used to verify modules that are mature enough to be synthesized. In addition, dynamic verification (random and directed) is used to rigorously verify the module before integration.
Static verification has done to functional simulation what static timing analysis (STA) did to dynamic timing analysis (gate level simulation). But gate level simulation is not dead; similarly, dynamic simulation will continue to dominate the functional verification space until formal verification tools provide a method to converge on results and, in general, mature further.
A new trend points towards combining design and verification languages. This will bring about a tremendous boost in productivity, improve system reliability, and enhance design quality because the ambiguity and misunderstanding involved with the use of multiple tools and languages is now eliminated.
Until the time this trend is proven on real world designs, companies continue to rely on current high-level design languages (Verilog/VHDL) and use either proprietary verification languages (Open Vera, e) or good old-fashioned Verilog/VHDL.
SystemC and other high-level design description languages play an important role in design flows that involve hardware/software tradeoffs and designs that have software running on hardware, as in SoCs (Systems-on-Chip). SystemC and other high-level design languages continue to play an important role in architectural modeling and validation. SystemC is also used where the architectural model components can be reused for verification as transactional models.
1.4 Criteria for choosing the right verification methodology
Engineers are grappling with extreme design complexity in an environment of decreasing time to market and tighter cost constraints. In such an environment, it may seem that filling the holes in existing methodologies will be sufficient, and that spending time on new technologies can be postponed unless the value proposition is high. However, it has been shown that companies that spend a higher percentage of R&D on new technologies make more efficient use of their R&D spending, enjoy faster time-to-market, grow faster, and are more profitable.
Having said the above, it needs to be reiterated that companies have to evaluate the methodologies and technologies based on their individual needs and their core values. Before introducing new technologies into their tool flow, they should ask themselves the following questions and make appropriate tradeoffs.
- Are we a market leader or follower in a certain product line?
- Are the CAD infrastructure, tools, and methodology centralized or distributed?
- Is the evaluation of new verification technology for a first-generation product?
- What is the direct and indirect effect of new verification technologies on cost and time-to-market?
- Can the new tool handle varying design capacities?
- How easy is the tool to use?
- What is the level of documentation and support available for the tool?
- Will the new tools and methodology interoperate with existing in-house tools and methodologies?
- What will be the ROI (Return on Investment) for new tools (including service, training, consulting, compute, and manpower resources)?
- Can it support multiple geographical design locations?
- Do the designers and verification engineers we employ have a mindset for new technology?

The remainder of this section describes some simple trade-offs that companies can employ in making decisions regarding tools, languages, and methodologies. These are described from a product, system, and methodology perspective.
Companies that are involved primarily in producing memory chips, or those that are memory intensive as opposed to logic intensive, may not require intensive logic verification at all. However, memory intensive designs offer other challenges such as routing, technology scaling, and power.
Companies that make pure ASIC chips that do not run any software on them might not have to perform the hardware/software tradeoffs or might not have to run tests that capture software running on hardware. For example, a SERDES (serial/de-serializer) chip will require a different type of verification and modeling methodology as compared to a SoC, which has both hardware and software.
For large corporations that have diversified product lines, the verification methodology has to encompass the varied requirements of the various products.
Historically, logic chip vendors were not responsible for verifying that the chip met the functional and performance requirements on a reference system. System vendors today are asking chip vendors to perform system reference verification before placing bulk orders; this requires modeling and verifying the chip at an additional level of abstraction beyond what chip companies were used to.
This calls for architectural models, reference models, writing complex application level tests, and elaborate test bench setups. For these types of challenges, companies can either spend a great deal of time and effort using existing languages, technologies and methodologies or embrace new technologies (such as Vera or Specman). In addition, chip vendors have to ensure that the reference models and verification suites used are done in an environment that is compatible with those of the customers.
Static functional verification in combination with dynamic simulation can be used to verify module level functionality to a high level of confidence before chip integration. Dynamic random and directed simulations can then be used at the system level. System-level verification can be effectively performed using dynamic simulations by making sure that we are continuously verifying major corners of the functional space.
It might be easier to first run random simulations to catch a large number of bugs in the initial stages of verification, and then constrain the random simulation to make sure that the test space (as specified in the test plan) has been fully covered on the device. Constrained-random verification should be considered to home in on functional coverage metrics.
The term "functional coverage" describes a parameter that quantifies the functional space that has been covered, as opposed to code coverage, which quantifies how much of the implemented design has been covered by a given test suite. Directed simulation can then be used to cover corner test space at the end of the verification cycle.
Assertions and properties can be made to work in the background during static functional verification (at the module level) and can be reused in a dynamic simulation environment (at both module and system level). They are also useful if the module is going to be turned into IP because the assertions will constantly check the IP's properties when it is reused.
With time, devices with smaller feature sizes (90nm and 65nm) will be in production. We are seeing design bottlenecks in products that use these technologies (as seen in Figure 1). Companies are trying to solve issues such as routing, cross-talk, and soft error rate this time around.
Once these design bottlenecks are overcome, and as chip vendors pack more logic on chips with smaller features, the next round of verification problems is foreseeable. New tools and methods are constantly being introduced to increase design productivity. It is necessary to raise the level of abstraction for design and for verification to contain the growing complexities. Convergence of design and verification languages is now being seen through the introduction of SystemVerilog.
Validation continues to play an important role by increasing product quality and indirectly reducing overall product time by enabling first-pass silicon. SystemC and modeling languages will continue to enable architecture validation of logic intensive products and ensure a "correct by design" methodology.
Productivity is being further raised through the use of properties, assertions, and through the introduction of formal verification tools. Do we see dynamic simulation going away in the near future? No. Engineers still rely on and trust dynamic simulation because of its ability to verify large and highly complex designs. In the long run, we could see a trend where formal verification might take the front seat while dynamic simulations will be run for sanity checks.
In this article we presented some trade-offs that companies could make in order to adopt best-in-class tools and technologies to achieve long-term success. New technologies might mean high short-term costs and expenses, but could pay off in the long run by building confidence in the products that the companies build. These decisions could be easy for some companies and a hard choice for others. Unfortunately, a tool, language, or technology that meets every company's verification and design requirements is still a dream.
References

- SNUG (Synopsys Users Group) paper: "The Next Level of Abstraction: Evolution in the Life of an ASIC Design Engineer."
- Roloff, M., Chan, A., Cote, C., Srinivasan, R.: "Growth through Product Development: New Research Insights." PRTM Insight, 2000.
- Kropf, T.: Formal Hardware Verification: Methods and Systems in Comparison. Springer, 1997.
Rangarajan (Sri) Purisai is a senior logic design engineer at Cypress Semiconductor's Network Processing Solutions Group. Currently, he is working on the architecture and design of high performance network search engines and co-processors.