Evolving Verification Environments
By Chris Wilson, Nusym Technology, Inc. -- June 02, 2008 -- edadesignline.com
It was standard practice in the early days of hardware design to design a chip and then verify it. However, this methodology began to break down in the 1980s, when rising chip density and Non-Recurring Engineering (NRE) costs made it too expensive to fix bugs after a chip had been fabricated. In the late 1980s, a shift took place towards pre-silicon verification, in which verification was performed prior to tape-out and in parallel with design. This shift was enabled by the advent of high-level Hardware Description Languages (HDLs) and simulators that allowed engineers to test the design before tape-out.
The switch to pre-silicon verification introduced a number of problems. The first of these was the need for separate design and verification engineering specializations. In traditional methodologies, the designer performed most of the verification. However, this does not work in pre-silicon verification because of the need to design and verify in parallel. The second issue was that verification must commence before the design is complete, and often before the design is fully specified. The design is therefore a moving target, with bugs sometimes introduced as fast as they are fixed, so more effort is required to produce the same quality of results. The third problem, a byproduct of growing verification teams, is that verification engineers do not know the design as intimately as the designers do, resulting in lower productivity and quality. Because of these problems, it became apparent that pre-silicon verification would require greater effort to achieve quality equivalent to the serial verification methodology. As a result, a much higher percentage of overall project effort was devoted to pre-silicon verification, which often became the tape-out bottleneck. This problem persists today, when pre-silicon verification is claimed to consume up to 70% of total project effort.
A number of solutions emerged to address the resulting productivity and quality challenges of pre-silicon verification. The most important of these were fast compiled-code simulators, such as Chronologic's (now Synopsys) VCS, and debugging environments, such as Design Acceleration's (now Cadence) SignalScan. Concurrent with the movement towards pre-silicon verification was the emergence of practical formal verification techniques. The use of HDL-based design entry and synthesis had created a new problem: a difference in abstraction between what was designed and the format used for tape-out. Formal verification-based Equivalence Checking eased this problem by allowing an exact comparison between what the designer entered (Register Transfer Level, or RTL) and what was taped out (gates). At the time, formal verification also promised to revolutionize the verification of design intent by finding all bugs in a design. This capability has not materialized in practice, however, principally because of the gap between actual design sizes and the capacity limits of formal verification, a gap that has continued to grow over time.
With the realization that HDL-based hardware design looks a lot like software design, it made sense to leverage techniques used in designing software to help with the hardware design process. A number of software-inspired tools and techniques were introduced starting in the early 1990s. Static analysis tools, such as InterHDL's (now Synopsys) Verilint, were introduced to check for coding errors. Code coverage tools indicated how much of the design was being exercised. The most important of these new methods, however, were specialized verification languages and their associated test generation technologies, specifically the 'e' language from Verisity and the Vera language from System Science.
Almost immediately after the advent of HDL-based design, companies started building verification environments, principally by linking C functions into the simulator. This code base became very sophisticated over time, with rich libraries of functionality that could be reused from project to project. It made sense to incorporate commonly used functionality into specialized languages to make the job of building complex environments easier. Because large verification environments were, in themselves, extensive software engineering projects, these languages also included high-level programming features that allowed modern software engineering practices, such as Object-Oriented Programming (OOP), to be used in building verification environments.
Probably the most important feature of the new verification languages was the use of static, declarative constraints to specify the legal stimulus set for a design. For example, one can create a class describing the contents of a communications protocol packet and use static constraints on its fields to define legal versus illegal packets. Declarative constraints significantly reduce coding effort and increase readability, which in turn reduces bugs and increases productivity in creating the environment. Another advantage is that the test generator's constraint solver can ensure that randomized values are spread uniformly over the entire legal input space. This is usually difficult or impossible to achieve with imperative code alone, which tends to explore the same regions of the space over and over. Static constraints, combined with constraint solving, therefore find more bugs with less effort, thanks to the ability to search the input space evenly.
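The idea can be sketched in Python. The field names, domains, and constraint below are purely illustrative (they are not 'e' or Vera syntax), and rejection sampling stands in for a real constraint solver:

```python
import random

# Illustrative packet fields and their value domains (hypothetical names).
FIELDS = {
    "kind":   ["data", "ctrl"],
    "length": range(0, 256),
}

def legal(pkt):
    # Declarative constraint: control packets must be 16 bytes or shorter.
    return pkt["kind"] != "ctrl" or pkt["length"] <= 16

def randomize(rng):
    # Rejection sampling: draw uniformly over all field combinations and
    # keep only legal ones, so legal packets come out uniformly distributed.
    # (A production constraint solver achieves this without the retry loop.)
    while True:
        pkt = {name: rng.choice(list(dom)) for name, dom in FIELDS.items()}
        if legal(pkt):
            return pkt

rng = random.Random(42)
packets = [randomize(rng) for _ in range(1000)]
```

Every generated packet satisfies the constraint, and both short control packets and long data packets appear, because the sampler covers the whole legal space rather than whatever corner an imperative generator happens to revisit.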
When designers were doing their own verification, they primarily relied on directed tests and only performed random testing as an afterthought, and then only if there was time. This approach works only if the person doing the verification has intimate knowledge of the design. Verification engineers typically write tests based only on the specification. They rely on random testing to find things that are not apparent from the specification. The advent of constraint-based testing made it easier to start random testing from the very beginning of the verification process. Thus, a switch to a primarily random-based methodology occurred, with directed testing used to "fill in" missing gaps. Today, constraint-based random testing is considered the most effective pre-silicon verification methodology available.
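This "random first, directed fill-in" flow can be sketched in Python as a coverage-driven loop. The coverage bins, stimulus fields, and directed tests below are hypothetical illustrations, not any tool's actual API:

```python
import random

# Hypothetical functional-coverage bins for a packet interface.
COVERAGE_BINS = {"short_pkt", "long_pkt", "ctrl_pkt", "err_pkt"}

def bins_hit(stim):
    # Map one stimulus onto the coverage bins it exercises.
    hits = set()
    if stim["length"] <= 16:
        hits.add("short_pkt")
    if stim["length"] > 200:
        hits.add("long_pkt")
    if stim["kind"] == "ctrl":
        hits.add("ctrl_pkt")
    if stim["inject_error"]:
        hits.add("err_pkt")
    return hits

def random_stimulus(rng):
    # Constrained-random phase: errors are injected only rarely.
    return {"length": rng.randrange(256),
            "kind": rng.choice(["data", "ctrl"]),
            "inject_error": rng.random() < 0.001}

rng = random.Random(1)
covered = set()
for _ in range(500):                       # random tests run first
    covered |= bins_hit(random_stimulus(rng))

# Directed tests "fill in" whatever gaps the random phase left behind.
DIRECTED = {
    "short_pkt": {"length": 8,   "kind": "data", "inject_error": False},
    "long_pkt":  {"length": 255, "kind": "data", "inject_error": False},
    "ctrl_pkt":  {"length": 4,   "kind": "ctrl", "inject_error": False},
    "err_pkt":   {"length": 64,  "kind": "data", "inject_error": True},
}
for missing in COVERAGE_BINS - covered:
    covered |= bins_hit(DIRECTED[missing])
```

The random phase cheaply covers the common bins; only the bins it misses (here, typically the rare error case) need a hand-written directed test, which is the inversion of the older directed-first methodology.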