by Andrea Fedeli, Matteo Moriotti, Umberto Rossi, Franco Toto from STMicroelectronics CR&D - Formal Verification Group
via C. Olivetti 2, 20041 Agrate (MI), Italy
In this paper we discuss different aspects of assertion-based verification in day-to-day applications, for both IP validation and system integration. ABV, originally conceived in the simulation context, can be significantly enhanced by formal support, especially when the focus is on the IP level of abstraction. A few real-life cases are reported, together with a short overview of the commercial tool landscape.
1. Introduction
The level of complexity reached by modern integrated circuits would not be manageable without a solid methodology for both the design and the verification of single design blocks; as time-to-market has become a key factor in a product's success, reuse must be taken into account in order to meet delivery times.
An Intellectual Property is often meant to be just an RTL block; what we want to stress in the following is that reusing blocks in system design is not enough: one should be able to reuse both the block and its specification, in order to check for possible incompatibilities before the integration phase has reached an advanced stage. This framework could eventually lead to a new definition of Intellectual Property that includes both the specification of a circuit and its actual implementation, i.e. the RTL code. This process would greatly benefit from the specification being expressed in a non-HDL language (e.g. a property specification language), in order to make it possible to automatically check the specs for inconsistency and incompleteness, and even to verify compliance of the RTL code with the formal specs.
In section 2 we will describe the requirements for a safe reuse of single blocks, highlighting some possible hurdles to overcome; in section 3 issues and benefits related to the verification environment being reused are discussed; in section 4 a real world example is reported; section 5 reports some considerations on available tools; section 6 concludes the paper.
2. Reusing RTL blocks
Reusing existing blocks to build up new systems has become a common approach when designing new integrated circuits; of course, this practice faces a number of challenges: the main issue is that blocks are first designed with a reference system in mind, meaning that moving them into a different system (figure 1) can place them in an unexpected environment, hence exposing unwanted (often incorrect) behaviors.
In order to avoid such problems, or to uncover them at the earliest integration stages, some guidelines should be followed:
Verification — the block should be guaranteed to be error-free: this means performing the verification on the stand-alone block, before any integration takes place. The reason is to avoid misunderstandings if the system specification changes; hence it is important to verify a block against its specification, regardless of the actual system. Formal verification (namely model checking) plays a key role in reaching this target, provided the specification is clear and complete enough to allow the definition of the whole set of formal properties needed to guarantee adherence to the specs.
Robustness — the block should never reach an unexpected state (from which it is impossible to recover), even when driven in an improper way by surrounding blocks.
Documentation — the documentation should clearly state the environment in which the block is supposed to work: indeed the designer of a block is often biased towards the actual application that will use that block, rather than being driven by the generic specification; as a result, some portions of the specification may be missing from the block's first version, if they relate to scenarios inapplicable to the actual system. In order to achieve reuse it is necessary to include the environment description in the specification of the system.
Figure 1. Reusing a block in two different systems.
As an example, in our experience we have seen a block that was developed for a specific system, hence assuming it would act in an environment exercising only a subset of a standard protocol; when applying formal verification to the block in order to verify protocol compliance, we discovered a number of unsupported features. After notifying the design team, it turned out that this information was fundamental for the time budgeting of their next project, as they were expecting the block (bought from a third party) to be fully compliant with the protocol.
3. Reusing the verification environment
As stated before, correctness is a major requirement for block reusability; formal verification, in particular, plays a major role in guaranteeing the correctness and robustness of a block.
Nonetheless, at the integration stage, problems could still arise due to misinterpretation of the block specifications by either the system integrator or the verification engineer. In both cases, verifying that the constraints assumed during the block verification actually hold on the system (or at least on the surrounding blocks) can help in identifying misunderstandings.
3.1. Static and dynamic verification
Although conceived for formal verification, all the considerations above are obviously valid for verification in general; of course, in a perfect world, it should be very easy to switch from static to dynamic verification and vice versa; a common frame is represented by Assertion Based Verification. Basically, this means expressing the desired behaviors of the block under verification by means of a property specification language that can be understood by both static and dynamic tools.
Thanks to ABV, once the set of desired properties has been defined and coded, it is possible to get the best from each tool: formal and semi-formal tools are generally able to mathematically prove (or falsify) most of the properties; properties that exceed the complexity affordable by static tools can be addressed by dynamic verification (e.g. simulation, emulation, etc.), taking advantage of the greater capacity of such techniques.
What is still missing in the arena is the ability to write the testbench once for both technologies (static and dynamic): indeed, today the environment (i.e. how the inputs to the block are driven) for formal analysis is generally expressed in terms of assumptions (i.e. properties describing the desired behaviors of inputs, rather than expressing the expected behaviors of outputs, see figure 2), while in the dynamic world the testbench is still described as a procedural sequence of assignments to inputs, even if the recent introduction of testbench automation tools has changed a bit the way these sequences are written.
Figure 2. Environment for the static verification: properties check the correctness of the design under verification, while assumptions drive the inputs.
Figure 3. Environment for the dynamic verification: the testbench is driving the input, while assumptions and properties are checking the behavior of both testbench and design under verification.
However, a sort of mixed approach is possible: one can write the testbench as usual and, at the same time, instantiate the assumptions used to constrain the inputs in the static environment (figure 3); those assumptions will fire whenever the testbench violates them, highlighting an inconsistency between the environment description for the dynamic world (namely, the testbench) and the environment description for the static tool (i.e. the set of assumptions). This may be due to three main reasons:
- the testbench is wrong, i.e. it allows behaviors not allowed by specification;
- the set of assumptions is wrong, i.e. it over-restricts the behavior of the design’s inputs;
- the design under verification has a bug, that has been highlighted thanks to some feedback loop contained in the testbench.
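As a minimal sketch of this mixed approach, the following Python fragment (used here as executable pseudocode; the handshake protocol and signal names `req`/`gnt` are illustrative, not taken from any real design) shows an input assumption turned into a runtime monitor that fires when the testbench violates it:

```python
# Hypothetical input assumption, "req must stay asserted until gnt arrives",
# reused as a runtime monitor alongside a procedural testbench.
# Signal names are illustrative only.

def assume_req_until_gnt(trace):
    """Scan a list of (req, gnt) samples, one per clock cycle.
    Return the cycle of the first violation, or None if the
    assumption holds on the whole trace."""
    waiting = False
    for cycle, (req, gnt) in enumerate(trace):
        if waiting and not req and not gnt:
            return cycle                 # the testbench violated the assumption
        waiting = bool(req and not gnt)  # a grant is still pending
    return None

# A testbench that drops req before the grant violates the assumption:
assert assume_req_until_gnt([(1, 0), (0, 0), (0, 1)]) == 1
# A well-behaved testbench keeps req high until gnt:
assert assume_req_until_gnt([(1, 0), (1, 0), (1, 1)]) is None
```

In a real flow the same assumption would be written once in a property language and compiled both as a constraint for the formal tool and as a checker for the simulator; the monitor above only illustrates the firing semantics.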
3.2. ABV: the need for a standard
The existence of a standard language for expressing properties is obviously a prerequisite for tool interoperability: it would be great to write a set (library) of properties once, and run them with whatever tool (either static or dynamic) from whatever vendor. This was exactly the mission of one of Accellera's committees, which eventually produced PSL, meant to become a standard.
Unfortunately, this standard is quite young, hence suffering from four main issues that could prevent product divisions from investing heavily in it:
Stability — The standard is still evolving (possibly merging with a similar standard, namely SystemVerilog Assertions, partially derived from OpenVera Assertions).
Subsets — Today we are not aware of two different tools supporting the same PSL subset! Far from being a detail, this aspect has a major impact on the reusability of properties across different tools.
Flavors — PSL comes in several flavors, one for each of the hardware description languages SystemVerilog, Verilog, VHDL, plus a generic one; the syntax of each flavor conforms to the syntax of the corresponding HDL in a number of specific areas. On one hand, this allows the user to adopt the syntax he/she is most familiar with; on the other hand, as mixing different flavors is not allowed, PSL code reuse is further limited.
Recognition — Even worse, some EDA vendors still refuse to recognize PSL as a standard, forcing the user to adopt proprietary (or less supported) languages.
From the user’s perspective, all these factors heavily affect the ability of ABV to gain popularity in the designers’ community.
3.3. The assume/guarantee framework
Formal tools can use assertions as constraints for driving the inputs of the block under verification. Since those assertions are coded in the same language adopted for properties, this makes it possible to reuse both constraints and properties in the verification of surrounding blocks [6, 7, 2].
Let’s clarify this concept by means of an example: suppose the system is made of two blocks, A and B (figure 4); while verifying block A we want to verify a set of properties P using a set of constraints C; after this activity has concluded, we should verify that all constraints used for verifying block A actually hold on block B: to do that, we simply invert the roles of P and C, i.e. we verify that properties C hold on block B, assuming constraints P.
This way of performing verification is known as the assume/guarantee framework and has some major advantages, giving the possibility to decompose a complex problem into several smaller ones.
Figure 4. Assume/guarantee example.
From the block-reuse perspective, assume/guarantee reasoning is particularly important: indeed it allows the system integrator to safely reuse a block, provided two conditions are met:
- the block to be reused has been formally verified under a set of assumptions;
- all the assumptions hold on surrounding blocks of the targeted system.
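The role inversion can be sketched in a few lines of Python (a toy model, not the paper's design: the two "blocks" are reduced to trivial functions, and the constraint C is the invented range condition 0..3):

```python
# Toy assume/guarantee sketch: block A is verified assuming a constraint C
# on its input; C is then re-checked as a property on block B, which feeds
# A in the composed system. Both blocks are purely illustrative.

def block_b(x):
    return x % 4            # B's output happens to stay in 0..3

def block_a(y):
    assert 0 <= y <= 3      # constraint C, assumed while verifying A alone
    return 2 * y            # A's guaranteed behavior under C

def c_holds_on_b(inputs):
    """Check constraint C as a *property* on block B's outputs."""
    return all(0 <= block_b(x) <= 3 for x in inputs)

# C holds on B, so composing B -> A cannot violate A's assumption:
assert c_holds_on_b(range(100))
assert block_a(block_b(7)) == 6
```

The decomposition pays off because each block is model-checked against a small local obligation, instead of checking the whole composed system at once.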
3.4. From block to system verification
The only limitation of the previous methodology is the limited capacity of formal tools, in terms of both the complexity and the size of the design, which leads to impractical or unpredictable execution times. An approximation to the verification problem is provided by dynamic verification, at least for those properties that are beyond the tractable size for formal technology.
Practically, what can be done is to instantiate all unproven properties directly in the system, together with all the assumptions under which formal methods have been applied, in order to raise the confidence that, as expected, the system does not violate them, hence driving the reused block in a proper way. In this case, as simulation is not exhaustive, some coverage metrics must be used in order to estimate how good the verification result is.
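A sketch of this idea in Python (executable pseudocode; the `start`/`done` handshake and the 3-cycle deadline are invented for illustration) pairs the unproven property with a coverage counter that records how often its triggering condition was actually exercised:

```python
# An unproven property instantiated at system level, together with a
# coverage counter: simulation cannot prove the property, so we at least
# measure how often its antecedent fired. Property (illustrative): every
# `start` must be followed by `done` within 3 cycles.

def monitor_with_coverage(trace):
    """`trace`: list of (start, done) samples, one per clock cycle.
    Return (violations, coverage): coverage counts antecedent firings."""
    violations = coverage = 0
    pending = None                     # cycle of an outstanding start
    for cycle, (start, done) in enumerate(trace):
        if pending is not None and done:
            pending = None             # property satisfied in time
        elif pending is not None and cycle - pending > 2:
            violations += 1            # deadline missed
            pending = None
        if start and pending is None:
            coverage += 1              # antecedent exercised
            pending = cycle
    return violations, coverage

# One satisfied and one violated occurrence; the antecedent fired twice:
assert monitor_with_coverage(
    [(1, 0), (0, 0), (0, 1), (1, 0), (0, 0), (0, 0), (0, 0)]) == (1, 2)
```

A coverage of zero would tell the verification engineer that the simulation never stressed the property at all, a situation that a plain pass/fail report would silently hide.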
4. A practical example
In our experience, when only top-level verification is performed, there is some possibility of bugs going to silicon, or of bugs being discovered at late stages of the design/manufacturing process. This may happen for two main reasons:
- the bug is only exposed by specific input sequences that can never happen in the actual system (i.e. the bug is an error with respect to the specification, but is filtered out by surrounding blocks in the system);
- the bug is a deep corner case, hence escapes all tests.
In the former case, the bug is not an issue for the current design, but will prevent block reuse; in the latter case, a real bug is missed and will eventually end up in silicon. In both cases, block-level verification would have prevented the problem.
We recently addressed a problem of the second kind, when a product division asked for our help as one of their SoCs was experiencing a hang condition they were not able to reproduce in their top-level simulation environment. They suspected a particular block (a bus switch) to be responsible for the hang, but they were not able to say what could be wrong. So they gave us the source code of the bus switch for further verification.
Figure 5 shows the simple property we implemented to verify a basic functionality of the block, i.e. register read/write (hence including address decoding and request arbitration). The property is very simple: it just defines a couple of signals decoding the write and read commands to a specific register; then it requires that after writing a generic datum d into the register, it is not possible to read a datum other than d before a new write has been issued.
The property turned out to be false, as the block was decoding the addresses and the read/write enables regardless of the request signal. This behavior had never been highlighted by top-level simulation before since, due to surrounding blocks, the probability of having the request signal low, the write enable high, and a valid value on the address bus was very low. Nonetheless, knowing this bug in the single block helped the architects to replicate the problem in the high-level environment, showing definitively that this bug was responsible for the system hang.
Figure 5. Property verifying consistency of a register datum.
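The failure scenario can be reproduced in a toy Python model (a sketch, not the actual bus switch: the register address, signal names, and data values are invented) where the buggy decoder ignores the request signal:

```python
# Toy model of the register consistency property of figure 5, with the bug
# described in the text: the (hypothetical) switch decodes address and
# write-enable regardless of the request signal. All names are illustrative.

REG_ADDR = 0x10  # invented register address

def make_switch(buggy):
    regs = {REG_ADDR: 0}
    def cycle(req, wr, addr, data):
        # Bug: when `buggy`, the decoding ignores `req` entirely.
        selected = (addr == REG_ADDR) if buggy else (req and addr == REG_ADDR)
        if selected and wr:
            regs[addr] = data
        return regs[REG_ADDR]
    return cycle

def property_holds(switch):
    """After writing d, a read must return d until the next legal write."""
    switch(req=1, wr=1, addr=REG_ADDR, data=0xAB)  # legal write of 0xAB
    switch(req=0, wr=1, addr=REG_ADDR, data=0xCD)  # no request: must be ignored
    return switch(req=1, wr=0, addr=REG_ADDR, data=0) == 0xAB

assert not property_holds(make_switch(buggy=True))   # 0xCD leaked in
assert property_holds(make_switch(buggy=False))
```

As in the real case, the violating stimulus (write enable high while the request is low) is exactly the corner that surrounding blocks almost never produce, which is why a formal tool finds it immediately while top-level simulation does not.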
What must be noted is that writing such a simple property and running formal methods allowed us to discover the bug in a few minutes; had this been done at an earlier stage, the bug would never have reached silicon, and confidence in the correctness of the block would have been greater.
5. Tools availability
The ABV methodology is general enough to be applied with different languages and tools, depending on the maturity of the design stage and the effectiveness of available tools. We may distinguish two different kinds of solutions:
1. point-tool based;
2. integrated-tool based.
In the first case, one may just pick the best tools for the different contexts, e.g. block validation or system validation, and for the different techniques, e.g. dynamic or static. By providing an ABV frame for each IP, as described in section 3, it is conceptually possible to follow a path including:
- RTPG (simulation) based block validation, using Verisity’s Specman, Synopsys’ Vera, etc.;
- model checking based block validation, using property checking tools like IBM’s RuleBase, Prover Technology, TransEDA’s ImproveHDL, Averant’s Solidify, etc.;
- ABV on the system integration by simulation, or by acceleration, after synthesizing assertions in hardware.
Potentially an assertion library should contain different views, each representing the same property in the different languages accepted by the best-in-class point tools. For this reason, synthesizability of assertions is a key feature.
As of today, many different languages still exist in this domain (PSL, OVA, e, SVA, PEC, etc.), and for PSL different subsets are accepted by different tools. Indeed, the standardization effort is far from complete.
Concerning integrated-tool based solutions, during the last year several tools have added ABV capabilities to their feature sets; the main offerings are reported in table 1.
Table 1. Main ABV tools availability.
Synopsys offers, to our knowledge, the most integrated solution: both the simulator and the formal tool share the same languages, so the same properties may be used in both environments, hence allowing the application of most of the principles exposed before. Nonetheless, two main issues are still open: PSL support, and mixed-language (Verilog/VHDL) support. In our opinion, the missing PSL support could limit the deployment of Synopsys’ solution, as it prevents reusability of properties across different tools.
Mentor has recently enlarged its offering by adding ABV capabilities to its simulator and, at the same time, by acquiring the 0-In company and its formal analysis technology. Although it appears to be a promising solution, it still has to gain maturity as, for example, VHDL is not fully supported yet.
Cadence is in a particular position, as the current offering only includes the simulator, with ABV capabilities. However, the support of PSL could ensure good interoperability with other tools, provided the PSL subsets match.
IBM’s RuleBase could represent a complement to Cadence’s simulator, as both tools support PSL. Unfortunately, as the subsets accepted by the two tools do not match, this is not completely practical.
Finally, Specman must be mentioned, as it contains a temporal layer that, in principle, could be used for ABV purposes. In practice, as far as we can see, this feature is not much used; moreover, properties written in temporal e are not reusable with any other tool, hence limiting the appeal of this approach.
6. Conclusions
IP reusability has been a key issue in SoC design for many years. We have faced many situations where IPs had been developed for a specific application but with an emphasis on reusability. Formal verification proves to be a key approach to guaranteeing the correctness of an IP versus its specification.
References
[1] M. Abadi and L. Lamport. Composing specifications. ACM Transactions on Programming Languages and Systems, 15(1):73–132, January 1993.
[2] Accellera. Property Specification Language Reference Manual v1.1, June 2003.
[3] Accellera. SystemVerilog 3.1, Accellera’s Extensions to Verilog, 2003.
[4] S. Katz, O. Grumberg, and D. Geist. Have I written enough properties? - a method of comparison between specification and implementation. In Proceedings of CHARME 1999, pages 280–297, 1999.
[5] K. L. McMillan. A methodology for hardware verification using compositional model checking. Science of Computer Programming, 37(1–3):279–309, May 2000.
[6] A. Pnueli. In transition from global to modular temporal reasoning about programs. In K. R. Apt, editor, Logics and Models of Concurrent Systems, volume 13 of NATO ASI, pages 123–144. Springer-Verlag, October 1984.
[7] Synopsys Inc. OpenVera Reference Manual 1.3, March 2003.