Amit Tanwar, Mentor Graphics
The technology industry presents numerous time-to-market and design quality challenges. Achieving one at the cost of the other is all too common; however, this tradeoff isn't inevitable. A focus on "right verification" can boost both productivity and quality. The successful design of any application, from an individual intellectual property (IP) block to a System-on-Chip (SoC), depends upon the right verification. With each move from the chip level to the board level and finally to the system level, the cost of detecting a defective chip increases by 10X. It is not always easy to judge the correct verification methodology, and choosing the wrong one may cause trouble later. Verification IP (VIP) provides the means to get this right.
VIP is nothing more than a model that provides a means of user interaction at different levels of abstraction of the underlying design. Choosing the correct VIP, which involves judging its recipe and verification methodology, is as tough as choosing a design IP. The right methodology improves the ability to abstract underlying complexity while increasing reusability, ease of debugging and bug fixing, and maintainability. Understanding how to build a lasting VIP is a challenge given changing technologies and market conditions. This paper will emphasize how a recipe that combines methodologies produces a superior VIP, one that reduces effort and debugging time. While avoiding reference to specific EDA tools where possible, the article also describes the creation of a VIP flow at a conceptual level.
Where is the Need?
Sometimes design teams need to invest in in-house verification rather than relying on a third-party VIP. Despite the potential benefits of targeting a specific application, a build-your-own approach to verification can still sap productivity and quality if it is written without understanding the correct verification path. The alternative, using third-party technology, still requires understanding of verification architecture.
Verification IP and Methodology
The first step is understanding what comprises a VIP recipe. Included in the recipe:
- A verification language that is widely accepted and proven;
- An architecture comprising the different components (each component can be viewed as a single entity performing an independent task; a typical VIP includes a bus function model [BFM], a monitor, coverage, and a scoreboard as major components);
- Different value-add tests that show usage and connection;
- Automation, a test plan, and other documentation.
If properly applied, the above elements can combine into a methodology that is well structured and reusable. If not, the methodology can lead to sluggish, hard-to-debug verification.
The Conventional VIP Design Flow
Increasing design complexity creates several methodology challenges. These can be mitigated by an understanding of VIP flow.
A VIP flow starts with a comprehensive project plan, which can be created only after complete understanding and documentation of all specifications. Ample time for team discussions is key, given all the interdependent decisions about everything from resource allocation to delivery dates. Next comes the design phase, which involves creation of a document specifying VIP functionality and features. Mapping the VIP in a basic flow chart can help to organize the document, which should include a section describing VIP features at the user level. Implementation follows the planning and documentation phases. Self-review and optimization of code is critical to improving code quality, while group review helps to ensure a common look and feel. Next come validation, testing, and final product preparation. After all components are integrated, the VIP must be validated against a third-party DUT or other in-house IP. This validation helps build confidence in functional correctness against external IP. This phase also includes verification of all features in all modes against external IP, if possible, or via out-of-the-box testing. The final product phase includes creation of various test scenarios regarding connection to the DUT, along with a user manual, scripts, and regression analysis.
It's important to stress that this is a generic VIP flow, which can include additions and/or deletions based on tools, languages, and resource requirements. What shouldn't change is a commitment to following a methodology, discussed in the next section, through all these phases.
Figure 1: VIP Design flow
The Role of Methodology
Methodology is essentially a cookbook. Follow the recipes to build a comprehensive VIP that embraces reusability. Among the recipes:
Bus Function Model
A bus function model is a soft model of the protocol, written in an HDL, that is used to exercise the design IP. As an example, the BFM of a symmetric protocol like Ethernet will have one functional block of transmit logic and one of receive logic. The transmit logic converts an abstracted transaction to the pin level, while the receive logic builds the transaction from the wire. For an asymmetric protocol like AXI, there will be separate transmit and receive logic for the master and slave devices. Among the considerations in building an efficient BFM:
1. Reuse
One way to embrace reuse is to combine components of similar functionality (such as the monitor and receiver) up to an appropriate level of abstraction. Consider the case where a transaction needs to be printed by the monitor, counted in coverage, checked for protocol violations, and responded to by a receiver. In all cases the transaction is essentially the same, though each component uses it in its own way. So rather than each component sitting at the wire level and forming the transaction, a single receiver device can be created to pass the transaction to the different components sitting at the transaction level. Here one can use a standard methodology (to be discussed in the Library section) to provide TLM communication among components. As another example of reuse, consider a case where a methodology (or tool) allows one to write a single piece of code that can be declared as an array of two and thus work for both the transmitter and receiver.
A simple Verilog task can help illustrate the point. First, consider:
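A minimal sketch of the kind of task the text refers to; the task and signal names (drive_address, addr_out) are illustrative, not from a specific VIP:

```systemverilog
// Illustrative sketch: a simple task that drives an address onto a pin.
module bfm (output logic [31:0] addr_out);
  task drive_address(input logic [31:0] value);
    addr_out = value;  // convert the abstracted value to pin-level activity
  endtask
endmodule
```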
Now, consider a methodology that allows the task to be declared as an array of two:
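SystemVerilog has no direct arrays of tasks, but an array of interface instances achieves the same effect; the following hedged sketch (all names illustrative) shows one way the "array of two" idea might look:

```systemverilog
// Illustrative sketch: two instances of the same driver logic,
// index 0 acting as the transmitter and index 1 as the receiver.
interface addr_if;
  logic [31:0] addr_out;
  task drive_address(input logic [31:0] value);
    addr_out = value;
  endtask
endinterface

module tb;
  addr_if bus[2]();  // array of two: bus[0] = TX side, bus[1] = RX side
  initial begin
    bus[0].drive_address(32'h10);  // transmit path
    bus[1].drive_address(32'h20);  // receive path
  end
endmodule
```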
The above code, which may require modification, can be utilized as a combination of two codes. Task drive_address[0] will drive a value on addr_out[0], and drive_address[1] will drive a value on addr_out[1]. Here addr_out[0] may represent the address pin for the transmitter and addr_out[1] for the receiver. A BFM configured for both ends as TLM will drive both transmit and receive values for testing purposes. Other usage scenarios are briefly discussed in the Reverse Code section.
2. Abstraction Level
Abstraction levels give the user well-structured flexibility to tweak the stimulus in various ways. Consider the case of the Open Core Protocol (OCP), where abstraction can be based on the protocol hierarchy.
Figure 2: OCP hierarchy
Burst transfer is at the highest abstraction level. It finishes when the request, data and response phase completes. Going to the next lower abstraction level affords the user more control, such as the ability to initiate multiple requests without waiting for responses or having interleaving phases. At the cycle accurate level, there is fine grained control over signals with respect to timing and an ability to create error injection scenarios. Further down in the hierarchy, where it’s necessary to write complete code for initiating a transaction, the complexity of the test scenario increases.
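The hierarchy above can be sketched as layered tasks, where each level calls the one below it; the task and signal names here are assumptions, not an actual OCP kit API:

```systemverilog
// Illustrative layering of abstraction levels for an OCP-like BFM.
class ocp_bfm;
  // Highest level: one call completes the request, data and response phases.
  task burst_write(bit [31:0] addr, bit [31:0] data[]);
    request(addr, data.size());
    foreach (data[i]) data_phase(data[i]);
    response();
  endtask

  // Next level down: individual phases, which a test can interleave
  // or issue without waiting for responses.
  virtual task request(bit [31:0] addr, int len);  /* drive command/address */ endtask
  virtual task data_phase(bit [31:0] d);           /* drive data            */ endtask
  virtual task response();                         /* sample the response   */ endtask
endclass
```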
3. Reverse Code
This approach is similar to the one discussed in the Reuse section. Consider code written to convert a transaction to the wire. Now, what if the same code works in reverse? That is, the same code converts wire-level activity into a transaction, obviating the need for separate receiver code. Some tools are available to support this methodology and more fully enable reuse.
Among the many challenges of code debugging is locating the bug in the first place. A robust GUI can make this task easier and quicker, particularly if everything from the transaction level to the wire level is available. Even better is linking every top-level transaction to its children down to the wire level. For protocols where concurrency leads to out-of-order and multiple responses, this sort of well-articulated GUI is quite helpful.
Serial protocols like PCI-Express, where ten bits form a symbol and several symbols form a layer packet, also benefit from GUIs, given that it is very difficult to debug at the wire level. Viewing packet formation in the GUI for all fields can greatly reduce debugging time. (Note that this is as much a tool feature as a methodology; no matter what it's called, given the benefits, it's still useful to incorporate GUI-based debugging into a VIP flow while creating a BFM.)
Figure 3: Transactions in GUI
The Library
The precise VIP structure can vary with each design; however, some components or bits of logic can be reused. These form what we call a common or base library, which can be imported nearly every time a VIP is developed.
1. Component Reuse
Usually all library components are available and packaged in common languages like SystemVerilog and VHDL. Reusing components not only saves time, but also helps to keep the code structured and reduces the chances of introducing bugs. Some standard libraries should be openly available for component reuse. Among these are libraries for different ways of printing transactions, commonly used encoding-decoding algorithms, and configuration-setting mechanisms. Other libraries can be created on top of a standard open library to add more common-denominator features.
2. Transaction Level Modeling (TLM)
HDL languages provide signal-level communication. Some object-oriented languages allow for encapsulation, which helps in forming a transaction. Transaction-level communication is very much a requirement in any VIP flow. Using a fairly strict verification methodology along with a component library can provide a base for building the transaction-level path among components. A transaction traverses from the stimulus device to the protocol driver through TLM ports. The monitor similarly broadcasts the transaction to analysis components, like a coverage map and scoreboard. Communication at the TLM level is relatively easy to debug, as it incorporates many concepts from object-oriented coding. This methodology also fits into a constrained-random and coverage-driven environment.
Consider a SystemVerilog example in which a class is defined as a transaction (members of the class are transaction items constrained by the protocol). This class is first randomized and then passed to a protocol driver through a TLM path, where it is further processed for transfer to the wire level. The net result: a library employing the TLM concept adds value to the product from different angles.
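A hedged sketch of that pattern, using a SystemVerilog mailbox as the TLM-style channel (all class, field, and channel names are illustrative, not a specific library's API):

```systemverilog
// Sketch: a constrained transaction class passed from stimulus to driver
// over a transaction-level channel instead of wire-level signals.
class bus_txn;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  constraint word_aligned { addr[1:0] == 2'b00; }  // example protocol constraint
endclass

module tb;
  mailbox #(bus_txn) stim2drv = new();

  // Stimulus side: randomize a transaction and hand it to the driver.
  initial begin
    bus_txn t = new();
    void'(t.randomize());
    stim2drv.put(t);
  end

  // Driver side: receive the transaction and convert it to pin wiggles.
  initial begin
    bus_txn t;
    stim2drv.get(t);
    // ... drive t.addr / t.data onto the wires ...
  end
endmodule
```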
Figure 4: TLM Communication
Several methodologies can be used in constructing a test bench, which consists of instantiated VIP components and a target DUT. These methodologies help in refining different random, directed and erroneous test scenarios to catch DUT issues. Here are several areas where different methodologies can be useful.
1. Stimulus
Traditional stimulus can be directed or random. Project requirements, and occasionally the whims of the verification engineer, dictate which one is run first. Employing pure random stimulus is time consuming due to repetition of test vectors. A pure random approach may also produce illegal stimulus, which can be controlled by protocol-based constraints. Excessive use of such constraints may be problematic, however, as it may result in an inordinately small solution set and thus poor coverage. Random stimulus can fail to catch corner cases and other scenarios, which is why it's usually necessary to also write directed test cases. SystemVerilog allows for a constrained-random methodology. Writing a separate block of code can help avoid repetition of test vectors and, because the code can be reused in every protocol, can lead to a fast stimulus generator. Another way of minimizing repetition is to feed coverage results back into the stimulus. Constrained random can also be used to create directed test cases.
Examples that can be created by putting the right constraints in a transaction class include cycles on incremental addresses, write followed by read, and random error insertion. However, certain corner cases require directed coding. For example, in PCI-Express the LTSSM state machine can be checked only with directed stimuli, though it is possible to incorporate some random methodology to boost efficiency. To do so, create small sequences of directed tests, which can then be randomly picked and executed. Some packet fields can also be randomized. In short, a constrained-random methodology can address most issues of stimulus generation.
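A sketch of steering constrained random toward two of the directed scenarios mentioned above (incremental addresses and write-followed-by-read); the class and field names are assumptions:

```systemverilog
// Illustrative constraints that turn random stimulus into directed patterns.
class seq_item;
  rand bit        is_write;
  rand bit [31:0] addr;
  static bit [31:0] prev_addr;
  static bit        prev_was_write;

  constraint incr_addr  { addr == prev_addr + 4; }        // incremental address cycles
  constraint wr_then_rd { prev_was_write -> !is_write; }  // a write is followed by a read

  // Remember this item's fields so the next randomization can build on them.
  function void post_randomize();
    prev_addr      = addr;
    prev_was_write = is_write;
  endfunction
endclass
```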
2. Target (DUT)
Formal verification, which usually incorporates assertion-based methodology, is another way of verifying a DUT. Assertions represent the behavior of protocol in a well defined way and help address three purposes:
- Act as protocol checker
When connected as a checker, an assertion checks for protocol violations on the bus and locates the bug area in a very descriptive way.
- Act as random stimulus generator
When connected in constrained mode, all assertions targeting the DUT become constraints used to generate random stimuli (just like the constrained-random methodology).
- Act as coverage
Different checker properties can address issues of coverage as well. An example is PCI-Express, where a property can be written for an SSPL message on the bus; this property can be used as coverage for the SSPL message. And to check whether any field of the SSPL is corrupted, another property can be written which uses the first property as a starting event.
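The three roles above can come from a single property, as in this hedged SystemVerilog Assertions sketch (the clk/rst_n/req/gnt signals and the timing window are assumptions):

```systemverilog
// One property, three uses: checker, coverage point, and formal constraint.
property req_gets_gnt;
  @(posedge clk) disable iff (!rst_n)
    req |-> ##[1:4] gnt;  // a request must be granted within 1 to 4 cycles
endproperty

assert property (req_gets_gnt)            // checker: flag protocol violations
  else $error("gnt missing after req");
cover  property (req_gets_gnt);           // coverage: count completed handshakes
// assume property (req_gets_gnt);        // constraint mode under a formal tool
```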
3. Coverage
Coverage plays an important role in verification closure, an issue addressed by SystemVerilog's coverage-driven methodology. Reading specifications can help in understanding test scenarios and coverage items, which measure the percentage of completed verification. A further step is to link the test plan with the coverage items, which is easy enough when the plan document is in Excel or Word. Coverage results should be accessible both in a log file and via a GUI.
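A sketch of a coverage item tied to monitored transactions; the class, signal, and bin names are illustrative:

```systemverilog
// Illustrative covergroup sampled by a monitor on each observed transaction.
class txn_monitor;
  bit        is_write;
  bit [31:0] addr;

  covergroup txn_cov;
    cp_kind     : coverpoint is_write { bins rd = {0}; bins wr = {1}; }
    cp_addr     : coverpoint addr[7:0];
    kind_x_addr : cross cp_kind, cp_addr;  // items that can be linked to the test plan
  endgroup

  function new(); txn_cov = new(); endfunction

  function void sample(bit w, bit [31:0] a);
    is_write = w; addr = a;
    txn_cov.sample();
  endfunction
endclass
```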
Figure 5: Coverage in GUI
Assembling a sound VIP involves disciplined consideration of the various parts of the underlying design, each of which can require a unique verification methodology. It's highly nuanced, labor-intensive work. However, the payoff is a VIP that not only helps to reduce errors early in the design phase, but that also may be reused, at least in part, in subsequent projects.
About the Author
Amit Tanwar is a Lead Member Technical Staff at Mentor Graphics, specializing in the development of Questa MVC and Questa Verification Library (QVL). He received his B.Tech from IP University Delhi.