Attacking the Verification Challenge: Applying Next Generation Verification IP to PCI Express-based Design (by N. Mullinger, J. Hopkins & R. Hill from Synopsys)
Introduction
Today’s IC and system-on-chip (SoC) design trends have placed an immense burden on the verification engineer’s shoulders. Processor complexity, custom logic content, software content, and system performance are all increasing, while schedules are being squeezed and resources stretched. The now-famous Collett study shows that 70% of the effort for complex ICs is spent on verification. Not surprisingly, project teams are looking for more effective methods to verify their designs.
Verification engineers are consequently looking towards new methodologies to reduce testbench development time and shorten the time it takes to fully verify their ASICs or SoCs. Directed tests and 'golden' reference files will soon be regarded as the more primitive tools of the modern test environment.
Constrained random test (CRT) methodologies allow engineers to rapidly test their designs across a range of parameters and assist in creating testbenches that are adaptive and reactive. Instead of specifying each event to exercise the design, the engineer specifies ranges, within which the testbench then exercises the target device. Feedback from monitors and models identifies test suite hits and allows the testbench to adapt and check new areas. This new functionality in the models replaces much of the effort associated with manually creating vectors to accurately reflect system behavior. Constrained, randomly generated vectors are much more likely to hit unexpected corner cases in the design.
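To make the contrast concrete, the short OpenVera-style sketch below specifies ranges and weights for a generic bus transaction rather than enumerating individual events. The class and field names here are hypothetical, not tied to any particular library.

  // Hypothetical transaction class: the engineer specifies legal ranges
  // and statistical weights instead of hand-writing each stimulus event.
  class BusTransaction {
    rand bit [31:0] addr;
    rand integer    burst_len;

    // Keep addresses inside the device's decoded window.
    constraint legal_addr { addr in { 32'h0000_0000 : 32'h0000_ffff }; }
    // Bias burst lengths: mostly single words, some 4- and 8-word bursts.
    constraint burst_mix  { burst_len dist { 1 := 60, 4 := 20, 8 := 20 }; }
  }

Each call to the object's randomize() method then yields a new transaction satisfying the constraints, and the testbench can tighten or loosen the ranges as coverage feedback comes in.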
Designers are also migrating towards industry-standard buses to improve reuse, interoperability, and to broaden market opportunity. This creates an additional burden on verification engineers who must test whether designs conform to these standard bus protocols and are still interoperable with other modules and designs that use the same standard.
Smart verification models not only save an enormous amount of testbench development effort, but also begin to move the verification engineer towards higher-level testbench functionality with constrained random test verification methodologies.
Smart verification models, coupled with hardware verification languages (HVLs), let engineers take the next step beyond traditional HDL-based verification into self-checking, automated testbenches. Object-oriented HVLs give verification engineers features and capabilities beyond those of traditional HDLs, including functional coverage analysis, random stimulus generation, property verification, and data and temporal checkers. Leveraging the pre-built testbench functionality of an HVL, in conjunction with smart verification models that provide the IP-specific behavior and functionality, arms the verification engineer with the tools needed to quickly generate testbenches and thoroughly test the design.
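The payoff of combining the two is a compact self-checking loop, as in the OpenVera-flavored sketch below. This is illustrative only: drive(), check(), and expected_response() stand in for user tasks and a reference model, not any specific library API.

  // Hypothetical self-checking loop built from HVL primitives:
  // generate constrained random stimulus, drive it, check the response.
  BusTransaction t;
  t = new();
  repeat (1000) {
    void = t.randomize();               // new random transaction each pass
    drive(t);                           // user task: apply it to the design
    check(t, expected_response(t));     // user task: compare DUT vs. reference
  }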
Role of Smart Verification Models
The first commercial verification components became widely available in the mid-1980s, driven by a need for full-functional models for board-level verification. They included full-functional microprocessor models that executed microcode but suffered from serious performance limitations. Over time, it became apparent that higher levels of abstraction were needed to deliver the required performance. Consequently, bus-functional models were developed that were accurate at the boundary of the component and were driven by commands from the user rather than through the execution of software instructions. These models ran an order of magnitude faster than full-functional models and have been used for many years to exercise system behavior.
This same method was later applied to the creation of bus models that stimulate and respond to transactions on the bus. They allow ASIC verification engineers to mimic bus behavior and verify that a design interface will communicate with the rest of the system, while keeping simulation performance high. Monitors check for protocol violations and provide feedback on coverage. PCI, PCI-X, USB, and Ethernet are some of the most widely used of these models.
Today’s modules, components, and systems can process massive volumes of data. This creates the need for faster, more effective data communication and is driving a new set of more complex, faster buses, like PCI Express™, CoreConnect™, RapidIO™, HyperTransport™, 10G Ethernet, and so on. The complexity of these protocols has increased to include a huge number of conditions and states, rendering directed testing alone insufficient. Creating a test suite for these protocols is a major effort that detracts from the primary need to verify custom logic and system behavior.
To solve this problem, bus protocol models are now evolving to even higher levels of abstraction that include the constrained random test methodologies typically found in high-level verification languages. These enhanced verification models use their protocol knowledge to drive transactions onto the bus within the user-defined, application-specific constraints of the design. They allow engineers to build adaptive, reactive testbenches that avoid the drudgery of directed tests and hard-to-maintain 'golden' reference files. Creating a test suite becomes significantly simpler for verification engineers, who no longer need to spend time learning the details of the protocol and weeks or months writing a directed test suite. The advantage of using these enhanced models is immense, resulting from both simplified usage and improved coverage.
Methodology
In the case of a bus standard such as PCI Express™, which has an upstream and downstream topology, the engineer can easily create a virtual system consisting of a root complex device, multiple switches, and endpoints that mimic the behavior of the system in which the device will reside. Rather than writing many hundreds of specific commands to drive specific transactions onto the lanes at specific times, a series of constraints can be developed instead. These constraints configure the Requester endpoints to behave like the Requester devices that will eventually be used in the system. The Requester then generates transactions using data from many sources, essentially acting as a data pump that initiates activity on the lanes within the relevant confines of the system. 'Weights' (probabilities) can be applied to bias the transactions towards the behavior of the actual system. Completer endpoint devices can similarly be configured to respond in a constrained, random, and weighted fashion to the requests initiated by the Requester, as sketched below. System behavior can then be explored by changing the constraints and/or the weights of the responses, resulting in either a realistically driven system or a worst-case driven system.
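As a sketch of what a weighted Completer configuration might look like (the class name, constraint keywords, and values here are purely illustrative, in the same spirit as the Requester configuration shown later in Figure 2):

  // Hypothetical Completer setup: mostly clean completions, with occasional
  // retries and NAKs mixed in to push the system toward its worst case.
  CompleterResponse resp;        // hypothetical response generator class
  resp = new(Completer1, "RESPONSE ACK=85%, RETRY=10%, NAK=5%;
                          COMPLETION_DELAY = 1 to 16;");  // illustrative keywords

Tightening the RETRY and NAK weights upward would steer the same testbench toward worst-case behavior without rewriting any tests.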
Monitors watch for protocol violations, log transactions, and provide coverage statistics. An application programming interface (API) gives the testbench dynamic access to check specific coverage points. The new generation of monitors can be programmed to look for specific sequences of transactions, which are then added to the coverage list, and users can make directed choices within a sequence. For example, the monitor may be programmed to detect a Memory Read, followed by a NAK, followed by a Retry of the Memory Read, followed by Completion of the Read; the monitor logs this as a coverage "hit".
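Conceptually, such a sequence detector is a small state machine walked over the observed transaction stream. The sketch below uses hypothetical names (monitor_get_next(), the kind codes) purely to show the idea; the actual monitor packages this behind its programmable coverage interface.

  // Hypothetical sequence matcher for:
  // Memory Read -> NAK -> Retry of Memory Read -> Completion of Read.
  Transaction t;
  integer state = 0;
  integer hits  = 0;
  forever {
    t = monitor_get_next();                          // observe next bus transaction
    if      (state == 0 && t.kind == MEM_READ)   state = 1;
    else if (state == 1 && t.kind == NAK)        state = 2;
    else if (state == 2 && t.kind == MEM_READ)   state = 3;  // the retry
    else if (state == 3 && t.kind == COMPLETION) { hits++; state = 0; }  // coverage hit
    else                                         state = 0;  // sequence broken
  }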
All models and monitors can issue asynchronous notifications, providing the continual feedback that enables reactive behavior in the testbench.
This functionality is available today for users of high-level verification languages such as OpenVera™. The new smart Verification Suites developed by Synopsys are designed to bring constrained random functionality to users of any verification environment, including Verilog and VHDL.
A Design Example with PCI Express
To illustrate this, take a real-world example of why verification requirements are changing with the complexity of today's systems. The design is based on PCI Express, which is becoming increasingly popular as an SoC interface. It consists of a Root Complex device connecting the graphics, memory, and CPU. The Root Complex is connected to switches that arbitrate the Endpoints. An Endpoint can be a PCI Express endpoint or a bridge to another protocol such as USB, PCI, or Ethernet. This topology is common among today's new system designs, but it adds considerable complexity to the verification task because the complex interactions and data flow between the devices must be modeled and verified, as well as the basic transactions.
Figure 1. Example PCI Express system design.
The example design, shown in Figure 1, consists of a CPU, a Root Complex with a PCI Express Endpoint and Memory attached to it, and two Switches that arbitrate a USB Host, a PCI Bridge, and a PCI Express Endpoint. Focusing on the PCI Express Endpoint devices: each can be a Requester, a Completer, or a combination of both. The Requester's purpose is to initiate PCI Express transfers. It can generate single-word Read and Write requests to the memory, I/O, and configuration address spaces, block-transfer Reads and Writes for the memory space, and Messages. The Completer's purpose is to respond to PCI Express requests: it provides internal memory, I/O, and configuration spaces for storing write data and supplying read data when transaction requests are received.
Standard HDL testbench techniques cannot practically provide sufficient coverage of such a design in an acceptable time frame. CRT capabilities allow the verification engineer to quickly and efficiently create an extensive test environment that thoroughly exercises the Switches and Root Complex. CRT augments the traditional techniques used to validate basic functionality, i.e., checking the address mapping, walking ones and zeroes, and reading and writing patterns to memory. The goal of these new advanced features is to shift verification cycles from testbench creation to simulation time. Given the potential number of transactions, directed testing alone is not a viable option for completely verifying this kind of design.
The models need to be configured to automatically respond to the control and data applied by the rest of the design, or configured to generate packets for the design. Focusing on the Requester devices, the steps required to configure the Requester devices are:
- Define a transaction generator
- Specify the type of packet, random weighting, number of words for a burst, address range, and payload data
- Assign the transaction generator to a specific Requester device in the simulation
- Create a payload for the buffer
In a few steps, a Requester device can be programmed to generate packets for the simulation. Figure 2 shows sample code that a verification engineer would use to configure and program a Requester device for random packet generation in the testbench. With a few configuration parameters, the Requester will generate constrained random PCI Express packets.
When simulation starts, the Requester will be configured to generate packets according to the given constraints. It will continue to generate packets this way until one of three things happens: the simulation is terminated; the payload is exhausted; or the user loads a new configuration into the Requester device. This aspect highlights the possibilities for self-checking, intelligent testbenches.
Requester Requester1;                 // Requester endpoint model
RequesterPacket xact;                 // transaction generator
VmtRandomPayload payload;             // random payload source
integer wp_handle;                    // watchpoint handle
integer i = 0;                        // buffer word index

Requester1 = new("Requester1", RequesterBind);

// Do an equal number of reads and writes with
// 20% I/O, 20% Config, 55% Memory, 5% Messages.
// For Memory: 20% 1 dword, 40% 4 dword, 40% 8 dword.
// Set address ranges corresponding to the Completer devices on the bus.
xact = new(Requester1,
  "XFER_TYPE READ=50%, WRITE=50%;
   XFER_CFG I/O=20%, CONFIG=20%, MEM=55%, MSG=5%;
   if (XFER_CFG == I/O)
     XFER_SIZE 1=100%,
     ADDRESS = 32'h0000 to 32'hffff;
   if (XFER_CFG == CONFIG)
     XFER_SIZE 1=100%,
     ADDRESS = 32'h0000 to 32'h00ff;
   if (XFER_CFG == MSG)
     XFER_SIZE 1=100%,
     ADDRESS = 32'h0000 to 32'hffff;
   if (XFER_CFG == MEM)
     XFER_SIZE 1=20%, 4=40%, 8=40%,
     ADDRESS = 64'h00000000 to 64'h10000000;
   end if");

// Create a random payload and associate it with the transaction generator.
payload = new(-1);
xact.setPayload(payload);

// For memory transfers, create a buffer and fill it from the payload.
if (XFER_CFG == MEM) {
  Requester1.new_buffer(`DW_VIP_PCIE_TX_MEM_BLOCK, Buf_Handle4);
  if (i < XFER_SIZE) {
    Requester1.set_buffer_dword(Buf_Handle4, 16'h[i], payload);
    i = i + 1;
  }
  else
    i = 0;
}

// You will need to create buffers and payload for all
// possible packet configurations.

// Set up a watchpoint to catch the end notification on all
// transactions.
xact.setEndNotifyId(1);
Requester1.VMT_CREATE_WP_TRANSACTION_NOTIFY(1, wp_handle);

fork {
  // Start generating random transactions.
  // Transaction generation will not end on its own since
  // the payload never runs out.
  Requester1.startTransactions(0, xact);
}
Figure 2. Configuring a Requester device for constrained random packet generation.
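The third termination path described earlier, loading a new configuration, reuses the same calls shown in Figure 2: build a new transaction generator with different constraints and restart the Requester. The constraint values below are illustrative only.

  // Later in the test: load a new configuration into the Requester,
  // shifting the mix toward memory writes (illustrative values).
  xact = new(Requester1,
    "XFER_TYPE READ=20%, WRITE=80%;
     XFER_CFG MEM=100%;
     XFER_SIZE 4=50%, 8=50%,
     ADDRESS = 64'h00000000 to 64'h10000000;");
  Requester1.startTransactions(0, xact);

Figure 3 shows the notification side: a Verilog testbench using a watchpoint to receive an asynchronous notification each time a transaction layer packet enters the link layer.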
initial begin: notify_handler1
  @(posedge SystemClock_r);
  // Watch for a packet entering the link layer.
  Requester1.create_watchpoint(`VMT_MESSAGE_ID, `PCIE_MSGID_TX_TLP_TO_LINK_LAYER, wp2);
  Requester1.set_watchpoint_trigger(tx_sid_req, wp2, `VMT_WP_TRIGGER_PARAM, `VMT_WP_TRIGGER_HANDSHAKE);
  forever
    begin
      Requester1.watch_for(wp2, info);
      Requester1.get_watchpoint_data_int(info, `PCIE_MSGID_TX_TLP_TO_LINK_LAYER_ARG_CMD_HANDLE, value, status);
      if (status == `VMT_INVALID_HANDLE)
        $display("There was an error extracting the command handle for the TX_TLP_TO_LINK_LAYER notification!");
      else
        $display("\n*** transaction layer packet in the Tx queue passed the flow control gate and enters the link layer *** %0d at time %0d", value, $time);
    end
end
Figure 3. Using a watchpoint to monitor transaction layer packets entering the link layer.
Constrained random test capabilities, coupled with advanced analysis features like watchpoints and coverage metrics, allow the creation of self-checking testbenches. This new generation of verification models brings advanced testbench capabilities to all designs, and operates with the testbench language of choice: Verilog, VHDL, or OpenVera™. The verification model is a protocol IP block for the testbench. Testbench functionality is incorporated into the behavior of the model, and the model can act as an interrupt mechanism into the testbench.
The real value of these new verification models is what they do for the verification engineer: they make it possible to quickly create sophisticated, self-checking testbenches for the system under test. Configuring the models to generate and respond in a constrained random way takes little time, so verification starts sooner and more cycles are spent in simulation. Using constrained random transactions fed from various sources (random, testbench-generated, application-specific) exercises the system more realistically and translates into more scenarios being covered.
Summary
Synopsys interface protocol models can save months of time in the development of a testbench environment. Verification engineers using any language gain access to constrained random technology, leading to more effective system verification. Synopsys models do not force a major change of methodology: they can be used for both directed and randomized testing. Synopsys bus protocol models have been successfully used and proven on hundreds of designs. These models are included in the DesignWare Library and the DesignWare Verification Library.
©2003 Synopsys, Inc.
Synopsys and DesignWare are registered trademarks of Synopsys, Inc. OpenVera is a trademark of Synopsys, Inc. All other brands, products, or service names mentioned herein are trademarks of their respective holders and should be treated as such. All rights reserved.