Methodology Independent Exhaustive Constraint Solver for Random Verification and Regression Generation
Ravi Mangal, Puneet Khandelwal, Rajkumar Agrawal, Arvind Kaushik
Freescale Semiconductor India Pvt. Ltd. (NXP Group of Companies)
Abstract — Constrained random verification is a standard industry approach to testing digital intellectual properties. Currently used randomization methods neither guarantee a unique testcase for every randomization seed nor reproducibility of scenarios when the testbench changes. Moreover, the traditional approach provides no control over the exhaustiveness of testcases, so directed scenarios must be identified for coverage closure.
In this paper we propose a solution that solves constraints at the top level and passes them to the testbench as command-line parameters. This approach gives control over parameter randomization, which eliminates the possibility of producing duplicate parameter sets and therefore saves regression run time. A similar set of scenarios can also be reproduced through a seed. The method also provides control over the exhaustiveness of testcases, which improves the quality of verification. Constraint solving is done independently of the verification methodology and of the programming language in which the testbench is written, so it reduces dependency on a particular skill set.
Keywords— Coverage, Constrained randomization, hardware verification, VLSI
I. INTRODUCTION
In the design phase, a system on chip (SoC) is divided into several small interdependent blocks called intellectual properties (IPs). Once all the IPs in an SoC are designed, they are integrated together and the system is verified against its written specifications. Rising frequency and data-rate requirements, along with shrinking nanometer technology, keep increasing the complexity of IP design. An example is the evolution of the common public radio interface (CPRI) protocol from version 1.4 to 6.1, shown in Table I.
TABLE I. CPRI EVOLUTION
CPRI protocol version | Maximum Line Rate supported (Mbit/s) |
CPRI 1.4 | 2457.6 |
CPRI 4.0 | 3072.0 |
CPRI 5.0 | 9830.4 |
CPRI 6.1 | 12165.12 |
An IP is a reusable block of logic that must be verified before it is integrated into an SoC. A verification testbench has to be created in parallel with the design work to shrink time to market.
The major components of a testbench are the testcase, the checker, and coverage, as shown in Fig. 1. The testcase programs the design to test a specific functionality and produce a specific output. The checker compares the design output with the expected behavioral output and looks for errors in the design. Functional coverage checks and samples the functionality that has been exercised by the testcase. To verify the different functionalities, many directed and random testcases are written and a regression of testcases is run on the design. Once the targeted functional verification number is achieved, verification of the IP is closed and the final delivery is made to the SoC team.
Fig. 1: Verification testbench
Directed testcases are mostly written to verify the important functionality of the IP. Verification using directed stimulus is time consuming, since predefined stimuli have to be hardcoded into each directed testcase. Constrained-random verification allows the IP to be tested aggressively with unpredicted scenarios and hence covers more functionality.
II. VERIFICATION FLOW
The verification process starts alongside the design of the IP block. Verification planning includes compiling lists of the directed and random testcases needed to test the major functionalities and use-case scenarios of the IP in the system on chip, as shown in Fig. 2. Planning also includes the design of the checker and the identification of the coverage scenarios to be written for verification sign-off. After the testbench structure and testcases are programmed, a regression is run on the DUT to check the functionalities, and coverage is collected on clean simulation runs. To meet coverage targets, directed scenarios are identified and corresponding testcases are produced and run on the DUT. In constrained random verification (CRV), it often happens that testcases fired with different seeds produce similar sets of scenarios, so running these testcases does not improve the coverage numbers; it only increases simulation time and lengthens time to market.
Fig. 2: Verification Flow
The solution we propose generates a regression list with randomized parameters that are passed from the command line. This eliminates repeated runs of similar scenarios, because the solution discards regression commands with identical parameters. Moreover, the solution can generate parameters exhaustively, so for a given IP block the need to identify directed scenarios can be avoided entirely by generating an exhaustive regression list.
III. PROBLEMS WITH EXISTING APPROACH AND SOLUTIONS
Let us look at the problems with the existing verification approach and the solutions we propose for each.
A. Control over exhaustiveness of verification
The traditional approach, where parameters are randomized inside the testcase, gives little control over exhaustiveness, because randomly selecting sets of parameters does not ensure that all possible combinations of those parameters are exercised. This leads to the need to identify the left-out scenarios and to write specific directed testcases in order to achieve the coverage target.
The proposed solution generates an exhaustive list of testcases at the top level, exercising all possible combinations of parameters, and passes them to a parameterized testcase rather than relying on the internal constraint solver of the testbench programming language. This eliminates the need to identify and write directed tests. Moreover, the approach takes only a few seconds to generate a regression list of more than tens of thousands of testcases.
The approach also supports seed-based regression generation, which makes a generated regression reproducible, and it supports weighted randomization.
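The idea can be illustrated with a minimal Python sketch. The reduced parameter set, the function name exhaustive_regression, and the command format below are illustrative assumptions, not the production script; the value lists are taken from Table III. Enumerating the full cross product makes the list exhaustive, shuffling with a fixed seed keeps it reproducible, and weighted draws can be layered on top with random.choices when a weighted rather than exhaustive list is desired.

```python
# Minimal sketch, assuming a simple three-parameter testcase; the real
# parameter set and run-command format are project specific.
import itertools
import random

LINE_RATES = ["1.2G", "2.4G", "3G", "4.9G", "6G", "9.8G"]
TRANSACTION_SIZES = [32, 64, 128, 256, 512, 1024]   # TS_BWx in bytes
BURST_SIZES = list(range(1, 49))                     # BS_BWx: 1..48

def exhaustive_regression(seed=0):
    """Yield every parameter combination exactly once, in a seed-reproducible
    order, formatted as a command-line fragment for the testcase."""
    rng = random.Random(seed)                        # same seed -> same list
    combos = list(itertools.product(LINE_RATES, TRANSACTION_SIZES, BURST_SIZES))
    rng.shuffle(combos)                              # random but repeatable order
    for lr, ts, bs in combos:
        yield f"<testcase_run_command> <log_file_path> {lr} {ts} {bs}"

# Example: write the full list (6 * 6 * 48 = 1728 unique commands) to a file.
if __name__ == "__main__":
    with open("regression.list", "w") as fh:
        for cmd in exhaustive_regression(seed=1):
            fh.write(cmd + "\n")
```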
B. Regression Run time
In the traditional approach, randomization with different seeds does not always guarantee a unique set of parameters. This leads to situations where the same set of parameters is exercised in more than one run, which wastes simulation time without improving the coverage numbers.
The proposed solution eliminates the possibility of duplicate parameter sets and saves simulation time, which can instead be used to test untested scenarios.
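A hedged sketch of the duplicate-elimination step is shown below; the helper name accept is ours, not part of the original flow. Because the parameters are drawn at the top level, a set of already-issued parameter tuples is enough to guarantee that no two commands in the regression list carry the same configuration.

```python
# Sketch of top-level duplicate elimination (illustrative helper).
seen = set()

def accept(params):
    """Return True only the first time a given parameter set is seen."""
    key = tuple(sorted(params.items()))   # canonical, hashable form
    if key in seen:
        return False                      # duplicate scenario -> skip the run
    seen.add(key)
    return True
```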
C. Effective use of employee skill set
The traditional approach requires the developer to be proficient in the testbench environment and its programming language in order to write constraints for CRV. This calls for a skill ramp-up process that consumes the organization's time and resources.
The proposed solution removes the requirement of using the same programming language as the testbench, and an understanding of the testbench structure is not mandatory either. Constraints can be written at the top level in any scripting language the developer is comfortable with.
IV. CONSTRAINT SOLVING PROBLEM IN CPRI
Here we present a case from a Freescale internal verification methodology that uses a C++-based testbench environment to drive stimulus to the IP. In this environment the testcases, along with the checker, are written in C++. The testbench accesses the DUT through the back door, and a monitor sends coverage-related information to a coverage collector written in SystemVerilog. An AXI interface is used to write data to and read data from the target IP.
A. Problem Statement
We take the example of a CPRI IP in which data corresponding to multiple bandwidths can be transferred with different transaction sizes (TS_BWx) over radio links that operate at different line rates (LR), through several possible data paths, each supporting a maximum number of antennas (N_AC) as per Table II.
TABLE II. MAX ANTENNAS SUPPORTED FOR DIFFERENT LINE RATES
Line Rate (LR) | Maximum Number of Antennas For Line Rate (N_AC) |
1.2G | 7 |
2.4G | 15 |
3G | 18 |
4.9G | 24 |
6G | 24 |
9.8G | 24 |
Keeping in mind the limited size of the memory buffer (MEM_BUFFER), we need to randomize parameters such as the line rate and data paths supported, the transaction size, and the burst size of the data to be transferred (BS_BWx), along with randomly enabled antennas across the different bandwidths (BWx_EN).
The number of antennas for a particular line rate should be less than or equal to the number of antennas supported by CPRI (N_CPRI):
N_AC <= N_CPRI
The sum of the antennas enabled (N_AC_BWx) across all bandwidths should be less than or equal to N_AC:
Σ N_AC_BWx <= N_AC
The memory buffer consumed (MEM_BUFFER_BWx) by a particular bandwidth equals twice the transaction size multiplied by the burst size, summed over all enabled antennas for that bandwidth:
MEM_BUFFER_BWx = 2 * N_AC_BWx * BS_BWx * TS_BWx
The sum of the memory buffer consumed by all bandwidths should be less than or equal to the total available memory buffer:
Σ MEM_BUFFER_BWx <= MEM_BUFFER
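These four relations translate almost directly into code. The sketch below is an illustrative check that a top-level solver could apply to a candidate configuration; the function and dictionary layout are our assumptions, but the arithmetic follows the relations above, with the constant values taken from Table III.

```python
# Constraint check for one candidate configuration (illustrative sketch).
N_CPRI = 24        # antennas supported by CPRI (Table III)
MEM_BUFFER = 8192  # total memory buffer in bytes (Table III)

def config_is_legal(n_ac, n_ac_bw, bs_bw, ts_bw):
    """n_ac_bw, bs_bw, ts_bw are per-bandwidth dicts keyed 'BW0'..'BW3'."""
    if n_ac > N_CPRI:                     # N_AC <= N_CPRI
        return False
    if sum(n_ac_bw.values()) > n_ac:      # sum of N_AC_BWx <= N_AC
        return False
    total = sum(2 * n_ac_bw[bw] * bs_bw[bw] * ts_bw[bw]   # MEM_BUFFER_BWx
                for bw in n_ac_bw)
    return total <= MEM_BUFFER            # sum of MEM_BUFFER_BWx <= MEM_BUFFER
```

For example, the sample configuration of Table IV consumes 512 + 3456 = 3968 bytes, well within the 8192-byte buffer, so it passes this check.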
Possible values of described parameters are mentioned in Table III.
TABLE III. POSSIBLE VALUES OF PARAMETERS
Field Name | Possible Values |
TS_BWx (In Bytes) | 32, 64, 128, 256, 512, 1024 |
BS_BWx | All possible values from 1-48 |
LR | 1.2G, 2.4G, 3G, 4.9G, 6G, 9.8G |
N_CPRI | 24 |
Data paths supported | 6 (L1, L2, L3, L4) |
Bandwidths Supported | 4 (BW0, BW1,BW2,BW3) |
MEM_BUFFER (In Bytes) | 8192 |
After the number of antennas in each bandwidth is decided based on Table II, a 24-bit vector is computed for every bandwidth indicating which antenna positions have been enabled for that bandwidth.
The enabled antennas are numbered across bandwidths in such a way that their placements in the 24-bit vectors never overlap, i.e. the bitwise AND of any two placement vectors always results in 0:
AC_BW0 & AC_BW1 & AC_BW2 & AC_BW3 = 0
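One straightforward way to satisfy this non-overlap constraint is to draw all enabled antenna indices without replacement and then partition them among the bandwidths. The sketch below illustrates that idea; the helper place_antennas is our assumption, not the production script.

```python
import random

def place_antennas(n_ac_bw, n_cpri=24, seed=0):
    """Return one 24-bit vector per bandwidth, with pairwise disjoint bits.
    n_ac_bw is a dict such as {'BW0': 1, 'BW1': 3} (see Table IV)."""
    rng = random.Random(seed)
    # Distinct positions for every enabled antenna, across all bandwidths.
    positions = rng.sample(range(n_cpri), sum(n_ac_bw.values()))
    vectors, idx = {}, 0
    for bw, count in n_ac_bw.items():
        vectors[bw] = sum(1 << bit for bit in positions[idx:idx + count])
        idx += count
    return vectors   # AND of any two vectors is 0 by construction
```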
B. Testbench architecture comparison
Let us look at the architecture of the existing testbench, which supports CRV through a C++-based constraint solver, as shown in Fig. 3.
Fig. 3: Constraint solver within C++ testcase
The proposed solution solves the constraints at the top level and passes them to the testbench, as shown in Fig. 4.
Fig. 4: Constraint solver sitting on top of testbench
C. Solution
The above problem was implemented at the top level using Python; a sample result, shown in Table IV, is generated when the script is executed.
The example above is for illustration purposes only; the real problem at hand had around 200 programmable 32-bit configuration registers. Using the existing C++ constraint solver, the verification engineer was not able to stabilize the interdependent constraints and achieve the required level of randomization. Given the stringent project deadlines and the team's expertise in Python, the idea of solving the constraints at the top level was conceived.
TABLE IV. SAMPLE CPRI CONFIGURATION
Parameter Name | Parameter Value |
LR | 1.2G |
N_AC | 7 |
BW0_EN | 1 |
BW1_EN | 1 |
BW2_EN | 0 |
BW3_EN | 0 |
N_AC_BW0 | 1 |
N_AC_BW1 | 3 |
AC_BW0 | 001000000000000000000000₂ (2097152₁₀) |
AC_BW1 | 000010000000000001000010₂ (524354₁₀) |
TS_BW0 | 128 |
TS_BW1 | 64 |
BS_BW0 | 2 |
BS_BW1 | 9 |
MEM_BUFFER_BW0 | 512 |
MEM_BUFFER_BW1 | 3456 |
With the help of this approach we were able to randomize more than 55 interdependent variables within a single testcase, and a regression list of 10,000 exclusive testcases was prepared in less than 10 seconds. Below is one example of a run command from the list produced by the script.
source <testcase_run_command> <log_file_path> RATE_6G RATE2_6G 1048576 0 20 11 16 4 14 13 1 8 6 15 22 10 9 0 3 12 21 23 17 5 2 18 7 19 5 15 22 1 9 16 11 10 20 19 2 12 21 23 13 4 18 0 14 7 17 8 3 6 LANE_4 32 1024 66 1 4288
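The exact token order in such a command is specific to the testcase. The final step of the flow is simply to flatten each solved configuration into a line of this form and append it to the regression list; a hedged sketch, with an assumed flatten_config helper, is shown below.

```python
# Illustrative final step: serialize solved configurations into run commands.
def flatten_config(cfg):
    """cfg: dict of solved parameters; the real token order is testcase specific."""
    return " ".join(str(cfg[key]) for key in sorted(cfg))

def write_regression_list(configs, path="regression.list"):
    with open(path, "w") as fh:
        for cfg in configs:
            fh.write("source <testcase_run_command> <log_file_path> "
                     + flatten_config(cfg) + "\n")
```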
V. SUMMARY OF ADVANTAGES
The solution has the following advantages over the traditional approach:
- Generation of exhaustive regression lists in seconds
- Reduced run time for regression and coverage completion
- Reproducibility of test scenarios remains unaffected by changes in testbench as randomization logic is independent of testbench
- Reduction in development cost and time to market
- Effective utilization of employee skills
- No verification methodology or language barrier
VI. CONCLUSION
Our experience and results with the discussed approach suggest that the solution is an effective constraint solver for generating regressions in random verification. As it is a time- and cost-effective solution, it can be embedded into existing verification methodologies.