by Mick Posner, Product Marketing Manager
It’s common knowledge that the verification stage for a given system is around 70% of the overall design effort and schedule time. Reducing overall time spent in test creation and design verification is a high priority. Success in these two areas increases productivity and helps deliver products to market faster. To achieve these verification goals, engineers are constantly looking for new and innovative ways to conquer the verification challenges that face them.
This article discusses a layered verification approach as applied to an AMBA-based system component. The layered approach is used to create a standardized verification environment that can adapt as the design challenges increase. Typically, reuse is very high within an AMBA-based system because many new designs are based on earlier versions of the standard system. The example shows the layered approach being applied to verify an individual block as well as its integration into the subsystem and final system representation.
The example AMBA-based system in Fig 1 shows user-created (red) and third-party intellectual property (IP) blocks (blue), connected to both the AMBA AHB and the AMBA APB buses. The layered verification approach can be applied to both the AHB and the APB buses as well as across bus boundaries. The same approach can also be used for multi-layer designs and the new Advanced eXtensible Interface (AXI). The system is simplified for graphical layout. The example focuses on the user-created block connected to the AMBA AHB bus.
Figure 1: Example of an AMBA System Under Verification
Verification Environment Goals
- Support verification of standalone block level components
- Support subsystem component integration and subsystem verification
- Support a full system context integration and full system verification
- Improve overall coverage
- Offer fast integration of third-party IP blocks
The verification environment itself must be evolutionary so that it can be reused in subsequent verification projects.
To achieve the stated goals, a layered approach to verification is required. The layered approach is applied to individual block-level verification as well as at the subsystem and full system levels. Each layer of tests builds on the layer below it, so moving from layer to layer requires minimal effort. Each layer of tests is portable, so it can be reused at a subsequent layer or within a new verification project.
There are fundamentally three layers. The layer 1 tests target interface protocol verification. Layers 2 and 3 target application-specific logic verification using realistic data traffic generation.
Figure 2: Verification Layers
The goal of layer 1 is to test the physical bus interface and ensure that it does not violate bus protocols. The interface must adhere to the defined protocol, AMBA AHB or APB in this scenario. The layer 1 tests are a set of directed tests that check that all of the bus's different cycle types can be correctly executed. Individual tests are created to exercise specific areas of the bus protocol. Once all basic transactions have been covered, layer 2 tests can begin.
The goal of layer 2 is to generate transaction sequence tests that not only stress the bus interface logic but also target the application-specific logic. Layer 2 tests are structured to generate realistic design traffic. To fully achieve the layer 2 goals, constrained random techniques must be applied to the verification environment.
The main benefit of using a constrained random approach at layer 2 is how quickly the first tests come together: with a couple of simple bus-functional commands, bus cycles are generated. High bus cycle and functional coverage are achieved very quickly, and more corner cases are found. The coverage statistics will be far more complete than what could be achieved using directed tests alone. This constrained random environment is able to generate huge amounts of stimuli from a minimum of testbench code. Because it is constrained to the design requirements, simulation cycles are not wasted by inadvertently activating unnecessary sections of the subsystem. The constrained random traffic will stress the design block under verification far more than directed tests can. The realistic representation of the traffic also thoroughly tests the block's application-specific logic in a manner much closer to how the physical silicon will act.
Example of Constrained Random Stimuli Applied to Simple AMBA AHB Cycles
Constrained Random Transactions (CRT) functionality enables a spread of cycles to be generated in a simple and quick fashion. For a given AHB transaction, the transfer type, address, size and burst type can be constrained.
For example, an AHB master could generate a mix of read and write cycles of varying sizes and burst widths. Combinations of read transfers 10% of the time, write transfers 70% of the time, and idle the remaining 20% can quickly be constructed. Coding this in a traditional HDL is very difficult.
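The weighted mix described above is easy to express once randomization is weighted rather than uniform. The following is a minimal conceptual sketch in Python, not OpenVera or any real verification library; all names are illustrative:

```python
import random

# Illustrative transfer-type weights from the example in the text:
# 10% reads, 70% writes, 20% idle cycles.
WEIGHTS = {"READ": 10, "WRITE": 70, "IDLE": 20}

def gen_transfers(n, rng=None):
    """Generate n transfer types according to the constraint weights."""
    rng = rng or random.Random()
    kinds = list(WEIGHTS)
    weights = [WEIGHTS[k] for k in kinds]
    return rng.choices(kinds, weights=weights, k=n)

# Over a long run, the observed mix converges on the constraint weights.
sample = gen_transfers(10_000, random.Random(0))
counts = {k: sample.count(k) for k in WEIGHTS}
print(counts)
```

In a hardware verification language the same intent is declared as a weighted constraint on the transaction class, and the solver picks each transfer type as the transaction is randomized.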
A fully constrained random environment is defined as a set of transactions, with a layer of sequences above that, a layer of choices above the sequences, and the transaction constraints as the final layer. The payload is fed into the system, which creates an autonomous stimuli generator. Individual transactions are joined together to create a sequence. Sets of sequences are joined together to create a choice. Sets of choices will produce a wide variety of transaction cycles and responses.
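The transaction/sequence/choice stack above can be sketched as nested data structures driving a single generator loop. This is a conceptual Python illustration under assumed names, not a real verification API:

```python
import random

# Bottom layer: individual transactions (illustrative fields only).
TRANSACTIONS = {
    "rd":   {"kind": "READ"},
    "wr":   {"kind": "WRITE"},
    "idle": {"kind": "IDLE"},
}

# Next layer: a sequence is an ordered list of transactions.
SEQUENCES = {
    "write_then_read": ["wr", "rd"],
    "burst_write":     ["wr", "wr", "wr", "wr"],
    "quiet":           ["idle", "idle"],
}

# Top layers: a choice is a weighted set of sequences; the weights
# act as the transaction constraints steering what fires next.
CHOICES = [("write_then_read", 50), ("burst_write", 30), ("quiet", 20)]

def generate(n_choices, rng):
    """Pick n_choices sequences by weight and flatten them into a
    stream of transactions -- an autonomous stimulus generator."""
    names = [name for name, _ in CHOICES]
    weights = [w for _, w in CHOICES]
    stream = []
    for name in rng.choices(names, weights=weights, k=n_choices):
        stream.extend(TRANSACTIONS[t] for t in SEQUENCES[name])
    return stream
```

Each layer can be extended independently: adding one new sequence immediately enlarges the space of generated traffic without touching the generator itself.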
Figure 3: Example of Constrained Random Transaction Generation
Layer 3: Application Specific Tests
Tests at layer 3 are used to raise the overall confidence in the design's stability. Full sign-off scenarios are run, which include system and application boot sequences. The software-to-software interfaces can be checked at this level. The final application APIs and drivers are tested, and a more focused performance trial can be executed.
Now, a full context validation can be achieved. The layer 3 tests target the higher-level functions of either the individual block or the system. Testing at layer 3 typically uncovers bugs that traditionally have only been uncovered when the final product was already available in the lab.
The goal of the layer 1 tests is to fully verify the block's interface, while the goal of the layer 2 and 3 tests is to verify both the interface and, more importantly, the block's application-specific logic. All access for these tests is via the AMBA bus. The layers are applied at each level of the design verification process, from block-level verification to subsystem and finally full system verification.
Figure 4: Layer Test Functionality Target
Applying the layered verification approach to block level verification
Before the AMBA-based system can be constructed, individual blocks have to be functionally tested. The individual blocks are verified in a standalone environment and tests are generated at each of the layers. This is designed to capture fundamental functional errors before subsystem integration begins. At this level, the testbench can be written in either a traditional HDL, such as VHDL or Verilog, or in a hardware verification language, such as OpenVera™. OpenVera is designed to ease testbench creation and is structured for verification automation using a tool such as Synopsys Vera®, thus easing the overall verification tasks.
At the block-level verification stage, the layer 1 tests check that the block can be accessed via the defined AMBA interface. The block must conform to the AMBA protocol before subsystem integration can start. All bus cycles must be checked to ensure that the block will function when connected to other blocks in the AMBA system. The best way to achieve this in the shortest time and with maximum coverage is to drive the user logic block with verification IP such as Synopsys’ DesignWare® Verification IP.
Figure 5: DesignWare AMBA Master and Monitor Used During Layer 1 Tests
In this example, the AHB master verification IP is used to generate the directed read and write tests that will fully exercise the AHB bus interface on the user-defined block. A monitor is used to check that none of the AMBA protocol is violated, as well as to capture bus cycle coverage information. Monitoring the coverage lets the verification engineer know how much of the bus interface has been tested. A goal of 100% bus coverage at layer 1 using directed tests may be impossible to achieve. A more realistic target at layer 1 should be 100% bus cycle or transaction coverage. This goal ensures that the block can respond to most of the AMBA cycles and is achievable in a minimum amount of time.
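Bus cycle coverage of the kind the monitor collects can be modeled as a set of bins, one per combination of transaction attributes. The sketch below is a simplified illustration in Python; the bin dimensions and class names are assumptions for the example, not the DesignWare monitor's actual coverage model:

```python
from itertools import product

# Hypothetical coverage model: every combination of transfer direction,
# burst type and transfer size counts as one bus-transaction bin.
DIRECTIONS = ["READ", "WRITE"]
BURSTS = ["SINGLE", "INCR", "INCR4", "WRAP4"]
SIZES = [8, 16, 32]

ALL_BINS = set(product(DIRECTIONS, BURSTS, SIZES))

class CoverageMonitor:
    """Records each observed bus transaction and reports bin coverage."""
    def __init__(self):
        self.hit = set()

    def observe(self, direction, burst, size):
        self.hit.add((direction, burst, size))

    def coverage(self):
        # Percentage of distinct transaction bins seen so far.
        return 100.0 * len(self.hit) / len(ALL_BINS)

mon = CoverageMonitor()
mon.observe("READ", "SINGLE", 32)
mon.observe("WRITE", "INCR4", 32)
print(f"{mon.coverage():.1f}% of bus-transaction bins hit")
```

Directed layer 1 tests tick off these bins one at a time, which is why 100% transaction coverage, rather than full functional coverage, is the realistic layer 1 target.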
To achieve higher functional coverage, the layer 1 test environment can be quickly expanded to support the layer 2 tests. At layer 2, more realistic bus traffic data is required to test the block's functionality. To fully achieve the layer 2 goals, constrained random techniques must be applied. Creating random traffic at layer 2 is very important because it stresses the bus and reveals corner cases that may not have been considered. Generating directed tests to do this would not only take a great amount of time, but would also fail to mimic a real AMBA environment. These sequences of transactions begin to test the application-specific functionality of the user-defined block.
Heavy AMBA accesses can be performed and block-to-block data integrity can be checked to test the user-defined block in a more complete fashion. Sequences of transactions should be generated to check the user-defined block's functionality in the context of a more complete system.
The goal of the layer 1 tests, conformance to the AMBA protocol, is still valid at layer 2. The layer 2 tests will generate a more complete coverage of the AMBA AHB protocol. The constrained random stimuli will produce tests that get closer to the 100% bus transaction coverage goal.
Layer 3 application-specific tests can also be generated using this same setup. Tests can be created that mimic the real application's functionality, such as cache accesses, DMA transfers and device boot configuration. The only limiting factors to running boot and application configuration tests are simulation cycles and wall-clock time. Tests executed at layer 3 using a traditional full-functional CPU model would require millions of clock cycles to complete. Using the AHB master verification IP reduces the required clock cycles and makes running the layer 3 block tests manageable. The overall advantage of the layer 3 tests is that they stress the block in a fashion that is closer to its final application. Testing the register loading during a boot sequence, for example, will flush out functional bugs that could lead to a final system crash during boot-up.
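The boot-sequence register check mentioned above reduces to "write the configuration, read it back, compare." A minimal sketch in Python, with invented register addresses and values purely for illustration; the `write`/`read` callables stand in for the AHB master's bus-functional tasks:

```python
# Hypothetical boot sequence: a list of (address, value) register writes.
BOOT_SEQUENCE = [
    (0x1000, 0x0001),  # illustrative: clock-enable register
    (0x1004, 0x00FF),  # illustrative: interrupt mask
    (0x1008, 0x0010),  # illustrative: DMA channel configuration
]

def run_boot_test(write, read):
    """Apply the boot writes, then verify each register read-back.
    Returns a list of (addr, expected, actual) mismatches; empty on pass."""
    for addr, value in BOOT_SEQUENCE:
        write(addr, value)
    return [(a, v, read(a)) for a, v in BOOT_SEQUENCE if read(a) != v]

# Stand-in "device": a plain register file, for illustration only.
regs = {}
errors = run_boot_test(regs.__setitem__, regs.get)
```

Driving this from the AHB master verification IP instead of a full CPU model keeps the cycle count manageable while exercising exactly the register traffic the real boot code would produce.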
Applying the Layered Verification Approach to Subsystem Level Verification
Once the individual blocks have been fully verified, they can be integrated into the subsystem environment. Each level of the layered verification approach is applied to the subsystem environment in the same way that it was for block-level verification. The goals of each layer are the same: for layer 1, protocol checking; for layer 2, transaction sequence generation; and for layer 3, application-specific tests. Incorporating each block into a subsystem tests not only the individual block, but also the subsystem construction itself.
The subsystem contains the real traffic generators, such as PCI Express and USB, and the physical memory and memory controller blocks. The blue blocks represent third-party IP that has been delivered as configurable, pre-verified IP. These pre-verified IP blocks do not require standalone block-level verification, because the IP provider should already have completed it, but each block must still be verified within the subsystem context.
Figure 6: The Subsystem Verification Environment
Layer 1 tests check that the subsystem has been connected correctly and that none of the subsystem blocks violate the AMBA protocol. The layer 1 tests created for the block-level verification task can be reused for the subsystem verification. The tests can be quickly modified to target a subsystem protocol check as well as individual unit checks within the subsystem.
The block-level constrained random transaction layer 2 tests can also be reused. The same AHB master constrained random tasks can be executed to generate tests within the subsystem. In addition to the AHB master, the newly added PCI Express and USB blocks shoulder most of the burden of creating the bus traffic. These blocks are DesignWare Verification IP, so only a small amount of ramp-up time is required before the first tests are up and running. Constrained random techniques should be applied to these real traffic generators just as they were applied to the AHB master within the block-level tests. Very quickly this subsystem verification environment will generate huge amounts of design data that will thoroughly test both the interfaces of each block and the overall application.
Coverage is again important at this level. As the overall design under verification gets larger, generating tests to achieve the coverage goals becomes increasingly difficult. At the subsystem level, it is almost impossible to code directed tests that will stimulate all possible configurations of even the simplest of AMBA subsystems. Constrained random transaction generation is the only way that users can generate enough subsystem tests to achieve coverage goals.
The subsystem level is the first place where the engineer gets to monitor how the user-defined block interacts with the other subsystem components. Both interface logic and application-specific logic bugs are uncovered. The layer 2 and 3 tests will flush out these bugs very quickly. The layer 1 tests will notify the user of protocol violations, and the layer 2 and 3 tests will check to make sure all subsystem components react correctly.
The layer 3 application-specific tests that were run at the block level can be expanded to cover multiple components within the subsystem. For example, a configuration of the PCI Express and memory controller IP can be tested to see whether data can be passed in through the PCI Express port and stored in external memory. Golden beginning and ending data images can be used to verify that the correct data was stored at the correct addresses. This test also stresses the bus fabric and arbitration logic.
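The golden-image comparison is conceptually a word-for-word diff between the expected payload and the ending memory image. A small Python sketch, with an invented base address and payload for illustration:

```python
def check_memory_image(memory, golden, base_addr):
    """Compare the ending memory image against the golden image.
    Returns a list of (address, expected, actual) mismatches."""
    mismatches = []
    for offset, expected in enumerate(golden):
        addr = base_addr + offset
        actual = memory.get(addr)
        if actual != expected:
            mismatches.append((addr, expected, actual))
    return mismatches

# Illustrative scenario: the transfer should land the payload at
# 0x8000_0000 (a made-up external memory address).
golden = [0xDE, 0xAD, 0xBE, 0xEF]
memory = {0x8000_0000 + i: b for i, b in enumerate(golden)}
memory[0x8000_0002] = 0x00          # inject one corrupted byte
bad = check_memory_image(memory, golden, 0x8000_0000)
```

Because the check reports the exact address and value of each mismatch, a failing run points directly at the word that the bus fabric, arbitration logic, or memory controller mishandled.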
Final System-Level Verification
At the system level, the layered approach can also be applied, but now a considerable amount of the verification burden is shouldered by the actual system-level components. In effect, the system representation generates its own tests because it models the actual final system. The existing layer 1, 2 and 3 tests from the subsystem verification environment can be reused to check that the system environment has been created correctly.
Within the system level, pre-verified third-party IP blocks should be used to build up a complete representation of the system. Reusing IP blocks will reduce the effort required to create the system environment, leaving designers more time to focus on their core competencies.
Figure 7: The System Verification Environment
At the system level, the verification goal is to check that the complete system acts as the specification requires. Bus protocol must still be adhered to, but more focus is on higher-level application testing. At the system level, real application software is used to generate the test scenarios. This software is executed on a full-functional processor model rather than the AHB master model that was used at the lower levels.
The advantage of running the real application code is that it creates an environment that is as close to a real-life system as possible. The disadvantage is that it is very hard to control bus cycles when software code is being executed. The code may not generate the complete set of bus cycles required to test all of the bus's transaction protocols. This is why it is so important to re-run the layer 1 and 2 tests within the system-level environment. Layer 3 tests mimic boot sequences and full system configuration.
All system-level tests should be portable to the real silicon. Software tests executed within the system-level simulation environment can be rerun on the real silicon to verify its function. Scenario differences in the real silicon operation can be replayed in the system-level simulation to ease the debugging effort.
Adoption of the layered verification approach will significantly improve the overall verification environment and raise the quality of individual AMBA-based designs and systems. Using the layered approach, users find bugs more quickly at each verification level. By utilizing existing verification blocks, such as those found in the Synopsys DesignWare library, the effort of subsystem/system creation and test creation is dramatically reduced. By applying constrained random techniques to the verification flow, higher test coverage is achieved more quickly than with standard directed testing alone. The tests at each layer within the block, subsystem and system levels are designed to uncover functional bugs that may have been close to impossible to identify using directed tests only. The end result is a more stable AMBA system with a higher probability of first-silicon success.