SoCs: Supporting Socketization
Verifying cores catches coding errors
By Michael T.Y. McNamara, Senior Vice President of Technology, Verisity, Mountain View, Calif., and Jeffrey J. Holm, Communication Products, LSI Logic Corp., Bloomington, Minn., EE Times
January 3, 2000 (3:23 p.m. EST)
Verification is usually misunderstood. People think it refers to high-quality cores, but for socketization the real issue is core delivery. What assumptions about the target system were made? What were the integration rules? How is the core to be integrated into different application domains? How is the integrator expected to verify correct integration, especially if the core is driven strictly by the system and not the ports?
What's needed for socketization of a core is to deliver verification tool kits alongside the core, just as you might deliver synthesis scripts or timing diagrams. These tool kits make the integration rules executable to automatically catch integration errors during chip- or system-level simulation. In addition, they offer metrics to measure when the core integration has been sufficiently verified.
Verifying a core introduces some unique challenges. The key difference between a core and a standalone design is the lack of knowledge about the surrounding environment that will host the core. The developer of the core must make as few assumptions as possible about the way his core will be used and the types of applications it will be embedded in. Still, the core developer needs to ensure a smooth integration.
One verification approach that proves to be a good solution to this lack-of-knowledge situation is random testing. If the core developer uses only directed tests to verify a design, he is guaranteed not to test all the possible ways his core may be used, because he can only test what he can think of. Achieving all the possible combinations of inputs and protocols, with all the combinations of different internal states of the core, is impossible using the directed approach.
A more random approach, however, in which tests are generated automatically, can get a much better coverage of all these different scenarios. This becomes even more effective with constraint-driven generation, where you constrain some of the parameters of the environment to focus on corner cases and troublesome areas, and let the generator take care of the rest.
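The idea behind constraint-driven generation can be illustrated with a minimal sketch. The field names, legal ranges, and constraint style below are purely illustrative (not from any real core or from Specman's e language); the point is that constrained fields are narrowed while unconstrained fields remain fully random.

```python
import random

# Hypothetical input descriptor for one core port: each field has a set
# of legal values, and a constraint can narrow any field to focus the
# generator on a corner case.
FIELDS = {
    "burst_len": list(range(1, 17)),   # legal burst lengths 1..16
    "addr_align": [1, 2, 4, 8],        # legal address alignments
    "wait_states": list(range(0, 4)),  # inserted wait states 0..3
}

def generate_test(constraints=None, seed=None):
    """Pick a random legal value for every field, honoring any supplied
    constraints; fields without constraints stay fully random."""
    rng = random.Random(seed)
    constraints = constraints or {}
    test = {}
    for name, legal in FIELDS.items():
        choices = [v for v in legal
                   if name not in constraints or constraints[name](v)]
        test[name] = rng.choice(choices)
    return test

# Unconstrained: explores the whole legal space.
print(generate_test(seed=1))

# Constrained: focus on maximum-length bursts (a typical corner case)
# while the generator still randomizes the remaining fields.
print(generate_test(constraints={"burst_len": lambda v: v == 16}, seed=1))
```

In a real testbench-automation tool the constraints are declarative and the solver handles interdependent fields; this sketch only shows the division of labor between the user's constraints and the generator.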
After the verification of the core is complete, the developer now needs to include the relevant parts of his verification environment with the core itself, for the end integrator. This verification tool kit needs to perform two tasks: First, it should make sure the integrator has obeyed all the integration rules and is using the core in a legal way. Second, it should make sure the system-level tests conducted by the integrator tested all the scenarios the core developer believes are necessary to ensure proper integration.
Supplying these two capabilities as printed manuals is clearly not the best way. The core integrator may misinterpret or overlook parts of the information there. Giving the checking and coverage measurements in an executable form as part of the core itself is far better. The checks make the core "self-checking" and will issue an error whenever the integrator violates an integration rule. The coverage collection should also be executable, which allows an immediate and precise indication of the functionality the integrator has not tested yet.
Altogether, the socketization of the tool kit surrounding the core itself should consist of the following parts:
- Checkers: Interface checkers (mainly temporal) will catch integration errors, while internal checkers (of the core's internals) can denote cases where there is a real bug in the core. This makes it easier to support the core in "strange" environments.
- Coverage scenarios: These define the different scenarios the core developer believes the integrator should test, particularly the protocols and important communication scenarios between the core and the system.
- Reports and debugging utilities: Besides allowing the integrator to get more information about the internal state of the core during a test, these can also be used to supply the core developer a debugging aid when necessary. Note that these reports need not be "readable" to the integrator if there are concerns about the proprietary nature of the internals of the core.
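The first two parts of such a tool kit can be sketched together: executable rules that raise an error the moment an integration rule is broken, and a coverage record that reports which required scenarios the integrator has not yet exercised. The rule and scenario names here are invented for illustration, not from any real core.

```python
class CoreToolkit:
    """Sketch of an executable verification tool kit shipped with a core.
    Rule and scenario names are illustrative only."""

    RULES = {
        # hypothetical integration rule: a request must never be
        # asserted while reset is active
        "no_req_in_reset": lambda s: not (s["reset"] and s["req"]),
    }
    SCENARIOS = ["back_to_back_req", "req_while_busy"]

    def __init__(self):
        self.covered = set()

    def check(self, signals):
        """Interface checker: fire immediately on any rule violation."""
        for name, rule in self.RULES.items():
            if not rule(signals):
                raise AssertionError(f"integration rule violated: {name}")

    def sample(self, scenario):
        """Coverage collection: record a scenario the test has hit."""
        if scenario in self.SCENARIOS:
            self.covered.add(scenario)

    def coverage_report(self):
        """Scenarios the integrator has not tested yet."""
        return set(self.SCENARIOS) - self.covered

tk = CoreToolkit()
tk.check({"reset": False, "req": True})  # legal state: passes silently
tk.sample("back_to_back_req")
print(tk.coverage_report())              # "req_while_busy" still untested
```

The two halves complement each other: the checker tells the integrator when a rule was broken, and the coverage report tells him what he has not yet tried.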
The developer might also want to include with the core other parts of the testbench he developed. This can be useful to show that the core works correctly in a given controlled environment, and can also be used as a reference for the integrator, showing how the core should be stimulated. Additionally, integrators can reuse many of the same testbench modules in the development of their own logic.
Behavioral models and bus functional models (BFMs) are usually used in the development and verification of cores. The behavioral models can often be developed faster than the register-transfer-level code, which makes it possible to start system development sooner. The behavioral models also make the simulations run faster. Since they are easier to control than the real logic, BFMs are usually used to drive the core interfaces.
These behavioral models and BFMs can serve the same purpose for the integrator as they do for the developer. When integrators are developing their logic, they are likely to start with a BFM as well (instead of the core). Rather than having the integrators create their own behavioral models and BFMs, guessing what the correct behavior is from a specification, these should be part of the core developer's deliverables.
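A BFM's value is that the integrator drives the interface at the transaction level rather than wiggling individual wires. The sketch below shows this idea for an invented bus: the transaction fields and cycle sequence are illustrative assumptions, not any real bus specification.

```python
class BusBFM:
    """Minimal bus-functional-model sketch: one transaction-level call
    is expanded into the cycle-level activity the core expects.
    The phases (ADDR/DATA/ACK) are illustrative, not a real protocol."""

    def __init__(self, trace):
        self.trace = trace  # list collecting cycle-level activity

    def write(self, addr, data):
        # Expand one write transaction into its cycle-level sequence,
        # so the integrator never has to guess it from the spec.
        self.trace.append(("ADDR", addr))
        self.trace.append(("DATA", data))
        self.trace.append(("ACK",))

trace = []
bfm = BusBFM(trace)
bfm.write(0x10, 0xAB)  # the integrator drives transactions, not wires
print(trace)
```

Shipping such a model with the core means the integrator inherits the developer's understanding of the protocol instead of re-deriving it from a written specification.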
It all begins with the intellectual-property developer verifying his core. By using a testbench automation tool like Specman Elite, the developer can make full use of the automated testing approach to generate tests for the core, and cover all its functionality without any required assumptions on the target environment.
The testbench automation tool will generate tests for the core and inject them into the core. The test generation can be done before the simulation begins or on the fly, during simulation. The on-the-fly approach allows the stimulus to be generated just before it is injected. This technique makes it possible for the next input to depend on the current state of the core, enabling the test to reach interesting corner cases.
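The benefit of on-the-fly generation is that the next stimulus can depend on the core's current state. A minimal sketch, with invented state and opcode names, might look like this:

```python
import random

def next_stimulus(core_state, rng):
    """On-the-fly generation sketch: the next input is chosen only
    after the current core state is known, steering the test toward
    corner cases. States and opcodes here are illustrative."""
    if core_state == "fifo_almost_full":
        # Corner case: keep pushing to hit the full/overflow boundary,
        # something pre-generated tests would only reach by luck.
        return "push"
    # Otherwise, stay random within the legal operations.
    return rng.choice(["push", "pop", "idle"])

rng = random.Random(0)
print(next_stimulus("fifo_almost_full", rng))  # deterministically "push"
print(next_stimulus("idle", rng))              # random legal operation
```

A pre-generated test cannot make this decision, because the state is only known once simulation reaches that cycle.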
In addition to generating and injecting the stimuli into the core, the testbench tool should check that the core behaves according to the specifications. The two main types of checks called for are data checks and temporal checks. The data check focuses on the contents of the core's output, while the temporal check focuses on the timing rules (protocols) that are part of the specifications.
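The two kinds of checks can be sketched side by side. The data check compares output contents against an expected value; the temporal check enforces a timing rule across cycles. The request/acknowledge rule and the four-cycle latency below are invented for illustration.

```python
def data_check(expected, received):
    """Data check sketch: output contents must match the reference."""
    assert received == expected, f"data mismatch: {received} != {expected}"

def temporal_check(events, max_latency=4):
    """Temporal check sketch: every 'req' must be answered by an 'ack'
    within max_latency cycles (an illustrative protocol rule).
    `events` is a list of (cycle, event) pairs in time order."""
    pending = None  # cycle of the unanswered request, if any
    for cycle, event in events:
        if pending is not None and cycle - pending > max_latency:
            raise AssertionError(f"ack late for req at cycle {pending}")
        if event == "req":
            pending = cycle
        elif event == "ack":
            pending = None

data_check(0xAB, 0xAB)                    # matching data: passes
temporal_check([(0, "req"), (3, "ack")])  # ack within the window: passes
```

A real temporal checker evaluates such rules continuously during simulation; the sketch only shows the split between checking *what* the core produced and *when* it produced it.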
The checks created by the core's developer, along with additional checks focusing on the interface of the core, are made part of the verification tool kit that ships with the core. These checks will fire automatically when an integration rule is violated. When that happens, the simulation stops immediately and the integrator is able to debug his design. Each check in the tool kit is accompanied by a comprehensive description of the problem and its possible causes. Stopping the simulation at the exact moment of violation is also far more effective than discovering the problem only when bad data appears at the outputs of the core, usually thousands of cycles after the violation.