It was probably no surprise to anyone when the idea of reuse first started to appear in the development of chips. At first the industry turned to salvaging, or the reuse of blocks that were never intended to be reused. Later the ability to buy pre-designed blocks of functionality from third parties, which could be hooked together by the system designer, enabled huge chips to be put together in a fairly short amount of time.
New tools being introduced today make that even easier and quicker to manage. The Virtual Socket Interface Alliance (VSIA) was formed to help the industry tackle many of the technical, managerial and legal issues surrounding the silicon intellectual property (IP) industry, and its success has enabled the industry to grow faster than it would have been able to if there had been no unification in these areas.
But the IP market still has a bad name. Some still question the value of reuse, but in most cases the target of this criticism is misplaced. Designers creating IP blocks are not second-rate engineers using bad methodologies or sloppy verification techniques. In fact, in some cases, they are using the most advanced and thorough techniques available. So what is the fundamental problem?
Design is getting more efficient
Industry statistics currently show that approximately 70% of the design time is being spent in verification. Whether that exact figure is accurate is debatable, but it is clear that verification is consuming a growing share of the total development time, and that share is growing rapidly.
This should come as no surprise. Many tools have been developed which increase the productivity of the design process. Synthesis and reuse, notwithstanding the issues raised here, are the two biggest contributors. Even with an increase in complexity, the time spent in design has been declining. As IP blocks get larger, that total time will continue to decline.
But the same cannot be said for the verification side. While recently there have been some productivity improvements with the introduction of pseudo random generation and coverage based techniques, the productivity increase has been much smaller than for design. So even if design and verification efforts started off equal, the balance will continue to shift to a greater percentage of the available time being spent on verification. Something has to happen to stop this and bring both tasks back into balance.
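The coverage-based techniques mentioned above can be illustrated with a toy sketch: generate pseudo-random stimulus and track which functional-coverage bins have been exercised, stopping when the coverage goal is met. The transaction fields and bin definitions here are invented for illustration; real flows use constrained-random generators and covergroups in a hardware verification language.

```python
import random

# Illustrative sketch of coverage-driven random verification: drive
# random bus transactions and record which coverage bins
# (operation x burst length) have been hit. All names are made up.

OPS = ["read", "write"]
BURSTS = [1, 4, 8, 16]

def random_transaction(rng):
    """Pseudo-random stimulus: one bus transaction."""
    return (rng.choice(OPS), rng.choice(BURSTS))

def run_until_covered(seed=0, max_txns=10_000):
    """Drive random transactions until every coverage bin is hit;
    return how many transactions that took."""
    rng = random.Random(seed)
    goal = {(op, b) for op in OPS for b in BURSTS}
    hit = set()
    for n in range(1, max_txns + 1):
        hit.add(random_transaction(rng))
        if hit == goal:
            return n       # transactions needed for full coverage
    return None            # coverage goal not reached

print(run_until_covered())
```

The point of the sketch is the productivity argument: the engineer writes the coverage model once and lets randomization find the stimulus, rather than hand-writing a directed test per bin.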
IP needs to be verified
When blocks were salvaged from previous designs and modified for the current design, there was basically no reuse on the verification side. Even if the testbenches for the original block existed and could be found, it was unlikely that they would be reusable for the modified block.
In the early days of the third party IP industry, changes and customizations were not uncommon. This again meant that verification of the blocks had to be performed from scratch. As the industry has matured, changes are now the exception rather than the rule, but there is still a lack of trust between the vendor and user when it comes to knowing if the blocks will work correctly.
These problems stem from a range of issues, such as the IP provider having to make sure that the block works with a multitude of EDA tool flows, in any number of silicon technologies, in any combination with other blocks, or is being used in ways that were not considered by the original developers. Complete verification across all of these variables may take longer than the viable market window for the IP block. So just when the IP block is becoming stable, nobody wants it any more, as it has been replaced by newer, faster, better blocks.
The compound problem
This means the industry faces a double problem. Verification efficiency is slipping compared to design, and reuse currently requires re-verification for all IP blocks. Today there is no clear answer to either of these problems, but solutions are necessary for the continued health of the whole industry.
To start with, it is necessary to partition the verification problem into multiple buckets rather than the "do it all at once" approach employed today. It was a similar separation that enabled most companies to stop performing gate level simulation. The issues of clock-level timing and functionality were separated from implementation functionality and sub-clock timing. Sub-clock timing was checked with static timing analysis, clock-level functionality by simulation, and the consistency of functionality between the two domains by formal equivalence checking.
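The idea behind equivalence checking can be shown with a deliberately tiny sketch: compare a behavioral "specification" of a block against a gate-level-style "implementation" over every input pattern. Real tools do this symbolically with BDDs or SAT rather than by enumeration; the full adder here is just a stand-in example.

```python
from itertools import product

def spec(a, b, cin):
    """Full adder, behavioral description: add and split the result."""
    s = a + b + cin
    return s & 1, (s >> 1) & 1          # (sum, carry-out)

def impl(a, b, cin):
    """Full adder, gate-level-style description: XOR/AND/OR network."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

# Brute-force equivalence check over all 2^3 input patterns.
# Feasible only because the block is tiny; real checkers are symbolic.
assert all(spec(*v) == impl(*v) for v in product([0, 1], repeat=3))
print("spec and impl are equivalent")
```

Once equivalence between the two descriptions is established this way, functional simulation only needs to be run on the faster, higher-level one.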
The figure below shows a highly simplified design flow together with the verification step that needs to be performed after each stage. I am not aware of any company today that performs all of these steps at every stage, but almost none of them can be omitted completely. When any portion of the verification is performed at a different stage in the development process, inefficiencies are introduced.
Figure 1 Required verification steps
For example, functional verification can and should be performed as soon as high level models have been developed at the system level. If such models are not developed, then it is likely that functional verification will be performed on the implementation models.
While this is possible, it is very inefficient as there is more detail than needed in the models to perform this task. As a result, simulator performance will be slower. In addition, the identification of functional errors is delayed, making them more expensive to fix.
This leads us to part of the solution that's necessary, which is that models at the right abstraction level must be provided along with the IP blocks. It is no longer good enough to just provide an RTL implementation model. Higher level functional models should be provided and made available with far fewer legal constraints, so that they can be tried before the IP is bought.
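The abstraction gap can be made concrete with a toy example. A transaction-level model of, say, a FIFO captures only the functional contract (ordering and capacity), while an RTL model would add clocking, handshaking and reset detail that functional verification does not need, and that only slows the simulator down. A minimal sketch, with all names illustrative:

```python
from collections import deque

class TLMFifo:
    """Transaction-level FIFO model: captures the functional contract
    (ordering, capacity) with no clocks, resets or handshakes."""

    def __init__(self, depth):
        self.depth = depth
        self._q = deque()

    def push(self, item):
        if len(self._q) >= self.depth:
            raise OverflowError("FIFO full")
        self._q.append(item)

    def pop(self):
        if not self._q:
            raise IndexError("FIFO empty")
        return self._q.popleft()

# Functional verification against the high-level model is direct and
# fast: check ordering and capacity without simulating a single cycle.
fifo = TLMFifo(depth=4)
for v in [10, 20, 30]:
    fifo.push(v)
assert [fifo.pop() for _ in range(3)] == [10, 20, 30]
```

A model like this is also something a vendor could hand out with few legal strings attached, since it reveals nothing about the implementation.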
First steps to fix the problems
The incorporation of any block into a design requires planning at many stages. One of the most important plans is the verification plan. This outlines what you are going to verify and how. It identifies the important pieces of functionality that must be present for the design to succeed.
Once this has been constructed, relevant parts of it should be shared with your IP providers. This gives them an insight into how you intend to use the block, which may signal pending problems to them.
Another way to uncover such problems is to run the vendor's testbench against your verification plan. Are there holes that need to be filled, possibly indicating functionality that you intend to use but that has not been verified? Of course, if the IP vendor cannot provide you with a testbench, you should probably look elsewhere for your IP.
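Mechanically, this cross-check amounts to a set difference: which items in your verification plan does the vendor's testbench never exercise? A hypothetical sketch, with all item names invented for illustration:

```python
# Cross-check a verification plan against the coverage items a vendor
# testbench reports, to expose untested functionality you intend to
# rely on. All item names below are made up.

verification_plan = {
    "burst_read", "burst_write", "single_read",
    "error_response", "low_power_entry",
}

vendor_testbench_coverage = {
    "burst_read", "burst_write", "single_read", "single_write",
}

# Features you plan to use that the vendor testbench never exercised:
holes = verification_plan - vendor_testbench_coverage
print(sorted(holes))   # → ['error_response', 'low_power_entry']
```

Each hole is either functionality you must verify yourself or a question to put back to the vendor before committing to the block.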
Make sure you identify the objectives of verification and apply the right levels of abstraction to it. Very few designs require detailed timing in order to ensure they are functionally correct, and in the places where this is true it can often be overcome by careful modeling. Use RTL models only for verifying the implementation and try using these models in the context of the system models as well. This can identify problems with system integration, long before that phase would normally start.
Pay as much attention to verification reuse as you do to design reuse. A fair amount of third party verification IP is available today, and it should always be evaluated before developing testbenches internally. Reuse here will be more beneficial than on the design side.
There is not, or should not be, a brick wall between you and your IP provider. It is a partnership that you are entering into. As such you need to do some work to understand the flows that they use for the development and verification of the IP.
Where there are mismatches with your flow, it should prepare you to expect problems in those areas. If those mismatches are too large, or the IP has never been implemented in a technology similar to the one that you intend to use, you must either expect to have a certain level of problem, or look to a different vendor.
While I have no financial reason to favor the large IP providers, you probably do. Minimizing the total number of vendors not only reduces the total effort for legal and licensing, but also means more consistency in the way in which the IP has been packaged and what you can expect to get from them.
In addition, if the blocks of IP are related, it is more likely that they have been verified to work together. This extends all of the way to obtaining a complete platform as IP and then only adding the unique content that would set your design apart from your competitors.
Interfaces form the boundaries between the major components of a system, and are the places where many problems hide, often undiscovered until system integration occurs. Reducing the number of unique interfaces can help by reducing the total knowledge required, maximizing the value from available verification IP and making the system more modular. Consider the case with the Synopsys DesignWare library, where you can get any number of different processors, but they all come with an AMBA bus interface.
The old adage of "keep it simple, stupid" (KISS) holds true for the IP industry as well. While we often think of silicon area as being free, overly complex IP blocks not only consume valuable power, but usually result in lower quality levels. This is because the likelihood that the block has been verified in the configuration you want to use goes down as the total number of possible configurations goes up.
I have seen and heard of many cases where an IP vendor claims a certain level of functional coverage, or that the core has been implemented in silicon by X number of customers, and yet the first test that a new customer runs results in failure. Assume that if your intended configuration has not been verified, then it does not work. This may mean that a simpler block provides a better value if you do not intend to use all of the functionality of a more flexible block.
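The arithmetic behind the configuration problem is worth spelling out: the number of distinct configurations grows as the product of the choices for each parameter, so even a modestly parameterized block can have more configurations than any vendor will ever verify. The parameter names and counts below are invented for illustration:

```python
from math import prod

# Hypothetical parameterizable block: number of legal choices per
# parameter. All names and counts are made up for illustration.
parameters = {
    "data_width":   4,   # e.g. 8/16/32/64 bits
    "fifo_depth":   6,
    "num_channels": 8,
    "ecc_mode":     3,
    "endianness":   2,
}

# Total distinct configurations is the product of the per-parameter
# choice counts: 4 * 6 * 8 * 3 * 2.
total = prod(parameters.values())
print(total)   # → 1152
```

Five parameters already yield over a thousand configurations; verifying each one even briefly would dwarf the effort spent on the block itself, which is why the configuration you happen to need may never have been run.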
Reuse without re-verification is still a concept that is perhaps off in the future, but there are many things that can be done to minimize the impact on the total time it takes to re-verify IP. With some IP vendors, particularly in the space of processors, the industry is becoming comfortable with the usage of higher abstraction models for functional verification.
This is a good start but the rest of the industry must learn how to achieve this same level of trust. Only then can we expect to see a net reduction in the time spent on verification and allow more of the total design effort to be spent on creativity.
Brian Bailey is an independent consultant helping companies improve their verification efficiency. He can be reached at email@example.com.