Navigating the Reef: Supplying IP That Works For You and Your Customer
by Steve Deeley, Zarlink Semiconductor
Life is tough for IP suppliers. The IP market value has passed the $1 billion mark in annual revenues, but a recent analyst survey showed just 10 companies dominate 78% of the market, leaving over 130 other suppliers averaging less than a paltry $2 million each in sales.*
To make matters more challenging, customer attitudes are hardening. Faced with rising manufacturing costs, IP customers are paying closer attention to the financial impact of errors and support delays. To win business, suppliers must demonstrate a track record of right-first-time integration and well-resourced support, or offer a product so compelling that customers can justify the integration risk.
In a fiercely competitive market, how can IP providers maximize their chances of success? It's not always appreciated that designing functions for external reuse requires an engineering standard above what is needed simply to make a function work. Along with a carefully considered business model and resourcing that can survive a long return-on-investment cycle, best-in-class engineering practices and a design-for-debug approach often make the difference between success and failure.

Design for Debug
While most companies use CAD tools and manufacturing processes from a relatively small number of providers, the integration environment of the IP customer and supplier is rarely identical.
Whether it's a different release version of the main simulator or a preference for a different on-chip bus, there will normally be subtle differences between the design and integration environments. These minor differences can lead to significant discrepancies in perceived performance, to the point where known-good IP fails in a new application.

Types of IP error
IP errors can be classified into four basic categories: Structural, Contextual, Procedural and Assumptive. While structural errors can occur anywhere, the other three are primarily associated with the integration of data into a new environment. As such, they are a common cause of inexplicable failures in previously good IP.

1. Structural Errors
Structural errors arise from the structure or content of your RTL or software code. The classic example is the missing semicolon at the end of a line. This type of error can be readily addressed with code checking, coverage and linting tools. Structural errors typically occur where a vendor hasn't rigorously checked sections of code not used in their in-house applications, or where regression tests have not been run to confirm the impact of code changes.
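Many structural checks are simple enough to automate in-house. As a minimal, hypothetical sketch (the rule and the sample lines are invented for illustration), a lint pass that flags Verilog assign statements missing their terminating semicolon might look like:

```python
import re

# Hypothetical house rule: a Verilog assign statement must end in a semicolon.
RULE = re.compile(r"^\s*assign\b.*[^;]\s*$")

def lint(lines):
    """Return the 1-based line numbers that violate the rule."""
    return [n for n, line in enumerate(lines, 1) if RULE.match(line.rstrip())]

src = [
    "assign q = d;",
    "assign ready = valid & !stall",  # the classic structural error
]
print(lint(src))  # -> [2]
```

Real linting tools apply hundreds of such rules; the point is that a consistent house coding style makes each rule trivial to express and enforce.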
To protect against structural errors, use a house coding style. In many cases it doesn't matter which rules you choose, so long as your implementation is consistent. Standardized coding will make debug, and the creation of rulesets for automated checking tools, considerably easier. As we will see, it also makes it easier to address the sources of other error types.

2. Contextual Errors
Contextual errors result directly from the environment in which code is used or released. Clashes of reserved names and environment variables are common examples, as are softlinks that don't resolve in the new environment. Perhaps your customer supports only VHDL87 while you use VHDL93, or your four-metal-layer design just got raised to six metal layers by integration with other data.
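Some of these environment clashes can be screened for mechanically before a release ships. The sketch below (the whitelist and variable names are hypothetical) compares a build environment against what a clean machine would provide, flagging anything inherited from a personal setup:

```python
import os

# Hypothetical whitelist: what a clean, no-frills machine provides.
ALLOWED_VARS = {"PATH", "HOME", "USER", "SHELL", "TERM"}

def find_inherited_setup(env=None):
    """Return environment variables outside the whitelist -- likely
    inherited from a .cshrc or .ini file, and a source of contextual errors."""
    env = os.environ if env is None else env
    return sorted(set(env) - ALLOWED_VARS)

# A simulated environment that has quietly picked up a tool-specific setting:
dirty = {"PATH": "/usr/bin", "HOME": "/home/ip", "LM_LICENSE_FILE": "1717@srv"}
print(find_inherited_setup(dirty))  # -> ['LM_LICENSE_FILE']
```

Running such a check as part of the release build gives early warning that the product only works because of something the customer's machine will not have.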
To avoid contextual errors and make debug easier, keep things simple. Use only basic language constructs in any code, and build and test your product releases on a straight-out-of-the-box machine with a no-frills software installation. Ensure that you aren't inheriting setups from .cshrc or .ini files. If your IP installs and runs correctly on this test machine, then you know it is independent of the software environment. While it won't necessarily prevent all issues, this simple environment makes it easier to localise the source of any error.

3. Procedural Errors
These are an IP provider's nightmare. Procedural errors depend not only on the context in which your data finds itself, but also on the tools and procedures applied, and the order in which operations are performed. Data translations between tools are a common cause, which is why it is best to minimize the number of intermediate views you require. Examples would be data packed with one version of tar and unpacked with different options, or module names being truncated by tools with a name-length limit.
Procedural errors are often the cause of resource-intensive "works for me" disputes between supplier and customer. They may not be errors at all, merely discrepancies with no functional effect, but your customer can't be sure until the bugs have been found and examined. Because they can be sequence-dependent, procedural errors are almost impossible to predict and extraordinarily difficult to debug.
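Because sequence matters, it is worth checking mechanically that a customer has applied patches in release order. A minimal sketch, with hypothetical patch names:

```python
def missing_patches(applied, history):
    """Given the ordered release history, every patch up to the newest one
    the customer applied must also be present.  Return any skipped patches."""
    applied = set(applied)
    newest = max((i for i, p in enumerate(history) if p in applied), default=-1)
    return [p for p in history[:newest + 1] if p not in applied]

# The customer applied A and C but forgot B:
print(missing_patches(["A", "C"], ["A", "B", "C"]))  # -> ['B']
```

A check like this, run against the maintained patch history, turns a sequence-dependent mystery into a one-line diagnosis.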
The key to managing procedural errors is to design for debug, and control your data closely so you can quickly narrow down the location of the error. Regression testing is vital, as is a house coding style along with configuration control (versus just version control). Limit the number of simultaneous changes you make to your code between regression test runs, and never assume that module changes only have impact at the module level. Pay close attention to patches: always maintain a patch history and be very clear about which patches are cumulative. Ideally, all your patches should be cumulative, so you aren't left wondering what happens when your customer applies A and C, but forgets B.

4. Errors of Assumption
Arguably this fourth category of errors fits into the other three, but it is worth looking at separately because these errors are equally avoidable and equally unpredictable.
Inevitably, engineers make assumptions based on their own work environment. Often your team is unaware that they are building those assumptions into their product, or that your customer's default assumptions may differ from yours. A simple example is the assumed endianness of a bus. Do your documents always specify which way around it is? What happens if your customer works the other way around?
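The cost of a wrong endianness assumption is easy to demonstrate. Using Python's standard struct module, the same 32-bit value packed under one convention and read back under the other silently becomes a different number:

```python
import struct

value = 0x12345678  # a 32-bit value crossing the bus

big = struct.pack(">I", value)     # big-endian byte order
little = struct.pack("<I", value)  # little-endian byte order
print(big.hex(), little.hex())     # 12345678 78563412

# The supplier's bytes read under the customer's opposite assumption:
wrong = struct.unpack("<I", big)[0]
print(hex(wrong))  # 0x78563412 -- no error raised, just a wrong answer
```

Nothing fails loudly; the data simply means something different on the other side, which is exactly why undocumented assumptions are so dangerous.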
Avoiding assumptive errors is difficult but not impossible. A house style guide will set these things out clearly, and giving a copy to your customer (and getting theirs in return) will help map any differences between approaches. Engineers will often uncover hidden assumptions simply by documenting products carefully.
The strongest weapon against assumptive errors is the external audit, but it must be handled sensitively. At best, an audit is like a university exam. At worst, staff can feel that their jobs are at risk. However, whether done by engineers from another part of the company or consultants, an external review will bring a fresh perspective and often uncover hidden assumptions.
More than just debug
Even well-engineered IP can unmask errors when placed in a new environment. Design for debug is a good start, but IP also has to be structured and delivered so that it:
- Delivers the expected information
- Is permanently and consistently defined
- Is clearly and unambiguously described
- Is version controlled
- Is well documented
- Is traceable / has an audit trail
While these rules may seem obvious, they aren't always achieved. For example, consider the deliverable containing your top level of code. What are you going to call it? "Top" or "main" are obvious choices, but these terms almost certainly exist somewhere in your customer's design, causing problems when the IP is integrated. Make your name too long, and some tools will truncate it, again potentially causing a clash.
It's also a natural assumption that a named deliverable should always have the same, consistent functionality. But what if that functionality is achieved in a different way in different products?
For example, assume you have two CPU products, Fred and Bill. Each has a watchdog timer, in a module called Watchdog. But Fred's watchdog is 16-bit, while Bill's is 24-bit. What happens if your customer puts both CPUs down on the same chip, or you install the CPUs in a common directory used by different projects? Equally, don't name deliverables after internal release names and similar schemes. Getting your data in deliverables titled MPX-2217-MNP-00112-B isn't helpful when the project disk crashes and you need to restore data quickly.
Deliverables and patches that are common to multiple products can also cause difficulties. Consider Fred and Bill again:
- Fred 1.0 uses a deliverable called Testbench at version 1.0 for verification.
- Now we introduce Bill 1.0. We update Testbench to version 2.0 and make it support both Fred 1.0 and Bill 1.0.
- Now we bring out Bill 2.0. It's backwards compatible with Bill 1.0 but not with Fred, so we also introduce Testbench version 3.0, which only supports Bill 2.0.
This simple situation occurs frequently with companies supporting multiple IP products. But the impact is significant:
- We now have six distinct, separate product combinations to support (fred1/testbench1, fred1/testbench2, bill1/testbench2, bill1/testbench3, bill2/testbench2, bill2/testbench3). Each may react differently and require separate regression testing.
- Users of Fred now have to know that Testbench revisions 1 and 2 apply to their product, but revision 3 does not. If Testbench 2 needs a fix for Fred, what version number do we give it?
- New Bill users start with a Testbench deliverable at revision 2. What happened to revision 1?
- If a customer has both Bill and Fred, and the two deliverables unpack into a common file area, what happens to Fred's testbenches when the new version of Bill arrives?
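The support burden above can be made explicit by simply enumerating the combinations. A small sketch, using the (hypothetical) product names from the example:

```python
# The six supported CPU/testbench pairings from the Fred and Bill example:
pairs = [
    ("fred1", "testbench1"), ("fred1", "testbench2"),
    ("bill1", "testbench2"), ("bill1", "testbench3"),
    ("bill2", "testbench2"), ("bill2", "testbench3"),
]

# Each pairing is a distinct configuration needing its own regression run.
print(len(pairs))  # -> 6

# Which testbench revisions must a Fred user track?
fred_tbs = sorted({tb for cpu, tb in pairs if cpu.startswith("fred")})
print(fred_tbs)  # -> ['testbench1', 'testbench2']
```

Keeping such a compatibility table under configuration control, and generating the customer-facing documentation from it, is one way to stop the matrix drifting out of date as revisions multiply.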
When considering deliverable and patch naming and organisation, you should also consider their scope. If your customers project only needs a fix to one particular bug, sending a cumulative patch including fixes for other issues adds risk to their design and may force them to run otherwise unnecessary regression tests on code.
Similarly, a customer needing a one-line change to their testbench may not be impressed to receive a data release that includes four gigabytes of unwanted layout data. The size and scope of each deliverable and patch should be a balance between design coverage, the need for regression testing, and providing the data the customer actually needs.
Finally, you must choose a directory structure that suits integration. It is tempting to adopt the structure used by the IP design team. However, an ill-considered structure can be confusing or result in failure of the data in some tools. A directory structure that is narrow and deep leads to very long path names that may be truncated without warning or rejected. By contrast, wide and shallow data structures promote an increased reliance on softlinks that may not be properly maintained.
To strike the right balance, consider segregating data into technology-specific and technology-independent types. That way, views supporting a new process can be introduced without reissuing the whole structure. The second principle that supports reliable installation is to classify data hierarchically:
a) By technology
b) By CAD language (Verilog / VHDL)
c) By CAD tool (e.g. Verilog-xl, VCS, NCSIM)
d) By version
e) By data type
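As an illustration of that classification (the technology, tool and directory names here are invented), the release tree might be laid out like this, with parallel tool directories that are easy to compare:

```python
from pathlib import Path
import tempfile

# Hypothetical release tree: technology / language / tool / version / data type.
layout = [
    "tsmc013/verilog/vcs/v1.2/netlist",
    "tsmc013/verilog/ncsim/v1.2/netlist",
    "common/docs",
]

root = Path(tempfile.mkdtemp()) / "ip_release"
for rel in layout:
    (root / rel).mkdir(parents=True)

# Parallel tool directories sit side by side, so users can look across them
# to confirm every tool produced the same results from the same source:
print(sorted(p.name for p in (root / "tsmc013" / "verilog").iterdir()))
# -> ['ncsim', 'vcs']
```

Note the technology-independent data lives outside the technology branch, so a new process view can be added without reissuing the rest of the structure.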
While it may be overkill for many applications, the best approach allows quick comparisons for verification. For example, users should be able to look easily across parallel directories to confirm that all the tools have produced the same results from the same source.

Summary
Designing and delivering products for reuse requires methods and procedures that go beyond those of just good design. To reap the rewards of today's challenging IP marketplace, suppliers must be prepared to meet these exceptional demands, and resource accordingly. It's no longer a question of simple reputation: it's now a question of survival.

Steve Deeley is Director of IP with Zarlink Semiconductor
* source: Gartner Dataquest