System-on-chip designers are always looking for ways to improve the efficiency of their creations, striving to reduce costs, decrease design time and shrink the overall size of their SoC designs. But one often overlooked area that can be dramatically improved is the actual management of the operational flow of the design.
An optimized automatic flow-management system is crucial to the future use of intellectual property (IP) in SoC designs. Because of the increased complexity of the SoC designs themselves, it is no longer feasible to entrust the management of the design flow to antiquated technologies such as scripts and makefiles. The pressure to maintain a competitive edge and the urgency created by shrinking market windows make the use of automated design-flow management a necessity.
Most think of the flow only at a very high level: which tools are we going to use, how do we get to timing convergence, how do we interface synthesis and layout, and so on. But if you delve deeper, within each block in a high-level flow there is another flow, much more complex, resulting from the operational interaction among the thousands of files and jobs that are actually required to perform each of the tasks in SoC design.
One of the characteristics of an SoC design is that it encompasses multiple design disciplines: analog, digital and software components, proven as well as new, along with experimental methodologies. With SoC, all of these once disparate components and methodologies interact with one another.
A change to some files may require corresponding changes to other files associated with the software. The lack of a complete representation of the operational relationships between the files (that is, the flow) leaves open the possibility that some changes may not propagate to all dependent files. This can lead to degradation of the design quality and to wasteful executions of expensive EDA tools with obsolete data. Understanding and optimizing the operational flow and managing the process is going to be the key to successful SoC design and will enable even greater use of IP.
How complex are SoC designs from the point of view of the operational flow? We can establish some lower bounds by considering partial designs for which we have statistics. On the IP provider side of the fence, a typical flow for library characterization and QA for a library with 400 cells for three PVT (process, voltage, temperature) corners calls for around 150,000 files and 40,000 distinct tool invocations, for a total of a few weeks of CPU time. In another class of subprojects involving test vector generation, there can be anywhere from 10,000 to 100,000 files and tens of thousands of jobs.
With hundreds of thousands and perhaps millions of files in a design, who can confidently say that all files are up to date and that all verification tasks have been performed successfully? That kind of data will be available only from an automatic system that understands the operational flow and the interdependencies among the files and the tools.
Now what happens to that design if a designer, working in VHDL, decides that he or she wants an extra inverter? That change will have to propagate through synthesis, to the gate-level netlist, to layout, to verification. If the change is done manually, as is the case with most designs today, the design is more vulnerable to mistakes, which in turn cost money and time. By understanding the flow and the interdependencies, the designer can efficiently and correctly propagate the change automatically, only to the part of the design that depends on the changed file, thus saving time by avoiding wasteful execution of tools.
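The selective propagation described above can be sketched as a walk over a dependency graph: mark everything downstream of the changed file as stale and re-run only those steps. The file names and flow stages below are illustrative, not taken from any particular tool.

```python
# Minimal sketch of selective change propagation over an
# operational-flow dependency graph: only files downstream of a
# changed source are considered stale. File names are hypothetical.
from collections import defaultdict

class FlowGraph:
    def __init__(self):
        self.deps = defaultdict(set)   # file -> files derived from it

    def add_edge(self, source, derived):
        self.deps[source].add(derived)

    def downstream(self, changed):
        """All files that transitively depend on `changed`."""
        stale, stack = set(), [changed]
        while stack:
            node = stack.pop()
            for child in self.deps[node]:
                if child not in stale:
                    stale.add(child)
                    stack.append(child)
        return stale

g = FlowGraph()
g.add_edge("top.vhd", "netlist.v")       # synthesis
g.add_edge("netlist.v", "layout.def")    # place and route
g.add_edge("layout.def", "signoff.rpt")  # verification
g.add_edge("lib.db", "netlist.v")        # library also feeds synthesis

# Changing the VHDL invalidates synthesis, layout and verification
# outputs, but not the untouched library.
print(sorted(g.downstream("top.vhd")))
# -> ['layout.def', 'netlist.v', 'signoff.rpt']
```

With such a graph, the tools to re-invoke after an edit are exactly the producers of the stale files, rather than the whole flow.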
At Runtime Design Automation we are working with customers who are implementing a technique for automatic flow management we call runtime tracing. Essentially, it allows the design team to manage the thousands of files necessary to execute a large hardware or software project. For SoC design, runtime tracing is especially useful because it provides users with a complete graph of all the dependencies among the files in the design and supports automatic documentation of the operational flow, network computing and intelligent propagation of changes.
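The core idea of runtime tracing can be illustrated in miniature: wrap each tool invocation, observe which files it reads and writes, and accumulate those observations into a dependency graph, so the flow documents itself. This is a hedged sketch of the concept only; the `run_traced` helper and its file lists are hypothetical and do not reflect any actual product API, and a real tracer would intercept file-system calls rather than take declared lists.

```python
# Hypothetical sketch of runtime tracing: each job's observed file
# reads and writes become edges in the operational-flow graph.

def run_traced(job, reads, writes, graph):
    """Simulate running `job` and log its observed file accesses.

    In a real tracer the `reads` and `writes` sets would be captured
    by intercepting the tool's file-system activity, not declared.
    """
    for inp in reads:
        for out in writes:
            graph.setdefault(inp, set()).add(out)
    return job

graph = {}
run_traced("synthesis", ["top.vhd", "lib.db"], ["netlist.v"], graph)
run_traced("place_route", ["netlist.v"], ["layout.def"], graph)

# The traced graph now records the flow with no hand-written
# makefile: the VHDL source is known to feed the netlist.
assert graph["top.vhd"] == {"netlist.v"}
```

Once captured this way, the graph stays consistent with what the tools actually did, rather than with what a script author believed they would do.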
Wherever we have applied our technology, we have realized a substantial reduction in execution times vs. the traditional methods, while giving the designer a sense of powerful control over even the most complex flow. In the case of library characterization and QA, we have consistently reduced the time from two weeks to one or two days, depending on the availability of computing resources, that is, CPUs and software licenses.
Until now, flow management has basically been overlooked. Before SoC design, manual techniques were adequate. However, with the current trend toward larger and more complex designs and the increased use of IP, design-flow management is one area where designers can not only increase efficiency but also reduce uncertainty about the design itself. When getting to market first with the best product is necessary, designers need all the help they can get to ease the burden of complex SoC design.