Libraries, memory IP raised from the dead
By Ron Wilson, EE Times
March 15, 2004 (12:26 p.m. EST)
SAN MATEO, Calif. - Announcements by Prolific Inc. last week and Artisan Components Inc. today appear to signal a sudden revival in third-party logic-cell libraries and memory intellectual property.
Indeed, industry insiders suggest that these once-moribund businesses may become pivotal in the era of growing cooperation between design and fabrication. With much of the outcome of a design, both in specifications and in yield, riding on decisions made inside the libraries and memory structures, the newly resuscitated library market is taking on a whole new significance.
The IC industry's sad experience at 130 nanometers, the volatility of design rule files and the rising complexity of recommended rules and yield calculations are driving chip design teams to take back control of their libraries. "We are seeing a lot of people now who feel they need to control their libraries," said Ewald Detjens, CTO of Circuit Semantics, which supplies modeling tools that are used in the Prolific flow. "The stability issues are certainly contributing to that."
The new offering announced by Prolific expands the library-generation-tool vendor's reach into custom library development for a wider range of design teams. Vice president of marketing Dan Nenni said the Newark, Calif., company's library-development flow turns a set of architectural requirements into a library of characterized cells in a more or less pushbutton process. Now, for design teams that don't wish to be involved in developing the architectural specifications necessary to drive the process, Prolific will offer predefined input files and will consult on optimizing them to the team's specific needs. In an even more turnkey approach, Prolific will execute the flow itself, providing in effect a custom library.
Artisan, for its part, will announce today a completely new design for embedded-memory blocks. Because low operating voltages have rendered obsolete the 15-year-old SRAM array architectures used in much of the industry, Artisan redid virtually everything: the bit cell, banking architecture, floor planning, sense amp structure, bit line routing, layer selection and leakage control. The result, claims the Sunnyvale, Calif., company, is a memory capability for 90-nanometer design that beats the competition by 40 percent on speed and cuts power in half.
The renewed action in a sector once taken for dead reflects not so much an increase in design activity as a shift in the interface between fabless design teams and their foundries. The combination of freeware libraries on the Web and essentially free basic libraries provided by most foundries all but extinguished the sale of cell libraries for the 130-nm process node. Similarly, little attention had been lavished on embedded-memory IP.
But the dismal role assigned to library vendors in the 130-nm generation may have been a premature burial, from which the chip industry is now desperately trying to dig its way out. "A lot of people used foundry-provided or freeware libraries at 130 nm," Prolific's Nenni observed. "And frankly, the yields at 130 nm have been horrible. That experience has been a major factor in the renewed interest in independent library development." Indeed, "Early yields in any new process aren't great," said Artisan Components' president and CEO, Mark Templeton. "In fact, at 130 nm they were terrible."
It wasn't that the free libraries were wrong, Nenni said. The problem was the result of a development that first became apparent at 130 nm: the separation of design rule files into two sets.
"In previous processes, there had always been just one set of minimum design rules," Nenni explained. "You followed them, or you were wrong." But for the first time at 130 nm, average design teams were exposed to a second set of rules--the so-called "recommended" ones. The fine print said that if you followed the minimum rules, the design would work. If you followed the recommended rules in addition, it would yield. Most of the free libraries were designed to the minimum rules and did not include provisions to relax minimum spacings to improve yield.
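The distinction between the two rule sets can be sketched in a few lines. This is a hypothetical illustration, not any foundry's actual rule deck: the layer name, spacing values, and three-way classification are all invented for the example, but they mirror the semantics the article describes, where the minimum rule separates "works" from "wrong" and the recommended rule separates "yields" from "yield risk."

```python
# Hypothetical spacing check against two rule sets (invented numbers).
MIN_METAL1_SPACING = 0.18  # minimum rule: below this, the design is wrong
REC_METAL1_SPACING = 0.22  # recommended rule: below this, yield suffers

def classify_spacing(spacing_um: float) -> str:
    """Classify a metal1 spacing against minimum and recommended rules."""
    if spacing_um < MIN_METAL1_SPACING:
        return "violation"             # fails the minimum rules
    if spacing_um < REC_METAL1_SPACING:
        return "works-but-yield-risk"  # legal, but below the recommended rule
    return "yield-safe"                # meets both rule sets

for s in (0.15, 0.19, 0.25):
    print(s, classify_spacing(s))
```

A library built only to the minimum rules lives entirely in the middle band: every cell is legal, and every cell is a yield risk.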
The problem was complex, insiders say. To begin with, the 130-nm recommended rules arose as a result of process learning; it's not easy to infer design-for-yield guidelines from the process design before you run volume wafers. So necessarily, the recommended rules remained in flux after the minimum rules began to jell, and long after they would have been needed for design of the libraries. And often, the recommendations took away some of the benefits that had drawn early adopters to the process in the first place.
"Foundries compete on what's in the minimum-rules file," Artisan's Templeton said. "They can get a little overwrought at times." The result was a generation of 130-nm designs, based on just the minimum rules, that provided all the density and performance of which the process was capable, but at calamitous yields.
Observers say that savvy design teams learned from the 130-nm experience and don't intend to repeat the mistake. "For one thing, when we look at the minimum rules for a 90-nm process, we think about the equipment the foundry actually has in place," Templeton said. "Sometimes we decide that a particular rule just isn't a prudent risk, and we will use something more conservative."
But 90-nm processes are making the question more complex than ever. "Even the minimum rules on 90 nm are still in flux," said Virage Logic CEO Adam Kablanian. "And since there are so few designs in anything like volume production on 90 nm right now, they may continue to be in flux for several more years."
In addition, he said, "the recommended rules have become much more complex. Instead of being simple deterministic statements, they are becoming statistical. They define trade-offs, not guidelines. And we don't have automatic tools like rule checkers to work with that kind of information."
Moreover, the recommended rules can quietly take away all the advertised benefits of the new process. "If you follow all the recommended rules, you could end up with a design that is 30 or 40 percent larger than you started with," Prolific's Nenni said. Designers need some way of estimating the effect on area, power, performance and yield when they implement a particular subset of rules in a particular block, a capability that Prolific's library generator can offer, at least at the library level.
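The kind of subset-level estimate Nenni describes might look like the following back-of-the-envelope model. Everything here is an assumption for illustration: the rule names, the per-rule area penalties and yield gains, and the simple multiply-area/add-yield model are invented, not Prolific's actual method or any foundry's data.

```python
# Invented model: each recommended rule carries an estimated area penalty
# (fractional growth) and a yield gain; applying a subset compounds the
# area penalties and accumulates the yield gains, capped at 100 percent.
RULES = {
    "widen_metal_spacing": {"area_penalty": 0.12, "yield_gain": 0.04},
    "double_vias":         {"area_penalty": 0.08, "yield_gain": 0.06},
    "relax_poly_pitch":    {"area_penalty": 0.10, "yield_gain": 0.02},
}

def estimate(subset, base_yield=0.70):
    """Return (area growth factor, estimated yield) for a rule subset."""
    area, y = 1.0, base_yield
    for name in subset:
        rule = RULES[name]
        area *= 1.0 + rule["area_penalty"]
        y = min(1.0, y + rule["yield_gain"])
    return area, y

area, y = estimate(["widen_metal_spacing", "double_vias"])
print(f"area growth {area:.2f}x, estimated yield {y:.2f}")
```

Even this toy model shows the trade-off: applying all three rules inflates area by roughly a third, which is the 30-to-40-percent growth Nenni warns about, so teams must pick the subset whose yield gain justifies its area cost.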
In fact, the industry is probably in the midst of moving from rules-based design-for-yield to a model-based approach, University of California, San Diego, professor Andrew Kahng suggested at a recent conference. But that shift is a long way from complete.
"We haven't seen a third-party tool that will analyze the trade-offs in the recommended rule set for you," Virage's Kablanian said. "You have to use your judgment." Templeton agreed, but added a broader statement. "Even with vital issues like substrate noise, foundries often don't have the level of characterization you'd like, especially if you are pushing performance in a low-voltage environment. Sometimes the only way to head off problems is to rely on your design experience."
By partnering with a library supplier instead of using default libraries, design teams can know exactly when and how their libraries will be updated to reflect changes in the minimum rules. They can also work with the supplier to understand the implications of the growing and evolving recommended-rules files, giving them input into the yield trade-offs. Those abilities lie beyond most design teams in isolation, but they are becoming increasingly vital, according to many in the industry.
"IDMs and some of the largest fabless companies still have the level of silicon expertise necessary to make these decisions for themselves," Kablanian said. "But with the emphasis on cost control and outsourcing, it is becoming increasingly rare in fabless semi startups. And it is very rare in the systems companies who are doing their own SoC [system-on-chip] designs."