Pressure is on Third-Party Memory IP
by Krishna Balachandran, Senior Director, Product Marketing, Virage Logic Corp.
As chip content grows in complexity, a corresponding dramatic change is occurring in the semiconductor industry. Vertically focused IDMs, faced with enormous costs to build fabs in newer process technologies such as 90nm, are increasingly outsourcing design tools, semiconductor IP, and manufacturing and test services in a push to maintain or achieve profitability.
This has led to a surge of fabless companies and the popularity of contract manufacturers. On the technology front, today's deep-submicron semiconductor technologies of 130nm and 90nm are susceptible to defects at levels that put reliability and yield at risk. Moreover, increasing time-to-market pressure has forced IDMs and fabless companies alike to start volume production on a given semiconductor technology before reaching the defect densities and yield levels traditionally targeted.
Silicon-proven third-party IP has emerged as a way for these companies to effectively meet market pressures by enabling them to focus on the portions of the SoC that constitute their core competencies. Companies are shedding what is not essential to differentiation. As long as there are reputable third-party IP solutions available that are qualified in silicon and meet performance requirements, companies can confidently outsource.
The increasingly complex nature of SoC devices demands a wide portfolio of readily available, silicon-proven, high-performance embedded-memory cores that internal memory IP teams have a difficult time building and maintaining. In a climate that rewards cost management, quick time-to-market and fast time-to-volume, companies are moving rapidly to an outsourcing model for IP.
According to the SIA, over half of a chip's surface today is embedded memory. Since this memory is a critical component of SoC designs, IDMs and fabless companies are taking advantage of embedded-memory IP. It enables them to aggressively decrease die size and enhance chip performance. In fact, more companies are using large amounts of embedded memory with high-performance, low-power architectures for applications such as:
Consumer appliances: Internet appliances and other consumer products increasingly require greater functionality, Internet connectivity and low-power consumption.
Computers: PCs, workstations, servers and other computation equipment require more complex chipsets and high-performance embedded memory to achieve new features like advanced 3D graphics.
Communications and Internet infrastructure: Communications SoCs are used throughout the Internet and are found in routers, switches, DSL modems and home networking.
For these applications, the insatiable demand for bandwidth is driving the need for increased memory capacity and faster memories. Wireless computing and communications also require storage of data prior to transmission or after receipt, placing increased demands on embedded memory.
When custom design techniques are used, embedded memories are more area-efficient than standalone memories, and more highly optimized for performance. Designing the application-specific memories used in SoCs requires special knowledge and expertise. Furthermore, the architecture of a specific memory must be redesigned and recharacterized for each new generation of process technology, as well as for each foundry - a time-consuming process.
As products shrink and become more lightweight, area is at a premium. Reducing silicon size not only makes products more portable, it also decreases cost. Embedding multiple memories on a single SoC reduces the amount of silicon used. At the same time, packaging, which accounts for 40 percent to 50 percent of the cost of an IC, is reduced.
Smaller dice also produce higher yields, which translate into added savings. Since area is a crucial design factor, and because a significant portion of the SoC die is populated by memory, it is important to ensure that embedded memories are optimized for area.
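The link between die area and yield can be sketched with the classical Poisson yield model, Y = exp(-D*A), where D is defect density and A is die area. The defect-density and area figures below are illustrative assumptions, not numbers from the article or any specific foundry:

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Classical Poisson yield model: Y = exp(-D * A)."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Assumed process defect density of 0.5 defects/cm^2.
large = poisson_yield(0.5, 1.0)  # 1.0 cm^2 die
small = poisson_yield(0.5, 0.5)  # same chip after halving area
print(f"1.0 cm^2 die yield: {large:.1%}")  # ~60.7%
print(f"0.5 cm^2 die yield: {small:.1%}")  # ~77.9%
```

Halving the die area under these assumptions lifts yield from roughly 61 percent to 78 percent, which is why area-optimized embedded memory pays off twice: smaller dice per wafer and more good dice per wafer.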
Impact on performance
Embedding faster, wider memories and moving them closer to the processor can increase system performance substantially. Without embedded memories, limiting factors include the package I/Os, the PCB and the available configurations (depth, width and number of ports) of standalone memories. Propagation delay through the I/Os can add several nanoseconds.
Driving PCB buses reliably beyond 100MHz is also a challenge. High-speed standalone memories are not always available in the required configurations, and they are expensive. By using application-specific embedded memory, a designer can optimize the memory configurations and the interface to the system, and achieve up to two to three times the performance of a standalone memory.
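A back-of-the-envelope calculation shows how I/O and board delays cap the cycle rate of a standalone memory. The delay figures here are illustrative assumptions in the spirit of the article's "several nanoseconds" of I/O propagation delay:

```python
def max_freq_mhz(access_ns: float, io_ns: float = 0.0, board_ns: float = 0.0) -> float:
    """Achievable clock rate (MHz) when the memory access path limits the cycle."""
    return 1e3 / (access_ns + io_ns + board_ns)

# Assumed delays: 3 ns array access, 2.5 ns pad/package I/O, 1.5 ns board flight.
standalone = max_freq_mhz(3.0, io_ns=2.5, board_ns=1.5)  # off-chip path
embedded   = max_freq_mhz(3.0)                           # on-chip array only
print(f"standalone: {standalone:.0f} MHz, embedded: {embedded:.0f} MHz")
```

With these assumed delays the embedded path runs at roughly 333 MHz against about 143 MHz for the off-chip path, consistent with the two-to-three-times figure above.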
In wireless or battery-powered applications, reducing power consumption to extend battery life is essential. Embedded memories eliminate the need to drive off-chip capacitance between the standalone memories and other system chips, thereby reducing the power consumption of the system.
Lower power consumption also means there is less heat to dissipate, and packaging requirements are reduced. With detailed and accurate power models (provided by the memory IP supplier) as well as memories designed for low-power consumption, designers can optimize memory configuration and system design for the lowest power possible.
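The off-chip-capacitance point can be quantified with the standard dynamic switching-power formula P = a*C*V^2*f. The capacitance, voltage, frequency and activity numbers below are illustrative assumptions, not values from the article:

```python
def switching_power_mw(activity: float, cap_pf: float, vdd: float, freq_mhz: float) -> float:
    """Dynamic switching power P = alpha * C * V^2 * f, returned in milliwatts."""
    return activity * (cap_pf * 1e-12) * vdd**2 * (freq_mhz * 1e6) * 1e3

# Assumed loads: an off-chip bus line sees ~20 pF (pads, package, PCB trace);
# an on-chip wire sees ~0.5 pF. 32-bit bus, 1.2 V supply, 100 MHz, 25% activity.
off_chip = switching_power_mw(0.25, 20.0, 1.2, 100) * 32
on_chip  = switching_power_mw(0.25, 0.5, 1.2, 100) * 32
print(f"off-chip bus: {off_chip:.2f} mW, on-chip bus: {on_chip:.2f} mW")
```

Under these assumptions the on-chip bus burns roughly one-fortieth the power of the off-chip bus, which is the saving the paragraph above attributes to eliminating the drive of off-chip capacitance.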
Designers must have absolute confidence in the manufacturability of the embedded memory used in the design. This requires a design approach that considers testability and manufacturability from the ground up, together with a comprehensive silicon-validation program to ensure that memories meet specifications and produce high yield. Memories must be manufactured, tested and characterized across a wide range of temperatures and voltages before being integrated into products.
Furthermore, each memory needs to be optimized across multiple foundries to allow for an uninterrupted source of supply and to secure low prices. By purchasing memory IP that has already been extensively tested and characterized, IDMs and fabless vendors can shave months off product cycle times with the assurance that the memory is silicon-proven and works at first tapeout.
While embedded MPU and DSP cores are essential in defining the system architecture, embedded memory is key to ensuring design manufacturability at cost-effective levels. With larger memories being integrated onto the SoC, the embedded memory--not the embedded MPU--determines the manufacturability and cost of the SoC, as well as how quickly time-to-volume is achieved.
Memory is designed with aggressive design rules and typically has twice the wafer defect density of logic. In advanced process geometries such as 130nm and 90nm, memory defect densities can be even higher. As memories become larger and occupy more of the surface area of the chip, they become the dominant factor in determining overall die yields and directly impact silicon die cost. Built-in self-test, built-in self-diagnostics and built-in self-repair substantially improve quality and yield.
Redundancy is a proven repair technique used to combat memory defects and improve overall yields. Redundancy provides for additional bits to be designed into the memory to replace defective bits, which helps increase yield and reduces cost.
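The yield benefit of redundancy can be sketched with a simple repair model: a memory instance is good if its count of defective rows does not exceed the spares available, with defects modeled as Poisson-distributed. This is an illustrative model with an assumed defect rate, not a description of any vendor's actual repair scheme:

```python
import math

def memory_yield_with_repair(expected_defects: float, spare_rows: int) -> float:
    """Yield when up to `spare_rows` defective rows can be repaired.
    The defect count is Poisson with mean `expected_defects`; the memory
    is good if defects <= spares (illustrative model)."""
    return sum(
        math.exp(-expected_defects) * expected_defects**k / math.factorial(k)
        for k in range(spare_rows + 1)
    )

mean_defects = 1.0  # assumed average defective rows per memory instance
for spares in range(4):
    y = memory_yield_with_repair(mean_defects, spares)
    print(f"{spares} spare rows -> yield {y:.1%}")
```

At an assumed mean of one defective row per instance, yield climbs from about 37 percent with no spares to over 98 percent with three, showing why a few redundant rows can dominate the economics of a large embedded memory.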
Commercial embedded-memory suppliers now have the breadth of product line and expertise to assure both IDMs and fabless companies that outsourcing memory IP not only saves time and money, but also results in a higher-quality product. For more companies to consider outsourced memory IP, embedded memory must provide added value over standalone memory solutions. Embedded memory does so without requiring the designer to be a memory expert. With a complete set of validated EDA views, integration into the design-reuse methodology is assured.
Together, these factors have created a market for third-party providers of highly reliable, high-performance embedded-memory IP for SoC designs.