The five facets of SoC design complexity
By Pete Hardee, Director of Product Marketing, CoWare, Inc., San Jose, Calif., firstname.lastname@example.org, EE Times
December 19, 2002 (10:54 a.m. EST)
"Digital convergence" is creating demand for functionally complex ICs delivered in six- to nine-month design cycles at mass-market costs. On the one hand, increased capital investment in the late 1990s saw worldwide IC manufacturing capability grow by over 50 percent per year. On the other hand, as the electronics industry shifts from ASICs to SoCs, design productivity has struggled to grow at 20 percent per year, and the gap is widening. The industry slowdown, one of the worst recessions in the history of the semiconductor industry, is only accelerating the inflection point as ASICs for struggling PC, server and industrial broadband markets lose out to SoCs, or "systems-on-a-few-chips." These chips power next-generation consumer-oriented handheld computing, multimedia and Internet-enabled communications products based on digital convergence.
As a result, it has never been more important to manufacture high-volume products efficiently, meet tight time-to-market deadlines and retain the flexibility to morph products quickly to meet the changing needs of increasingly fickle consumers. Further compounding the situation: most existing design methodologies are struggling to cope with today's challenges and need an overhaul.
Today's SoC-based designs present a design team with five facets of complexity, ranging from functional complexity and architectural and verification challenges to design-team communication and deep-submicron (DSM) implementation.
Facet one is functional content complexity: the sheer number of functional blocks contained in a typical system. This breadth of content has led to wide acceptance that designing these systems from scratch is far beyond the design productivity or capabilities of even the largest design teams. Some form of intellectual-property (IP) reuse has become an inevitable part of SoC design.
Much of the functionality in a SoC-based system is implemented as software, and the dual needs for increased productivity and reuse exist for software as well as for hardware. Along with increased IP reuse, this leads to a need to design at higher levels of abstraction and to move from written specifications to "executable" specifications, often in the C or C++ languages.
The second facet is the architectural challenge. When functional complexity of this scale is implemented in a short timeframe, there must be a detailed architecture set from the start of the project and rigorously adhered to throughout implementation.
Gone are the days when blocks of functionality could be distributed among design team members who had relatively complete autonomy over the architecture of each block. Of course, there have always been some unifying architectural guidelines: processor speed, bus width for the necessary throughput and the likely amount of memory, for example. This information is typically based on the guru knowledge and experience of a small number of senior system architects.
These methods are under pressure because of increased design complexity and from the necessity to get the architecture right the first time. It is increasingly unacceptable, in terms of both time and money, to change the architecture mid-way, and equally unacceptable to "over-engineer" from a unit cost perspective.
These first two facets together make up the system-level design (SLD) portion of the SoC design process.
The verification challenge is the third facet and stems from the first: while it is now possible to assemble broad functional content, it is increasingly difficult to integrate and verify it all. Verification effort is believed to scale with somewhere between the square and the cube of the complexity of the block being verified, so doubling a block's complexity can mean a four- to eightfold increase in verification work.
This problem is exacerbated by two factors. IP reuse does not solve it, since IP must still be verified within the system context. And hardware-software integration is left unacceptably late in the design cycle and becomes prohibitively difficult, since there is often no good prototype hardware available until the SoC is fabricated. This is leading design teams to seek out many different kinds of verification improvements.
Where integration begins
One is the trend to perform system integration on a "virtual" prototype, which uses a simulation model to bring hardware and software together at the earliest stage possible. The challenge is to define just where software development stops and system integration begins. In practice, this is a large gray area that encompasses much of the firmware development, particularly the development of the Hardware Abstraction Layer (HAL), the hardware-dependent software layer that "abstracts" the application from its hardware "target."
This leads to the fourth facet, and perhaps the biggest obstacle: the complexity of the design-team interactions and communications needed to successfully undertake a SoC-based design. To achieve improvement in the first three facets, there must be interaction between system architects, algorithm designers, software developers, hardware designers, system integrators and verification specialists. In reality, such cooperation is rare: these organizational functions are seldom integrated, and there is often significant organizational, and sometimes physical, separation between them.
The fifth facet is the set of issues that come with implementing a complex chip design in a DSM process technology. These include problems of timing closure, placement and routing, including avoidance of increasingly problematic physical effects such as crosstalk. While important, these issues are not the focus of the proposed design methodology. Instead, they are covered by electronic design automation (EDA) companies whose products are focused on the RTL to GDSII flow, which is fed by the SLD section of the SoC Design Process.
Engineers at CoWare are advocating a design methodology that is flexible and expressive, and represents the diverse nature of system-level design for electronic products. It would be used by system architects, algorithm designers, software developers, hardware designers, system integrators and verification specialists alike and should be viewed as a single design methodology with multiple entry and exit points. It would offer them the ability to include IP or other design elements from various sources. The main flow variants should be top-down design or platform-based design, which involves a large degree of bottom-up design, at least in the platform creation stage.
In practice, real design flows are invariably a combination -- neither purely top-down nor bottom-up. The mix depends largely on the amount of legacy design available.
The design methodology is most easily understood by considering a top-down approach. Steps in the methodology involve specification, architecture and implementation, with different levels of verification proceeding in parallel.
The functional system is captured as a series of blocks that, once connected, can be simulated, allowing the functional behavior of the system to be verified, debugged and analyzed. Once the specification is complete, partitioning can take place by designating which blocks in the executable specification will run as software on the target processor, which will become hardware, and which form the external system model, or testbench. Such a design flow will automatically generate all the necessary glue logic and software drivers to implement the architecture implied by the chosen hardware-software partition.
Both the software and hardware blocks can be further refined for implementation, and can be co-simulated at every stage of refinement. A full range of debug and analysis tools appropriate for system architects, hardware designers and software developers should be available throughout. Consideration should be made for analysis tools that can be provided for exploring architectural issues like the hardware-software partition, processor choice, bus architecture and so forth.
Following refinement, software can be exported as a complete program, compiled directly onto the target processor and verified in the system. Hardware can be exported as synthesizable RTL in a hardware description language such as VHDL or Verilog.
A design methodology that considers both hardware and software should be flexible, with many different paths and entry/exit points possible, allowing a unified and consistent design approach between systems, hardware and software developers. It should include two main flow variants, either top-down design or platform-based design.
The move toward implementing a flexible and expressive design methodology is a direction electronics companies should consider to beat today's SoC design challenges.
Copyright © 2003 CMP Media, LLC