Architectural synthesis provides flexibility in optical network design
By Marco Rubinstein, Director of Applications, Get2Chip, San Jose, Calif., EE Times
February 14, 2002 (3:52 p.m. EST)
URL: http://www.eetimes.com/story/OEG20020214S0043
Circuit design for high-speed networks is becoming more complex as companies compete to deliver hardware that can handle the growing volumes of data generated by rising Internet usage. Many rely increasingly on parallelization, the technique of overlapping operations by moving data or instructions into a conceptual pipeline in which all stages process simultaneously. Executing one instruction while the next is being decoded is a must for applications that address the volume and speed demanded by high-bandwidth Internet connectivity, typified by optical networking schemes such as dense wavelength-division multiplexing, which allow each fiber to carry multiple data streams. The proliferation of optical fiber has given Internet pipes such tremendous capacity that the bottlenecks will remain at the (electrically based) routing nodes for quite some time.
To build circuits that satisfy the need for more powerful processing nodes, a new design methodology based on architectural synthesis is being used. This methodology offsets the difficulties that designers employing register-transfer-level (RTL) synthesis methodologies encounter in these designs. Architectural synthesis generates timing-accurate, gate-level netlists from a higher abstraction level than RTL. These tools read in a functional design description in which the microarchitecture need not be defined. It is a description of functionality and interface behavior only, not of the detailed design implementation, and it contains no microarchitecture details such as finite state machines, multiplexers or even registers.
At this higher level of abstraction, the amount of code required to describe a given design can be an order of magnitude smaller than that needed to describe the same design in RTL. Hence, writing architectural code is easier and faster than describing the same functionality in RTL, and architectural code simulates faster and is simpler to debug.
An architectural-synthesis tool implements the microarchitecture of the design based on top-level area and clock constraints and on the target technology process, and continues the implementation toward the generation of a timing-accurate, gate-level netlist. During the synthesis process, the tool takes into account the timing specifications of all the design elements, including the interconnect delays. In addition, the tool performs multiple iterations between the generation of the RTL representation and that of the gate-level netlist, adjusting the microarchitecture to achieve the timing goals with minimum area and power. By changing the design constraints or by selecting a different technology process, an architectural-synthesis tool generates a different architecture.
Architectural design techniques offer multiple advantages in the fiber-optic hardware space, in which high-capacity, multistandard networks carry time-division multiplexed traffic, ATM cells, Internet Protocol and Ethernet packets, frame relay and some proprietary traffic types. Most of these protocols are well-defined, predictable sequences of data and architectural synthesis excels when such predictability exists.
The main difference between RTL and architectural design is that RTL is lower-level, so the designer cannot take advantage of these sequences in a natural way. Describing them in architectural code is much easier and takes far less time and effort than creating an RTL description.
In architectural design, the handling of a 53-byte ATM cell could be written almost like a C program. It would read:
wait for a packet to come
when a packet comes, read the first byte
depending on what the header says, read the second byte, and that's the rest of the header
then read the third
read the fourth
read the fifth.
The designer simply has to write this procedure, line after line after line, following its natural sequential order. There is no need to create the state machine, because the architectural-synthesis tool automatically infers and implements it during the synthesis process.
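To make the idea concrete, here is a minimal sketch in C of that same sequential, architectural-style description. The byte-stream interface (wait_for_cell, read_byte) and the cell layout are hypothetical placeholders rather than any particular tool's API; the point is only that the code follows the cell's natural byte order and leaves the state machine to the synthesis tool.

/* Minimal sketch of an architectural-style description of receiving a
 * 53-byte ATM cell. wait_for_cell() and read_byte() are hypothetical
 * blocking primitives standing in for the input interface; a synthesis
 * tool would be expected to infer the state machine from this
 * sequential code. */
#include <stdint.h>
#include <stddef.h>

#define ATM_HEADER_BYTES  5
#define ATM_PAYLOAD_BYTES 48

typedef struct {
    uint8_t header[ATM_HEADER_BYTES];
    uint8_t payload[ATM_PAYLOAD_BYTES];
} atm_cell_t;

extern void    wait_for_cell(void);   /* block until a cell arrives     */
extern uint8_t read_byte(void);       /* read the next byte of the cell */

void receive_cell(atm_cell_t *cell)
{
    wait_for_cell();                      /* wait for a packet to come */

    cell->header[0] = read_byte();        /* read the first byte       */

    /* Depending on what the first byte says, the remaining header
     * bytes follow; here they are simply read in their natural order. */
    for (size_t i = 1; i < ATM_HEADER_BYTES; i++)
        cell->header[i] = read_byte();

    /* Then read the 48-byte payload the same way. */
    for (size_t i = 0; i < ATM_PAYLOAD_BYTES; i++)
        cell->payload[i] = read_byte();
}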
Architectural designs are not only easier to implement; they are also simpler to debug. Architectural descriptions are easier to understand and usually simulate much faster. Just as important in this context, since many networking "standards" are still in flux, designing architecturally offers flexibility. The state machines, for example, are generated automatically by the synthesis tool, eliminating the custom crafting of intricate state machines by hand.
And if a standards body suddenly requires that a header grow from 2 to 3 bytes, another line of code can easily be inserted that says, "now, after that second byte, I'm not going to read the body; I'm going to read another byte." The architectural-synthesis tool automatically generates the new state machine in which that change is implemented. Architectural intellectual property thus becomes much more flexible.
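As a hypothetical before-and-after sketch of such a change, the two versions of the sequential description might differ by a single added read; read_byte() and the read_body*() helpers are placeholders, not a real API.

/* Hypothetical sketch of a header growing from 2 to 3 bytes: only one
 * line of sequential code is added, and the synthesis tool regenerates
 * the underlying state machine. read_byte() and read_body*() are
 * placeholder primitives, as in the previous sketch. */
#include <stdint.h>

extern uint8_t read_byte(void);
extern void    read_body(uint8_t h0, uint8_t h1);
extern void    read_body_v2(uint8_t h0, uint8_t h1, uint8_t h2);

void receive_frame_old(void)            /* original 2-byte header */
{
    uint8_t h0 = read_byte();
    uint8_t h1 = read_byte();
    read_body(h0, h1);
}

void receive_frame_new(void)            /* revised 3-byte header  */
{
    uint8_t h0 = read_byte();
    uint8_t h1 = read_byte();
    uint8_t h2 = read_byte();           /* the added line: read the new third header byte */
    read_body_v2(h0, h1, h2);
}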
In an effort to address the data volumes, many networking companies are designing extremely large ICs, often containing multiple instances of the same subdesign: perhaps 24 Ethernet ports, five OC-192 ports or similar redundancies.
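One hypothetical way to picture such replication at the architectural level is to describe the per-port behavior once and instantiate it for each port. In the sketch below, NUM_PORTS and the per-port handler are placeholders; an architectural-synthesis tool could map each call of the shared subdesign to its own hardware instance.

/* Hypothetical sketch of design reuse: the same per-port subdesign,
 * described once, instantiated for every port. NUM_PORTS and
 * ethernet_port() are placeholders; an architectural-synthesis tool
 * could unroll this loop into parallel hardware instances. */
#define NUM_PORTS 24

extern void ethernet_port(int port_id);   /* per-port behavior, written once */

void top(void)
{
    for (int port = 0; port < NUM_PORTS; port++)
        ethernet_port(port);              /* one instance per port */
}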
Since these chips are massive, what is required is a synthesis tool with high capacity and fast run-times, and one capable of producing the best possible timing: all characteristics of architectural synthesis. The methodology guarantees greater capacity than RTL tools, faster run-times and higher clock frequencies.
Today's networking-hardware designers face intense competitive pressures. They need to build larger designs that perform faster than previous generations, in much shorter time frames and at lower cost. The need to reduce system cost and increase product performance can only be met by adopting a new design methodology that raises the level of design abstraction without compromising the quality of results.