High-level languages simplify design
By Don Davis, Manager, High-Level Language Synthesis, Ian Miller, Senior Hardware Development Engineer, Xilinx Inc., San Jose, Calif., EE Times
November 20, 2000 (12:41 p.m. EST)
EDA vendors are exploring the use of high-level languages (HLLs) to provide greater levels of design abstraction to handle the growing complexity of today's multimillion-gate systems. And silicon vendors are exploring the concept of programmable platform system-on-chip (SoC) devices that simplify design through reuse while providing enough flexibility and performance to meet customers' needs.
However, effective tools are the key component for either of those approaches; high-level languages must be compiled into efficient software and hardware implementations, while a programmable SoC requires partitioning and implementation tools that understand the constraints imposed by the platform architecture.
What follows is an outline of a prototype design methodology for a new class of programmable SoC platforms based on a PowerPC embedded processor set in a programmable Virtex FPGA fabric. The methodology is based upon the Xilinx Forge compilation technology that can analyze an HLL specification and produce highly optimized FPGA implementations. It includes the following steps: application specification in an HLL; verification, simulation, profiling and characterization; partitioning; and mapping and compilation.
While the overall flow of the methodology is from specification to compilation, there are places in the flow where iteration is necessary or desirable. One feature of this methodology is a focus on short iteration paths to converge on an optimal design solution.
For example, there's typically a significant number of iterations between application specification and verification while the designer focuses on getting the functionality correct. However, since the application specification is captured in an HLL, hardware description language (HDL) simulators are not necessary: one simply runs the program. Iterations can be accomplished in minutes as opposed to the hours necessary for HDL simulations.
The entry point for this methodology is application specification in an HLL. The developer uses an object-oriented language such as C++ or Java to capture the functionality required for the application. At this stage in the process, implementation issues need not be considered. If available, blocks of intellectual property can be leveraged by the designer. If these are also specified in an HLL, functional verification is simplified. If not, the designer must decide whether to use a mixed verification environment.
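To make the entry point concrete, here is a minimal sketch of what such an application specification might look like in Java. The class and method names are hypothetical illustrations, not taken from any Xilinx example; the point is that the code captures pure functionality, with no hardware or software implementation detail.

```java
// Hypothetical specification of a 4-tap FIR filter, written as plain
// Java with no implementation concerns. A compiler such as Forge could
// later map this to either the processor or the FPGA fabric.
public class FirFilter {
    private final int[] taps;

    public FirFilter(int[] taps) {
        this.taps = taps.clone();
    }

    // Compute one output sample from the most recent input samples.
    public int filter(int[] window) {
        int acc = 0;
        for (int i = 0; i < taps.length; i++) {
            acc += taps[i] * window[i];
        }
        return acc;
    }

    public static void main(String[] args) {
        FirFilter f = new FirFilter(new int[] {1, 2, 2, 1});
        // 1*3 + 2*0 + 2*1 + 1*4 = 9
        System.out.println(f.filter(new int[] {3, 0, 1, 4}));
    }
}
```

Functional verification here is just running `main` — no HDL simulator is involved at this stage.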
Because verification environments are beyond the scope of this article, we assume that the designer has an acceptable one for his application. In addition to functional verification, the designer also needs tools to help with partitioning and mapping. These involve profiling and characterization of the application to determine the performance and resource use of various functional blocks in the specification. These metrics must be generated with both software and hardware implementations in mind. For hardware in particular, there are many factors to consider during partitioning and mapping. Among them are resource utilization (area, number of gates or number of slices), performance (maximum clock rate, throughput), power consumption and implementation architecture (pipelining, resource sharing).
Those metrics may also interact with the partitioning step. Current system design methodologies generally rely on a designer making a priori decisions about which elements of the design are best-suited to each resource. This is done at a very coarse-grained block-diagram level. An integrated application specification for the entire system chip is not used; if it is, each portion must be recoded in a capture language appropriate for its target resource once that resource is identified.
If the resource is logic gates, as in an ASIC or FPGA, the capture language is Verilog or VHDL. If the resource is a processor, then the capture language would be C, C++ or Java. Because of the time and effort necessary to "translate" the specification from one language to another, repartitioning of the design is generally avoided, and an iterative search of the solution space is virtually impossible. When capturing a design in an HLL, which is capable of targeting any of the resources in the SoC architecture, the compiler can assist in the partitioning decisions. But an HLL compiler must be able to quantify the performance characteristics of each portion of the specification as applied to each available resource.
The first step in quantifying an HLL specification is to divide the specification into manageable blocks. Each block should contain a sequence of instructions that operate on a given set of data to produce a result or set of results. In many HLLs this block may be represented by objects, functions or methods, or may be as localized as the contents of a looping structure. Once the specification has been divided into blocks, the compiler can analyze each block in terms of data dependencies and control flow through the block.
The goal here is to identify those blocks that will most significantly benefit from the increased parallelism available in an FPGA or ASIC. With knowledge of the targeted architecture, a metric can easily be derived for each block indicating the expected throughput of the circuit in each of the resources. In our case, the throughput of the block on the PowerPC core is based on the number of MIPS, and the throughput of the FPGA fabric is based on expected cycle speed and latency of the circuit.
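The per-block throughput comparison can be sketched with simple arithmetic. The formulas below are an illustrative assumption about how such a metric might be derived, not the actual Forge cost model; the example numbers (a 300-MIPS core, a 100-MHz pipelined circuit) are likewise hypothetical.

```java
// Illustrative per-block throughput estimates for a processor core
// versus a pipelined FPGA circuit. All names and formulas here are
// assumptions for illustration, not the real Forge cost model.
public class ThroughputEstimate {
    // Processor: results per second = (MIPS * 1e6) / instructions per result.
    static double cpuThroughput(double mips, long instrsPerResult) {
        return mips * 1e6 / instrsPerResult;
    }

    // FPGA: a pipelined circuit accepts a new input every few cycles,
    // so sustained throughput is clock rate over cycles per result.
    static double fpgaThroughput(double clockHz, int cyclesPerResult) {
        return clockHz / cyclesPerResult;
    }

    public static void main(String[] args) {
        // A block costing 120 instructions on a 300-MIPS PowerPC core:
        System.out.println(cpuThroughput(300, 120));   // 2.5e6 results/s
        // The same block as a fully pipelined 100-MHz circuit:
        System.out.println(fpgaThroughput(100e6, 1));  // 1.0e8 results/s
    }
}
```

Even with latency folded in, a fully pipelined circuit can sustain far higher throughput than the processor for such a block, which is what the metric is meant to surface.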
The HLL compiler can now begin allocating blocks among the available resources until one or more of the resources has been filled to capacity. The area of each block in terms of gates and routing resources measures the FPGA fabric's capacity. The available memory and real-time performance constraints limit the PowerPC's capacity. At this stage, the tools can assist the designer in making intelligent choices about the disposition of each block and discover the impact on the resulting implementation. The trade-off process can continue until the optimal partitioning solution is found. The key to performing this real-time search of the solution space is the ability of the HLL compiler to quantify each block of the system specification, and to do so rapidly for a variety of metrics that may be important to the designer. The metrics may include throughput, latency, resource utilization and power consumption.
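The allocation step described above can be sketched as a simple greedy heuristic: place the blocks with the largest hardware speedup into the fabric until its area capacity is exhausted, and let the rest fall back to the processor. This is a minimal sketch under assumed names and metrics, not the actual Forge partitioning algorithm.

```java
import java.util.*;

// Hypothetical greedy partitioner: blocks with the biggest hardware
// speedup go to the FPGA fabric while area capacity remains; the rest
// are assigned to the embedded PowerPC. Names are illustrative only.
public class Partitioner {
    record Block(String name, int area, double speedup) {}

    static Map<String, String> partition(List<Block> blocks, int fabricCapacity) {
        Map<String, String> assignment = new LinkedHashMap<>();
        // Consider blocks in decreasing order of hardware speedup.
        List<Block> sorted = new ArrayList<>(blocks);
        sorted.sort(Comparator.comparingDouble(Block::speedup).reversed());
        int used = 0;
        for (Block b : sorted) {
            if (used + b.area() <= fabricCapacity) {
                assignment.put(b.name(), "FPGA");
                used += b.area();
            } else {
                assignment.put(b.name(), "PowerPC");
            }
        }
        return assignment;
    }

    public static void main(String[] args) {
        List<Block> blocks = List.of(
            new Block("fir", 400, 40.0),
            new Block("fft", 700, 25.0),
            new Block("ctrl", 100, 1.2));
        System.out.println(partition(blocks, 1000));
    }
}
```

A real tool would iterate this search under several metrics at once (throughput, latency, power) rather than a single speedup figure, but the fast inner loop is the same: re-quantify, re-assign, compare.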
Given an optimized partitioning of the specification blocks to the target SoC architecture, the HLL compiler can begin the process of mapping those blocks to the designated resources. Mapping targets a specific portion of the specification to a specific architectural component. The mapping process is made up of two steps: collapsing multiple blocks into a single block; and defining communication mechanisms between blocks. Once the partitioning process has been completed, the HLL compiler can look at all blocks allocated to each resource, and where data dependencies are met, collapse them into a single block. This can greatly reduce the communications channels necessary to manage data flow between blocks, and will allow the target-specific compilation step to perform additional optimizations. For the compilation of the HLL, Xilinx Forge is used, which takes Java descriptions and generates synthesizable register-transfer level Verilog.
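The block-collapsing step can be illustrated at source level. In this hypothetical sketch, two blocks assigned to the same resource, where the second consumes the first's result, are fused into one; the intermediate value becomes a local, so no inter-block communication channel is needed.

```java
// Hypothetical illustration of collapsing two data-dependent blocks
// allocated to the same resource into a single fused block.
public class Collapse {
    // Block A: scale the input.
    static int scale(int x) { return x * 3; }

    // Block B: clamp the scaled value.
    static int clamp(int x) { return Math.min(x, 100); }

    // Collapsed block: A and B fused; the intermediate value stays
    // local, eliminating the channel between the two blocks.
    static int scaleThenClamp(int x) { return Math.min(x * 3, 100); }

    public static void main(String[] args) {
        System.out.println(clamp(scale(50)));    // 100
        System.out.println(scaleThenClamp(10));  // 30
    }
}
```

After fusion, the target-specific compiler sees one larger block and can optimize across the former boundary, which is exactly the opportunity the article describes.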
Platform-based system-chip architectures and the design methodologies to target them are still being defined and refined. However, it is clear that certain characteristics are critical for the viability of this paradigm. First, the system-chip architectures must include significant hardware programmability. Embedded processors are by their nature programmable. The hardware resources sitting next to them must share a degree of flexibility to enable optimal system implementations over a range of applications. Second, HLL compilation is key to design productivity and leveraging these platforms' potential.
Copyright © 2003 CMP Media, LLC