There is a bewildering array of system-on-a-chip devices for the systems designer to choose from. And given current market conditions, representatives from every single one of them are prepared to visit you and spend several hours sharing excruciating detail about their particular product. The aim of this article is to highlight three key constraints that a designer can use to efficiently narrow the list of appropriate device candidates. It assumes that designing an ASIC is not an option, either because the anticipated volume does not justify the NRE or because the time to develop such a solution is too long.
Performance/Power: These criteria can be viewed as the initial bar that competing solutions must clear. If a device can't deliver the desired performance or meet a specific power requirement, it couldn't be used in the system even if the part were given away. The tougher of these two to specify is performance, since it is challenging to correlate generic performance attributes (CPU frequency, theoretical memory bandwidth, etc.) with the device's capability in the specific end application. As this is an initial bar, the strategy here should be a “back of the napkin” type of analysis to develop the criteria. Once the field of competitors has been narrowed, a more time-consuming analysis can be undertaken, such as running critical application code on suppliers' evaluation boards. Power dissipation is simpler: most suppliers document this attribute very well. Be careful to check for worst-case power numbers; designs cannot rely on typical values.
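The "initial bar" idea can be expressed as a trivial screen. The sketch below is illustrative only: the device names, MIPS estimates and power figures are hypothetical placeholders, not real device data, and any real screen would use your own application-derived numbers.

```python
# Back-of-the-napkin screen: reject any candidate that misses the
# performance floor or exceeds the worst-case power ceiling.
# All figures below are hypothetical, for illustration only.

PERF_FLOOR_MIPS = 400      # minimum acceptable estimated performance
POWER_CEILING_W = 2.5      # power budget -- worst-case, never typical

candidates = [
    # (name, estimated MIPS, worst-case watts)
    ("device_a", 550, 2.1),
    ("device_b", 620, 3.0),   # fast, but busts the power budget
    ("device_c", 380, 1.4),   # frugal, but below the performance floor
]

def clears_bar(mips, watts_worst_case):
    """Initial bar: performance floor AND worst-case power ceiling."""
    return mips >= PERF_FLOOR_MIPS and watts_worst_case <= POWER_CEILING_W

shortlist = [name for name, mips, w in candidates if clears_bar(mips, w)]
print(shortlist)
```

Only the survivors of this coarse filter justify the more expensive step of benchmarking real application code on evaluation boards.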
Software Development Requirements: The impact of device selection on software development is becoming the next important driver. No matter how compelling a specific device may look on paper, factors such as the need to switch development tools, operating systems, etc. can make a project based on that device too risky or expensive. The proliferation of operating systems such as Linux, and the delivery of increasingly hardware-specific elements of the system software by the semiconductor supplier, will probably reduce this barrier over time. In the short term, though, the likely impact is that hardware designers may be forced to select from a smaller set of devices that share a preferred CPU architecture.
System Flexibility: Is the system required to be a platform that will spawn additional variants (either because it must efficiently support various combinations of I/O connectivity and system functionality, or because the specific requirements of the end market are unclear)? Or is it a somewhat evolutionary point-product in a relatively stable segment with a known, fixed set of performance and I/O connectivity requirements? The broader the span of functional requirements for the platform, the more effective a multi-chip strategy is. For example, a discrete CPU (x86, PowerPC or MIPS-based) coupled with an I/O subsystem (built on an FPGA or system controllers) can allow a platform to deliver dramatically higher performance with only minor modifications.
From a technology point of view, using software to scale the functionality of a system has some limitations. This approach tends to leverage devices that dissipate more power than more integrated alternatives, which can affect the thermal design of the system. Also, because CPUs tend to use advances in process geometry to drive increases in CPU frequency, higher-end CPU cores tend to run from lower and somewhat non-standard voltages; the need to support multiple performance points may add to the complexity of the power circuitry. Simply scaling the CPU frequency does not necessarily guarantee success, since the bottleneck for true embedded processing tends to be usable memory bandwidth. Therefore, if this approach is taken, care should be taken to ensure that the performance of the CPU core AND the main memory subsystem can be scaled together in a balanced fashion. As an example, this could involve leveraging a memory controller design that can operate at a range of frequencies and that can be widened to support higher theoretical memory bandwidth.
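The balanced-scaling point can be made with simple arithmetic: theoretical peak bandwidth is bus width (in bytes) times data rate, so doubling CPU frequency while leaving the memory bus untouched just moves the bottleneck. The bus widths and clock rates below are illustrative examples, not figures for any particular device.

```python
# Sketch of the "balanced scaling" check. Numbers are illustrative only.

def mem_bandwidth_mb_s(bus_width_bits, data_rate_mhz):
    """Theoretical peak bandwidth in MB/s for a simple single-data-rate bus."""
    return (bus_width_bits // 8) * data_rate_mhz

# A 32-bit bus at 133 MHz gives 532 MB/s of theoretical peak bandwidth.
base = mem_bandwidth_mb_s(32, 133)

# Doubling CPU frequency alone does nothing for this number; widening the
# memory bus to 64 bits doubles it and keeps the system in balance.
wider = mem_bandwidth_mb_s(64, 133)

print(base, wider)   # 532 MB/s vs. 1064 MB/s
```

In practice real memory subsystems (DDR bursts, refresh, turnaround) deliver well below the theoretical peak, which is exactly why the article stresses *usable* bandwidth and on-board benchmarking.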
As mentioned above, there are two primary approaches to the I/O subsystem. An off-the-shelf system controller is probably the preferable route, provided one exists that delivers the desired set of functionality, since these are typically offered with proven software and at lower cost than an FPGA. An FPGA, on the other hand, provides far more flexibility and greater opportunity for system customization and differentiation.
If a more integrated solution is deemed suitable (for example, because the application serves a well-defined, mature segment), then the system designer is again faced with the choice of using an application-specific standard product (ASSP), an FPGA, or both. If an ASSP delivers all of the functionality required for an application, it will offer the quickest time-to-market and the cheapest system solution. Remember, though, that every customer has access to exactly the same silicon. Customers taking a purely ASSP approach will therefore need to feel comfortable that they can deliver valuable differentiation (for example, through software) and that the investment in developing this differentiation is protected.
The emerging class of programmable system-on-a-chip (SoC) and structured ASIC devices combines fixed-function logic with programmable logic on a single piece of silicon. As the figure shows, this can offer some benefits over the use of an ASSP and a discrete FPGA. Whichever approach you select for a single-chip solution, a word to the wise: even if the segment is stable, at least give some thought to embedding an expansion capability. As an example, ensure that the selected solution has a path to industry-standard I/O interfaces. My mantra is that PCI, while expensive from a pin-count perspective, can be regarded as a “get out of jail free” card, since emerging technologies will typically target this interface in order to access the PC segment. Because of PCI's high pin count, the expansion bus of choice for lower-performance peripherals may become USB over time. Again, the expansion bus of choice does not need to be on the ASSP itself, but there has to be a credible path to access it through an external component.
Hopefully this approach has narrowed your list to two or three candidates. This is probably the appropriate point to discuss specifics of the devices under consideration with each vendor, carry out more detailed performance benchmarking, and probe the supplier's commitment to continue to deliver, innovate and support the product family.
Ian Ferguson is vice president and general manager for QuickMIPS Products at QuickLogic (Sunnyvale, Calif.)