Higher mask costs and increasing minimum lot sizes, two economic trends in the semiconductor industry, are making FPGAs increasingly cost effective compared to competing ASIC solutions. As a result of these two trends, the market share of FPGAs and, along with it, the "value" of designs implemented in FPGAs continue to grow. As the FPGA design "value" increases, so does the need for "design security" in FPGAs. At the very least, the design community would like to duplicate in FPGAs the level of design security it had with ASIC technologies. This paper describes several distinct design security issues and concepts and contrasts the design security of the competing FPGA technologies (SRAM, antifuse, and Flash) with that of the incumbent ASIC technology. It also discusses a new business model enabled by the security capabilities of nonvolatile antifuse and Flash-based FPGAs.
2. Design Security Concerns
There are two distinct classes of design security needs (Figure 1):
- Intellectual property (IP) security: the designer wants to protect the design or IP in the FPGA or ASIC platform from being "cloned" or reverse engineered.
- Data security: the designer wants to prevent the data being sent to or from the FPGA or ASIC platform from being copied, corrupted, or otherwise interfered with.
Figure 1: Classes of Design Security Needs
IP security is the primary concern of companies or IP developers whose competitive advantage is derived from their ability to implement the design. It is also the primary concern for manufacturers of mid to high volume consumer electronics whose market share and profitability are eroded by "knock-off," "cloned," or counterfeit versions of their products.
Data security is the primary concern in cryptographic or financial applications. Users of such applications include the military (nuclear weapon systems or communication systems), financial institutions (automated bank tellers), consumer electronics manufacturers (pay TV and set-top boxes) and corporations sensitive to copyright infringement (game manufacturers).
IP security, the primary focus of this paper, depends upon three factors:
- The value of the design or secret being protected. This can range from nearly infinite (securing a trigger for a nuclear device) to very low (the stored value on a single phone card). Often the value has a time component (the market window for a consumer product or the usable life of a set-top box).
- The cost to implement security measures. These can vary from zero (do nothing) to very high (epoxy-encased circuit boards installed in locked chassis with round-the-clock surveillance, or a cryptographic key infrastructure).
- The expected cost to attack or defeat security measures. These can vary from very low (cost of copying a boot device for an SRAM FPGA) to prohibitive (evading surveillance to remove and reverse engineer an epoxy encased circuit board).
The cost the designer will incur to protect the design will be proportional to the value assigned to the design. The security achieved is proportional to the margin by which the cost to attack or defeat the security measures exceeds the value of the design being protected.
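The economics above can be sketched in a few lines of Python. The dollar figures below are hypothetical, chosen only to mirror the scenarios listed in the three factors; they are not drawn from any real product:

```python
def attack_is_economical(design_value: int, attack_cost: int) -> bool:
    """An attacker profits only if the expected cost to defeat the
    security measures is below the value of the protected design."""
    return attack_cost < design_value

# Hypothetical figures illustrating the two extremes described above.
scenarios = {
    "copy SRAM FPGA boot PROM":       {"design_value": 500_000, "attack_cost": 100},
    "reverse-engineer epoxied module": {"design_value": 500_000, "attack_cost": 2_000_000},
}

for name, s in scenarios.items():
    at_risk = attack_is_economical(s["design_value"], s["attack_cost"])
    print(f"{name}: {'at risk' if at_risk else 'secure'}")
```

The point of the sketch is simply that security is relative: the same design value can be "at risk" or "secure" depending on how expensive the attack is made.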
3. Defending a Design from Attack
The ability to reverse engineer an integrated circuit can be rated at three different levels, corresponding to varying degrees of security. Abraham et al.'s article, "Transaction Security System," in the IBM Systems Journal discusses the three levels:
Level I: Devices are insecure because they can be easily reverse engineered by a somewhat knowledgeable individual with low cost, easily accessible tools. These people are usually interested in end user products such as phone cards, debit cards, and set-top boxes.
Level II: Devices are moderately secure because reverse engineering can be done by a highly knowledgeable individual, often someone with inside knowledge and who has access to expensive lab equipment. Individuals involved in reverse engineering at this level are usually associated with a commercial enterprise such as a game copier.
Level III: Devices are highly secure, and reverse engineering can only be done by a government-supported lab with unlimited resources such as the NSA.1
ASICs are Secure to a Level II Attack
On its own, ASIC technology (standard cells and, to a lesser degree, gate arrays) is thought to be Level II. This technology has been employed in all of the security scenarios previously mentioned (military, financial, etc.). In applications requiring security from Level III capable attacks, additional measures such as epoxy encasing and explosive devices must be employed.
Methods of attacking ASIC technology, cited in Blythe et al.'s article "Layout Reconstruction of Complex Silicon Chips," involve expensive equipment and tools, including:
- Cleanly imaging and etching away successive layers of a device and post-processing the images to render clean polygon images of the circuitry. This technique was employed to reverse engineer an Intel 80386 in two weeks.2
- Building on the above techniques, Chipworks Inc. has developed software to automatically generate circuit schematics from the polygon images.
SRAM FPGAs are Susceptible to a Level I Attack
As mentioned earlier, market forces are enabling FPGA technology to continually capture market share from competing ASIC technologies. However, as the value of the designs implemented in FPGAs increases, the security limitations of the dominant SRAM-based FPGA technology begin to limit its market penetration potential. These limitations are well known: the devices are easily cloned by copying the bitstream sourced to the SRAM FPGA by either a nonvolatile boot PROM or a microprocessor (Figure 2).3 This corresponds to a Level I attack.
Some SRAM FPGA manufacturers have acknowledged this limitation by incorporating a defense against this cloning attack in their latest generation devices. This defense consists of an on-chip bitstream decryption engine with an on-chip key that is loaded into battery-backed on-chip memory by the board manufacturer (Figure 3). The bitstream loaded into the boot PROM can then be encrypted and is therefore not usable for cloning without knowledge of the on-chip key. While this defense is effective, it comes with significant costs to implement, including:
- Cost to implement and maintain an encryption key database or infrastructure at the board manufacturer.
- Reliability cost due to the battery-backed key storage mechanism. If the battery fails in the field, the board will fail.
Figure 2: Cloning an SRAM FPGA
Figure 3: SRAM FPGA with On-Chip Bitstream Decryption
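The principle behind the decryption defense can be illustrated with a toy model. The sketch below uses a simple SHA-256-based XOR keystream purely for illustration; it is not the cipher any FPGA vendor actually uses, and the key and bitstream contents are invented:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by hashing the key with a
    counter (a toy stream cipher, not a vetted construction)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt_bitstream(bitstream: bytes, key: bytes) -> bytes:
    """XOR the bitstream with the keystream; decryption is the
    same operation applied again with the same key."""
    ks = keystream(key, len(bitstream))
    return bytes(b ^ k for b, k in zip(bitstream, ks))

key = b"battery-backed on-chip key"     # held only inside the FPGA
plain = b"example FPGA configuration bitstream"
stored = encrypt_bitstream(plain, key)  # what the boot PROM holds

assert stored != plain                          # copying the PROM yields only ciphertext
assert encrypt_bitstream(stored, key) == plain  # the FPGA recovers the design internally
```

An attacker who clones the boot PROM obtains only `stored`; without the on-chip key, the copied bitstream configures nothing, which is exactly the property the on-chip decryption engine provides.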
Nonvolatile Flash and Antifuse FPGAs are more Secure than ASICs
In contrast to easily cloned SRAM FPGAs, there are two nonvolatile FPGA technologies that are even more secure than competing ASIC technologies: antifuse-based FPGAs and Flash-based FPGAs. These two technologies derive their security from:
- Nonvolatility, which enables them to be configured before they are shipped to the end-user. Unlike SRAM technology, there is no bitstream that can be intercepted.
- Difficulty in determining the state (on or off) of the programming elements on a programmed part. In contrast with easily visible vias on an ASIC, it is very difficult to determine whether a given programmable antifuse or Flash switch element is on or off.
- A large number of switch elements (millions on the largest devices). Given that the state of a single switch is difficult to determine, trying to determine the state of millions is prohibitive.
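The third point is easy to quantify: for N independent two-state switches there are 2^N possible configurations, so even a modest device puts exhaustive analysis far out of reach. A quick back-of-the-envelope calculation:

```python
import math

# Number of decimal digits in 2**N, computed without forming the
# huge integer itself: digits = floor(N * log10(2)) + 1.
for n_switches in (1_000, 1_000_000):
    decimal_digits = math.floor(n_switches * math.log10(2)) + 1
    print(f"{n_switches:>9} switches: 2^{n_switches} "
          f"(~{decimal_digits} decimal digits) possible configurations")
```

Even at only a thousand switches the configuration count is a 302-digit number; at a million switches it has over 300,000 digits, which is why determining switch states one by one is the only viable approach, and why making each individual determination difficult is an effective defense.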
Direct Physical Attack of Antifuse FPGAs
As previously stated, determining the state of an antifuse is exceedingly difficult. Antifuse-based FPGAs use a small piece of dielectric, usually smaller than 1 µm square, as an open switch between two metal lines. Where a connection between two metal lines is desired, a programming pulse is used to short out the dielectric. This short is less than 100 nanometers in diameter. These shorts are not visible when viewed from the top. Therefore, in order to physically identify them, it is necessary to de-process or cross-section the devices. Rather than being a precise method, this involves trial and error and typically requires that several cross-sections be done to find just a single link shorting out the dielectric (Figure 4).
Direct Physical Attack of Flash-based FPGAs
As with antifuse-based FPGAs, Flash-based FPGAs utilize switches to connect and disconnect intersecting metal lines. A single floating gate is charged or discharged to set the state of a switch that connects two metal lines (Figure 5). Since there is no physical change in the programming device or switch device, there is nothing to detect by any material analysis; there is only a change in the number of electrons on the floating gates. Because there is no observable change in the Flash-based switch when programmed, a Flash-based FPGA is more difficult to reverse engineer than even an antifuse FPGA.
Figure 4: Cross Section of a Programmed Antifuse
Note: The programming device is on the left and the switching device is on the right.
Figure 5: Schematic View of the Flash-Based FPGA Switch
4. Other Methods of Attack
It has been shown that ASICs can be physically attacked in a straightforward manner that relies on the visibility of connections or vias between metal layers. It has also been shown that such a direct attack is extremely difficult, if not impossible, against antifuse or Flash-based FPGAs due to the difficulty of physically observing the state of millions of switches. Several other advanced methods of attack have been developed that may be employed against any technology, but at considerable expense and/or difficulty:
- IBM developed a very advanced technology that allows one to actually look at the logic states of the metal lines.4 This is accomplished by placing a crystal of lithium niobate over the feature whose voltage is to be monitored. The refractive index of this substance varies with the applied electric field, and the potential of the underlying metal can be read out using an ultraviolet laser beam passed through the crystal at grazing incidence. This technique allows a 5.0V signal of up to 25 MHz to be read.
- Another technique (recently declassified) developed at Sandia Laboratories utilizes an infrared laser to which the silicon is transparent. It is then possible to shine the laser from the backside to induce photocurrents that are affected by the logic state and thus determine the logic state of a specific transistor.5
Finally, both antifuse and Flash-based devices are architecturally designed to prevent attack on a programmed device with a programmer or by other electronic means (Figure 6). Both contain circuitry to lock the device by disabling the programming and readback capabilities after configuration. Care has been taken in the design to make the locking circuitry difficult to defeat through either electronic or direct physical attack. In antifuse FPGAs, as fuses are programmed, it becomes impossible to uniquely address previously programmed fuses, making programming essentially a one-way function. Thus the architecture of the antifuse-based devices makes electronic readback impossible, even if the locking mechanism is not used.
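The locking behavior described above can be modeled in a few lines. The class and method names below are invented for illustration; real devices implement this lock in silicon, not software:

```python
class NonvolatileFPGA:
    """Toy model of the security lock: once the lock is set after
    configuration, programming and readback are both refused."""

    def __init__(self):
        self.config = None
        self.locked = False

    def program(self, bitstream: bytes) -> None:
        if self.locked:
            raise PermissionError("programming disabled by security lock")
        self.config = bitstream

    def lock(self) -> None:
        self.locked = True   # one-way: the model provides no unlock

    def readback(self) -> bytes:
        if self.locked:
            raise PermissionError("readback disabled by security lock")
        return self.config

fpga = NonvolatileFPGA()
fpga.program(b"customer design")
fpga.lock()                  # done before the device ships
try:
    fpga.readback()          # a programmer attack on a shipped unit
except PermissionError:
    print("readback refused")
```

The one-way nature of antifuse programming goes a step further than this model: even without the lock, previously programmed fuses can no longer be uniquely addressed, so there is no electronic path to the configuration at all.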
5. Unique Business Models with Secure Nonvolatile FPGAs
The last decade has seen a dramatic shift in the semiconductor industry from a few vertically integrated companies to dozens of fabless semiconductor companies using the services of silicon foundries. The next big shift expected was continued vertical fragmentation, with hundreds of design services and intellectual property companies providing their services to system-level architects and integrators. That expectation has failed to materialize, and the design services and intellectual property providers are still struggling with how to capture their share of the value chain.
Figure 6: Defense Against a Programmer Attack
Secure nonvolatile FPGAs offer solutions to overcome two large barriers in this struggle. The first barrier is simply security. The concern a design services company has in protecting its intellectual property sets in motion legal and administrative activities and costs that, in the end, prove too high to justify in all but the largest engagements. The industry has spent considerable effort developing design encryption schemes that protect the design at the netlist level, but the design is still exposed to a trivial cloning attack if implemented in an SRAM FPGA. In contrast, a nonvolatile antifuse or Flash-based FPGA prevents such an attack and offers even more security than an ASIC implementation.
The second dampener on the industry is that the design services company has no easy and trustworthy way to charge a royalty for its services. All the money must be made in up-front licensing fees that, again, can only be justified in the largest engagements. With secure nonvolatile FPGA technologies, the design services company can become a virtual ASIC company and ship pre-programmed FPGAs with a nominal mark-up or royalty charge above the cost of the unprogrammed FPGA. If the FPGA vendor is trusted with the programming files, end customers can order pre-programmed units from the FPGA vendor, who takes care of charging the end customer for the mark-up and forwarding the mark-up amount to the design services provider (Figure 7). This flow eliminates operational costs for the design services company while allowing it to make a nominal amount on each unit rather than trying to charge for the entire engagement up front.
Figure 7: Using Nonvolatile FPGAs to Secure Royalty Streams
6. Conclusion
As the complexity, capabilities, and market share of FPGAs increase with respect to competing ASIC technologies, so does the need to secure the designs implemented in FPGAs. SRAM FPGAs are inadequate in this regard, as they are exposed to a Level I cloning attack. Nonvolatile antifuse and Flash FPGAs, on the other hand, are even more secure against attack than the ASIC technologies they are replacing and therefore satisfy an increasingly important market requirement. Furthermore, the programmability and security provided by these technologies can be employed to solve a latent need in the semiconductor industry: the need for design services and intellectual property suppliers to claim their share of the value chain by charging a royalty over the life of the design rather than having to capture all the value in up-front licensing.
1. Abraham, D.G., G.M. Dolan, G.P. Double, and J.V. Stevens. “Transaction Security System,” IBM Systems Journal vol. 30 no. 2 (1991): 206-229.
2. Blythe, S., B. Fraboni, S. Lall, H. Ahmed, and U. de Riu. “Layout Reconstruction of Complex Silicon Chips,” IEEE Journal of Solid-State Circuits vol. 28 no. 2 (February 1993): 138-145.
3. Algotronics Consulting. “Secure Configuration of Field Programmable Gate Arrays.”
4. Wiesenfeld, J.M. “Electro-optic Sampling of High-Speed Devices and Integrated Circuits,” IBM Journal of Research and Development vol. 34 no. 2/3 (March/May 1990): 141-161. See also subsequent articles in the same issue.
5. Ajluni, C. “Two New Imaging Techniques Promise to Improve IC Defect Identification,” Electronic Design vol. 43 no. 14 (10 July 1995): 37-38.
©2003 Actel Corporation All Rights Reserved. Actel and the Actel logo are trademarks of Actel Corporation. All other brand or product names are the property of their respective owners.