As more traffic, including voice and e-commerce, migrates to Internet Protocol-based networks, it has become essential to ensure that the systems making up this important infrastructure are reliable. The reliability target for today's Internet processing engines is what's called "five nines": up 99.999 percent of the time.
As part of our product development, Cisco Systems' high-end core router group routinely integrates error-correcting circuits (ECCs) into designs. These blocks of code, which are commonly used in networking, perform the critical functions of verifying and correcting data streams. Once developed, the ECCs are often used multiple times in a single design and can become part of numerous high-end routers.
The critical nature of the ECCs makes functional verification crucial. This article describes how our design group uses symbolic simulation, a semiformal verification technique, to verify our ECCs. It covers the application of symbolic simulation to the two types of ECC blocks most commonly used by the high-end router group: (255,251) Reed-Solomon (RS) code and distance-3 Hamming code. Both codes are single-error-correcting and double-error-detecting.
We were directed to apply formal techniques to rigorously prove the correctness of our ECC blocks. However, ECCs are complex and difficult to formally verify. For example, the RS encoder/decoder has complex feedback loops, numerous memory elements/flops and arithmetic transformations.
Binary simulation, which is tightly embedded into our group's verification flow, allows only limited coverage, leaving many "holes" in the verification process. Formal proof would have required exhaustive simulation, involving trillions of vectors, which would have taken years to complete.
Model checking was considered as an alternative. However, model checkers are limited by the number of state elements in the design, require a proprietary language and can be difficult to integrate into an existing design flow. We believe that model checkers are useful for state machines and random logic but not suited to data-oriented designs like ECCs, which encode and decode data streams. Although the number of flip-flops is large, the state space they represent is sparse.
Equivalence checking is another popular formal way to verify an implementation against its specification. This technique was not considered because we were concerned with the formal verification of the specification itself. Our inputs were register-transfer-level (RTL) implementations of the encoding and decoding algorithms of the ECC codes.
We learned about a novel semiformal verification method called symbolic simulation. The methodology was first described by IBM in 1979 and, in 1985, a professor at Carnegie Mellon University created the first working, bit-level symbolic simulator. The technology is being used in internally developed symbolic simulators at Intel and other companies.
Symbolic simulation does not explore the state space, is not limited by the number of flops and is considered very effective for data-oriented designs. It is the enabling technology for InnoLogic Systems' ESP tool, which Cisco Systems purchased and used for the verification described in this article.
Symbolic simulation allows designers to simulate multiple binary vectors (or vector streams) together as one simulation. This is achieved by allowing the simulator to accept "symbols" as inputs, just like binary 0 or 1. These symbols represent both values 0 and 1 simultaneously on the input to which they are applied. Like binary simulation, symbolic simulation requires a testbench to apply input stimulus and check output results. The only difference is that symbols need to be applied as inputs wherever desired in a symbolic simulation testbench. The advantage of symbolic simulation is that 2^n (where n is the number of symbols) equivalent binary vectors are investigated simultaneously.
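To make the idea concrete, here is a toy sketch of one way a simulator can carry all 2^n assignments through the logic at once. This is an illustration only, not ESP's implementation, and all names (`symbol`, `AND`, `XOR`) are hypothetical: each signal holds a truth table over the input symbols, so one bitwise operation evaluates a gate under every assignment in parallel.

```python
# Toy symbolic simulation sketch (hypothetical helpers, not ESP's API).
# Each signal is an integer truth table: bit i is the signal's value under
# the i-th of the 2**N_SYMBOLS assignments to the input symbols, so one
# bitwise operation evaluates a gate under every assignment simultaneously.

N_SYMBOLS = 3
N_ASSIGN = 1 << N_SYMBOLS                 # 2**n equivalent binary vectors
ALL_ONES = (1 << N_ASSIGN) - 1            # the constant-1 signal

def symbol(k):
    """Truth table for input symbol k over all 2**n assignments."""
    t = 0
    for i in range(N_ASSIGN):
        if (i >> k) & 1:
            t |= 1 << i
    return t

def AND(x, y): return x & y
def OR(x, y):  return x | y
def XOR(x, y): return x ^ y
def NOT(x):    return ALL_ONES & ~x

a, b, c = symbol(0), symbol(1), symbol(2)

# One symbolic run checks a full adder's carry against an alternative
# implementation under all 2**3 = 8 input combinations at once.
carry_ref = OR(OR(AND(a, b), AND(a, c)), AND(b, c))
carry_alt = XOR(XOR(AND(a, b), AND(a, c)), AND(b, c))
assert carry_ref == carry_alt             # equal on every assignment
```

A production symbolic simulator represents these expressions with structures such as binary decision diagrams rather than explicit truth tables, but the observable effect is the same: one run stands in for 2^n binary runs.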
Verification of the RS encoder is straightforward. Symbolic simulation allows easy verification by applying symbols at the data inputs and checking for a legal code word at the output. The verification of the RS decoder, however, is more interesting.
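The (255,251) RS arithmetic is too long to reproduce here, but the shape of the encoder check can be sketched with the smaller distance-3 Hamming code mentioned above. The (7,4) construction below is the textbook one, not our RTL; the check is the same in spirit: every encoded word must satisfy the parity-check equations, i.e., produce a zero syndrome.

```python
# Sketch of the encoder-side check using a textbook (7,4) Hamming code.
# Positions 1..7 carry parity bits at positions 1, 2, 4 and data bits at
# positions 3, 5, 6, 7.

def hamming74_encode(d):
    """Encode a 4-bit data word into a 7-bit codeword (bit list, pos 1..7)."""
    db = [(d >> i) & 1 for i in range(4)]          # data bits d0..d3
    p1 = db[0] ^ db[1] ^ db[3]                     # covers positions 1,3,5,7
    p2 = db[0] ^ db[2] ^ db[3]                     # covers positions 2,3,6,7
    p4 = db[1] ^ db[2] ^ db[3]                     # covers positions 4,5,6,7
    return [p1, p2, db[0], p4, db[1], db[2], db[3]]

def syndrome(cw):
    """XOR of the positions of all set bits; zero means a legal code word."""
    s = 0
    for pos, bit in enumerate(cw, start=1):
        if bit:
            s ^= pos
    return s

# A symbolic run with four data symbols collapses these 16 binary vectors
# into one simulation; here we simply enumerate them.
assert all(syndrome(hamming74_encode(d)) == 0 for d in range(16))
```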
We divided the input space of the decoder into 1) legal code words, 2) legal code words with exactly one error, 3) legal code words with exactly two errors and 4) the rest of the input space. We then created a symbolic testbench for the first three scenarios. Some work was required to constrain a set of input symbols to the required input space in each case.
Since the code is single-error-correcting, the first two testbenches check that the legal-code-word output is always 1, and the third testbench checks that this output is always 0 (i.e., no double error is erroneously detected as a legal code word). Additionally, for the first two testbenches, expected legal code words are precalculated in terms of the input symbols. The decoded output is checked against those expected legal code word expressions. Fig. 1 illustrates our verification flow.
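As a sketch of what those three testbenches assert, here is a toy software model. It is not our RTL, and it substitutes an extended (8,4) SEC-DED Hamming code (a distance-4 variant that supports the double-error check) for the RS code; the `decode` flag below plays the role of the legal-code-word output.

```python
import itertools

# Toy model of the three decoder testbenches, using an extended (8,4)
# SEC-DED Hamming code: 7 Hamming bits plus an overall parity bit.
# decode() returns (legal_flag, data).

def encode(d):
    db = [(d >> i) & 1 for i in range(4)]
    p1 = db[0] ^ db[1] ^ db[3]
    p2 = db[0] ^ db[2] ^ db[3]
    p4 = db[1] ^ db[2] ^ db[3]
    cw = [p1, p2, db[0], p4, db[1], db[2], db[3]]
    cw.append(cw[0] ^ cw[1] ^ cw[2] ^ cw[3] ^ cw[4] ^ cw[5] ^ cw[6])
    return cw

def decode(cw):
    cw = list(cw)
    s = 0                              # Hamming syndrome over positions 1..7
    for i in range(7):
        if cw[i]:
            s ^= i + 1
    p = 0                              # overall parity over all 8 bits
    for bit in cw:
        p ^= bit
    if p:                              # odd error count: correct the single error
        if s:                          # (s == 0 means the parity bit itself
            cw[s - 1] ^= 1             #  flipped; data is unaffected)
        legal = True
    else:
        legal = s == 0                 # even parity, s != 0: double error
    if not legal:
        return False, None
    data = cw[2] | cw[4] << 1 | cw[5] << 2 | cw[6] << 3
    return True, data

for d in range(16):
    # Testbench 1: legal code words decode intact, flag stays 1.
    assert decode(encode(d)) == (True, d)
    # Testbench 2: every single-bit error is corrected, flag stays 1.
    for i in range(8):
        cw = encode(d); cw[i] ^= 1
        assert decode(cw) == (True, d)
    # Testbench 3: every double-bit error drops the flag to 0.
    for i, j in itertools.combinations(range(8), 2):
        cw = encode(d); cw[i] ^= 1; cw[j] ^= 1
        assert decode(cw)[0] is False
```

In the symbolic flow, the explicit loops over data words and error positions are replaced by symbols constrained to each input partition, so each of the three testbenches is a single simulation run.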