Case Study: Choosing the Right Benchmarks for the Job

Submitted by BDTI on Tue, 10/23/2012 - 12:32

As embedded processors and applications become increasingly complex, good benchmarks are more important than ever. System designers need good benchmarks to judge whether a processor will meet the needs of their applications, and to make accurate comparisons among processors. Processor developers need good benchmarks to assess how their processors stack up against the competition, and to prove their processors' capabilities to customers.

But what exactly makes a good benchmark?

One obvious requirement is that the tasks performed by the benchmark must provide a good representation of the tasks performed by the application of interest. Trying to assess video compression performance using a packet-processing benchmark is like trying to weigh something using a thermometer.

A second characteristic of a good benchmark is less obvious, but equally important: The manner in which the benchmark is implemented must reflect the manner in which the application will be implemented. For example, if the application is going to be implemented by compiling plain C code with no optimization, then that's how the benchmark should be implemented. On the other hand, if the application is going to be carefully optimized for the target processor (taking into account memory organization, instruction set, pipeline, etc.), then the benchmark must also be carefully optimized for the target processor.
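To make the distinction concrete, here is a minimal sketch (not BDTI benchmark code) of a 16-bit fixed-point dot product written two ways in C: a plain version that relies entirely on the compiler, and a tuned version that uses restrict-qualified pointers and multiple accumulators, the kind of source-level restructuring that target-aware optimization typically starts with. Real target-specific optimization would go further, using the processor's intrinsics, SIMD or MAC instructions, and memory organization; the function names and the assumption that the length is a multiple of four are purely illustrative.

#include <stdint.h>
#include <stddef.h>

/* Plain C: straightforward, leaves everything to the compiler. */
int32_t dot_plain(const int16_t *a, const int16_t *b, size_t n)
{
    int32_t acc = 0;  /* assumes inputs are scaled so the sum fits in 32 bits */
    for (size_t i = 0; i < n; i++)
        acc += (int32_t)a[i] * b[i];
    return acc;
}

/* Tuned C: restrict tells the compiler the buffers don't alias, and
   four independent accumulators expose parallelism that a DSP's MAC
   units or SIMD lanes can exploit. Assumes n is a multiple of 4. */
int32_t dot_tuned(const int16_t * restrict a,
                  const int16_t * restrict b, size_t n)
{
    int32_t acc0 = 0, acc1 = 0, acc2 = 0, acc3 = 0;
    for (size_t i = 0; i < n; i += 4) {
        acc0 += (int32_t)a[i]     * b[i];
        acc1 += (int32_t)a[i + 1] * b[i + 1];
        acc2 += (int32_t)a[i + 2] * b[i + 2];
        acc3 += (int32_t)a[i + 3] * b[i + 3];
    }
    return acc0 + acc1 + acc2 + acc3;
}

Benchmarking the plain version on one processor and the tuned version on another would tell you more about the implementations than about the processors; a fair comparison implements the benchmark on each target the way the application would be implemented there.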

In embedded digital signal processing applications, performance-critical code is usually carefully optimized. This is true regardless of whether the target is a DSP processor, MCU, CPU, or something else. And it continues to be true even as applications grow larger, processors become more complex, and development teams shrink. Development teams have less time for optimization, but that doesn't mean performance-critical code goes unoptimized. Rather, the optimization is often done elsewhere: by the processor vendor, a third-party software component provider, or a consulting firm specializing in optimization.

Since the performance-critical application code is going to be optimized, a benchmark intended to represent processor performance in that application must also be optimized. For this reason, BDTI benchmarks are carefully optimized for each target processor. Sometimes BDTI does the optimization itself, and the processor vendor's engineers review BDTI's work to see whether further improvements are possible. In other cases the vendor's engineers do the optimization, and BDTI, as part of certifying the results, carefully reviews the code for additional optimization opportunities. Ensuring that the benchmark implementations are thoroughly optimized is what keeps the results meaningful.

Recently, BDTI was engaged by a processor vendor to perform a competitive analysis of processors targeting specific signal processing applications. Because several of the processors being studied were very new, little benchmark data was available, and what did exist was not always reliable. Using its detailed knowledge of signal processing algorithms, processor architectures, and optimization techniques, BDTI was able to separate the reliable data from the unreliable data. This enabled BDTI's client to obtain an accurate picture of how the competing processors stacked up, despite the limited data available.

If you're a processor supplier in need of objective competitive analysis, or credible evidence of your processor's advantages, contact Jeremy Giddings at BDTI (giddings@BDTI.com) to learn more about how BDTI can meet your needs. If you're a system or SoC designer seeking realistic, fair comparisons of processor alternatives, contact Jeremy to get access to BDTI benchmark results for dozens of processors.
