The current trend is to develop embedded systems using platforms common to many different applications spanning multiple industries, particularly telecommunications. But the advantage gained by using embedded platforms is realized only if the platforms are built to support the requirements of the system.
The wide variation in configurations and supported applications poses a challenge in certifying that embedded platforms are robust and that they perform with no single point of failure under most stress conditions. Testing such platforms adequately within business constraints requires building a test strategy into the overall development process.
When testing an end system, the common approach is twofold: first, define test plans and test cases for the various functions supported by the system, and second, conduct testing based on how an end user will operate the system. Special attention is also required for testability at the platform level to avoid problems in the field. Designing for testability presents new challenges for platforms compared with the end systems built on them.
Consider a network element such as an optical cross-connect with a number of line cards supporting different rates and protocols. In building such a system, the functional requirements are specified in detail, and typical configurations for the system are known in most cases. Therefore, testing the embedded end system can be done with a comprehensive set of regression tests, with test coverage for most of the expected configurations.
Testing the same embedded system based on a flexible platform becomes significantly more complex. One can define a typical configuration for the expected applications; however, the flexibility of the platform opens up choices not usually available in an end system. Testing for all applications and configurations is not realistic, so a method for bounding the problem is necessary. One method is to identify a number of use cases that meet the needs of a typical application, with each use case specifying a number of scenarios against a sample set of configurations.
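As a sketch of this bounding approach, each use case can be modeled as a set of scenarios that point into a small sample set of configurations, so test coverage is measured against the sample set rather than the full configuration space. The names and the media-gateway example below are illustrative assumptions, not from the article:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Scenario:
    name: str
    configuration: str  # reference into a small sample set of configurations

@dataclass
class UseCase:
    name: str
    scenarios: List[Scenario] = field(default_factory=list)

# Hypothetical use case: a media-gateway application on the platform
uc = UseCase("media_gateway")
uc.scenarios.append(Scenario("steady_state_traffic", "2 shelves, 4 DSP modules"))
uc.scenarios.append(Scenario("failover_on_module_pull", "2 shelves, 4 DSP modules"))
uc.scenarios.append(Scenario("single_shelf_minimum", "1 shelf, 1 DSP module"))

# Only the sample configurations are exercised, not every possible one
configs = {s.configuration for s in uc.scenarios}
print(len(uc.scenarios), len(configs))  # 3 2
```

The point of the structure is that adding a scenario forces a choice of which sample configuration it runs against, keeping the tested space explicit and bounded.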
The use cases also include the expected behavior under both normal operating conditions and exception conditions. The exception conditions are further exercised in test cases by performing what is referred to as negative testing.
Consider an example test configuration in which two blade-server platforms are connected and a subset of the modules is selected. With five types of modules, two shelves and 14 slots in each shelf, the configuration combinations that would have to be tested to provide exhaustive test coverage could be on the order of 10^9, assuming all slots are universal. This is clearly impractical, so alternative means are necessary to provide the requisite test coverage.
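The explosion is easy to verify with a back-of-the-envelope count. The counting rule below is an assumption, since the article does not state one: each of the 14 universal slots in a shelf holds exactly one of the five module types.

```python
# Back-of-the-envelope configuration count. Assumption (not stated in
# the article): every slot is universal and holds exactly one of the
# five module types.

MODULE_TYPES = 5
SLOTS_PER_SHELF = 14

per_shelf = MODULE_TYPES ** SLOTS_PER_SHELF
both_shelves = per_shelf ** 2

print(f"one shelf:   {per_shelf:.1e}")    # ~6.1e+09, on the order of 10^9
print(f"two shelves: {both_shelves:.1e}") # ~3.7e+19
```

Even the single-shelf count under this assumption lands around 10^9; allowing empty slots or counting both shelves only makes the space larger, which is why exhaustive coverage is ruled out.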
An alternative approach is to incorporate a methodology based on functional requirements. The platform is tested to verify the following as a minimum:
- Functional requirements under normal conditions;
- Functional requirements under operational stress and fault conditions;
- Functional and thermal requirements under environmental stress conditions;
- Mechanical verification under stress conditions; and
- Regulatory compliance verification.
When testing embedded systems, OEMs must also take into account both the environmental factors and the application. Systems involved in life sciences will need additional testing to reflect many of the safety aspects needed for use on people. Similarly, mission-critical systems and military systems will require different aspects of testing.
While open architectures and standards do not reduce or eliminate the testing burden, they do promote excellent economies of scale and allow designers to leverage standard embedded platforms that are thoroughly tested and validated for the right application.
Venkataraman Prasannan (firstname.lastname@example.org) is senior director of product line management and Lakshmi Raman is senior director, system engineering for RadiSys Corp. (Hillsboro, Ore.).
See related chart