By Vincent Perrier, Co-Founder and Director, CoFluent Design
This article defines system architecting and explains its importance in today’s complex designs. System architecting is placed in the context of industry practices and trends in electronic system design, and examined through the lens of architectural exploration and prospective performance analysis. Once system architecting is defined, its requirements are listed and a solution is proposed in the form of the CoFluent Studio SDE (System Design Environment).

Technological mutations and market pressure
Electronic devices are pervasive in our everyday lives for communications, entertainment, automation, transport, security and other purposes.
The electronics industry is an incredibly fast-changing and competitive environment, driven by perpetual demand for novelty and technological progress. As electronic manufacturers develop rich features to differentiate new products, devices take on more and more functions and grow more and more complex. Their technologies are characterized by ever-increasing integration, miniaturization and performance, along with other physical constraints.
Profitability imperatives create a tough economic context in which companies need to address new markets with maximum differentiation while continuing to deliver innovative, cheaper products in shorter times.
The difficulties that electronic system developers face are well known. They result from the two contradictory driving forces described above: increasing complexity on one hand, and decreasing time and costs on the other.

The rise of performance as a critical problem in system design
Architectural complexity, and the issues that come with it, rise quickly when designing electronic systems, driven by the following factors:
- Extraordinarily complex functionality is implemented in systems
- Complete hardware and software multiprocessor systems are integrated into a single silicon circuit called System-on-Chip (SoC)
- Systems integrate an ever-increasing number of external ready-to-use components (or IP blocks)
- Interrelated components in a system, as well as systems themselves, communicate with each other
Architectural design is not only about choosing and assembling components to create a coherent system; it is also about making sure that the chosen architecture is the right one, capable of successfully delivering the system’s functionality while respecting other constraints such as timing, cost, power consumption, etc.
Mastering the architecture of a system requires analyzing its performance for specification (when the system does not exist yet) or verification (when the system is already designed) purposes.
As a system is a set of interconnected components, performance has to be studied at 3 levels:
- Inter-components (communications)
At each level, performance criteria have to be defined, and for each criterion, acceptance levels have to be set. Performance criteria vary depending on the nature of the observed component; they can relate to a workload or to any execution parameter (e.g. speed).
For example, a component such as a software processor (running an RTOS) can be characterized, from a performance point of view, by its utilization rate (with an acceptance level of, say, less than 80%). A communications node such as a bus can be characterized by its throughput, and a shared memory by its read/write latencies.

Defining system architecting
System architecting implies defining the system architecture in terms of the number and nature of components in the system. It encompasses adequate system partitioning (defining which parts of the system should be implemented in hardware or software) and dimensioning (defining performance constraints for hardware and software parts of the system and their interconnections).
This definition implies several important considerations:
- Designers must find a way to describe their system and its constitutive parts with no preconceived idea of their final implementation (hardware or software)
- Designers must find a way to analyze and set performance constraints on their system at the 3 levels previously identified
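As an intuition for what setting and checking performance constraints can look like, here is a minimal sketch. Every workload, capacity and acceptance level below is a hypothetical figure, not a value taken from any real design or from CoFluent Studio:

```python
# Illustrative performance-criteria checks; all figures are hypothetical.

def processor_utilization(task_loads_mips, cpu_capacity_mips):
    """Utilization rate of a software processor running an RTOS."""
    return sum(task_loads_mips) / cpu_capacity_mips

def bus_throughput(bytes_transferred, window_s):
    """Average throughput of a communications node, in bytes per second."""
    return bytes_transferred / window_s

# Three tasks mapped on a 200-MIPS processor: utilization is 0.625,
# which respects an acceptance level of "less than 80%".
utilization = processor_utilization([40, 55, 30], 200)
assert utilization < 0.80

# 48 MB moved over a 1-second observation window on a shared bus.
throughput = bus_throughput(48e6, 1.0)
```

In a real flow the same idea applies per level: each component, interconnection and the system as a whole gets its own criteria and acceptance thresholds.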
When architecting a system, 3 scenarios can occur:
- The system hardware is already defined: it’s either a ready-to-use commercial platform or a reused (in-house or external) design
- The system hardware does not exist yet and has to be designed from scratch
- A mix of the two above: the system hardware is derived from an existing design
In the first scenario, the system architecting activity consists of optimally allocating system parts to the existing programmable and configurable hardware components.
In the second and third scenarios, full system partitioning and dimensioning are required. In addition, designers can choose among multiple architecture options. Experience, design by analogy and, very often, non-technical (economic, commercial, etc.) imperatives guide designers in their initial architectural decisions. But at some point, the only viable way to reach a complete architecture is to explore the design space by trial and error and execute “what-if” scenarios. We call this process architectural exploration.

Application vs. platform design: different sorts of performance analysis
It is important to differentiate prospective performance analysis, which helps define and specify the performance constraints of a future system, from confirmative performance analysis, which helps verify that the performance constraints of an existing design are respected.
System architecting implies that the system, or at least part of the application running on a platform that already exists or will be derived from an existing one, is not defined yet. This necessarily means that the design process follows a top-down flow, starting from high-level requirements and moving down to detailed design solutions.
This approach has to be differentiated from others (such as platform-based design or component-based design) that aim at creating reusable platforms and usually follow a bottom-up process, aggregating existing components (reused IP blocks) together.
Thus, in a simplified view, we can divide system design into two distinct processes: the bottom-up design of the platform and the top-down design of the application. The first process aims at offering a development platform to application developers, while the second aims at deploying an application on a platform. These two processes necessarily meet at some point, when the platform is ready to host an application and the application is ready to be hosted on a platform. We call that point the “meet-in-the-middle point”. This is where performance analysis and architectural exploration take place.
During application design, prospective performance is analyzed for system specification purposes. During platform design, confirmative performance is analyzed for system verification purposes.

Making prospective performance analysis and architectural exploration possible
Let us now consider the essential factors that make prospective performance analysis and architectural exploration possible.
A design process consists of a series of steps that refine the models representing the system at hand, until physical implementation is possible. This is a classical application of the divide-and-conquer method, which takes designers from high-level requirements down to the bits and bytes of electronics. Each design step corresponds to a different model used to represent the system, and each model is characterized by its own level of abstraction.
The meet-in-the-middle point (meeting point between application design and platform design) is defined by the common level of abstraction used to describe the application and platform models.
Before choosing whether a part of a system should be implemented in software or hardware, designers have to represent what the system does (its functions) with no consideration whatsoever of its nature (software or hardware). The functions of a system can be designed and verified independently of any technological considerations by creating a functional model. Functional design helps designers concentrate on the application, free from the limitations induced by physical considerations.
Architectural exploration is possible if the same functions can be simulated on different hardware platforms, which implies defining, separately from the functional model, a model representing the system hardware that we call the executive structure. Architectural design results in a description of the complete system architecture. It is complete when functions are mapped onto elements of the executive structure as software or hardware parts. Hence, architectural exploration consists of trying different mapping strategies on multiple executive structures.
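The separation described above can be sketched with three pieces of data: a functional model, an executive structure, and a mapping that ties them together. All names below are hypothetical, chosen only to illustrate how a function’s software or hardware nature falls out of the mapping:

```python
# Functional model: functions and their relations, with no
# hardware/software commitment whatsoever.
functional_model = {
    "functions": ["capture", "encode", "store"],
    "relations": [("capture", "encode"), ("encode", "store")],
}

# Executive structure: the hardware elements of one candidate platform.
executive_structure = {
    "cpu0":  {"kind": "processor"},  # programmable: hosts software parts
    "codec": {"kind": "asic"},       # dedicated logic: hosts hardware parts
}

# Mapping: the architectural decision currently under exploration.
mapping = {"capture": "cpu0", "encode": "codec", "store": "cpu0"}

def implementation(fn):
    """The SW/HW nature of a function derives from where it is mapped."""
    kind = executive_structure[mapping[fn]]["kind"]
    return "software" if kind == "processor" else "hardware"
```

Exploring the design space then amounts to re-running the same functional model against other mappings or other executive structures.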
Functional and architectural designs can be represented in what is called the “Y” development cycle.
Introducing the message abstraction level
- For this mapping to be possible, functional and executive structure models have to be defined at the same abstraction level. As architectural design marks the end of application design, it takes place at the meet-in-the-middle point. This dictates a first constraint on the meet-in-the-middle abstraction level: it has to be high enough to allow technology-independent, implementation-free functional design.
- In addition, the meet-in-the-middle abstraction level has to be high enough to allow an interactive architectural exploration process, i.e. the capacity for a designer to simulate a complete model, get performance results quickly, make changes in the system architecture, and enter a new observation cycle. Architectural exploration requires a short simulation cycle and the ability to change the architectural model easily and quickly; it has to be supported by interactive tools in a way that resembles a software edit/compile/debug cycle. Abstraction levels that are too low make such requirements impossible to meet.
- Last, the meet-in-the-middle abstraction level has to be low enough to enable fully timed behavioral simulation delivering sufficiently detailed results, including performance measures within 5% of real-world behavior.
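The simulate/observe/modify cycle these requirements describe can be sketched as a loop over candidate architectures. This is a deliberately naive illustration: the function loads, processor capacities and the 80% acceptance threshold are hypothetical numbers, and "simulation" is reduced here to a simple utilization estimate:

```python
from itertools import product

# Hypothetical application functions with their loads (MIPS), and
# candidate processing elements with their capacities (MIPS).
functions = {"decode": 60, "filter": 45, "display": 25}
processors = {"cpu0": 200, "dsp0": 120}
ACCEPTANCE = 0.80  # maximum tolerated utilization rate per processor

def feasible(mapping):
    """Keep a mapping only if every processor stays under the limit."""
    load = dict.fromkeys(processors, 0)
    for fn, pe in mapping.items():
        load[pe] += functions[fn]
    return all(load[pe] / processors[pe] < ACCEPTANCE for pe in processors)

# "What-if" scenarios: every possible allocation of functions to
# processors, filtered down to the architecturally viable ones.
candidates = [dict(zip(functions, alloc))
              for alloc in product(processors, repeat=len(functions))]
viable = [m for m in candidates if feasible(m)]
```

A real exploration tool replaces the exhaustive loop and the crude load estimate with fast timed simulation, but the shape of the cycle is the same: try a mapping, measure, adjust, try again.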
Today’s platform-based design co-simulation environments are moving from the register transfer level (RTL) up to the transaction level. Transaction-level modeling (TLM) requires introducing technology-dependent (address-based) information into a functional model. Hence, the transaction level cannot be the meet-in-the-middle abstraction level, as it breaks the first requirement stated above.
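The difference can be illustrated by contrasting two toy channel interfaces (both are hypothetical sketches, not TLM or CoFluent APIs): the transaction-level one cannot be used without an address map, which is a platform detail, whereas the message-level one carries data with no addressing at all:

```python
class TransactionChannel:
    """Transaction level: reads and writes are address-based, so the
    model already depends on a technology choice (the memory map)."""
    def __init__(self):
        self.memory = {}
    def write(self, address, word):
        self.memory[address] = word
    def read(self, address):
        return self.memory[address]

class MessageChannel:
    """Message level: typed messages flow between functions with no
    notion of address, keeping the functional model implementation-free."""
    def __init__(self):
        self.queue = []
    def send(self, message):
        self.queue.append(message)
    def receive(self):
        return self.queue.pop(0)
```

A functional model written against the message-style interface can later be mapped onto any executive structure; one written against the transaction-style interface has already committed to a memory map.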
With the MCSE methodology and the CoFluent Studio toolset, CoFluent Design introduces the message abstraction level, above the transaction level.
Any kind of model always describes 3 types of views of a system: structural/organizational, behavioral and communications. The most significant view for characterizing an abstraction level is the communications view.
There are typically 5 levels of abstraction for communications: service, message, transaction, transfer and register transfer (or logic/gate).* Each abstraction level corresponds to a communications model, defined by its time accuracy/dependency and its types of media, addressing, data and protocol.
The table below gives an overview of the different communications abstraction levels. *Abstraction level definitions and table courtesy of Jean-Paul Calvez and Gabriela Nicolescu, excerpted from chapter 2, "Spécification et modélisation des systèmes embarqués", of the book edited by A. A. Jerraya and G. Nicolescu, La spécification et la validation des systèmes hétérogènes embarqués (publisher: Hermes, Collection "Techniques de l'ingénieur").
- The CoFluent Studio message abstraction level is high enough to enable high-speed hardware/software co-simulation (estimated at 1,000x faster than RTL simulation), used in an interactive development process for verifying very complex systems.
- Models described at the message level are precise enough to represent accurate real-time system behavior. Their simulation delivers accurate performance data (with an estimated error margin of around 5%). Performance analysis is based on the configuration of simple macroscopic performance parameters attached to the system’s architectural components. These parameters describe the properties of a component from a performance point of view and vary depending on the type of component. Setting performance parameters requires no separate editing or simulation; developers get high-level simulation results for early and easy co-verification, avoiding any paradigm shift from modeling to verification.
- Performance parameters are generic enough to be observed on the development host platform using a native simulation technology such as C++ or SystemC. Performance analysis does not require dedicated target hardware, as lower-level models do.
- Message-level simulation delivers high-level results that can be analyzed and understood, with levels of complexity and quantity of data that a human being can handle. RTL simulation technologies are closer to real-world execution than higher-level models, but their simulation cycles are extremely long, and significant results can be difficult to obtain and exploit when considering system-level concerns.
- Furthermore, CoFluent Studio’s flexible architecture modeling capabilities and separated functional and executive structure models enable designers to easily explore multiple architectural choices.
- Last, the description of the system’s behavior is detailed enough to generate code automatically for prototyping or implementation/synthesis purposes. C code for a real-time kernel and synthesizable VHDL code are generated for software implementation and hardware simulation/synthesis, respectively.
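As a rough intuition for how a timed message-level simulation can yield macroscopic performance measures, here is a toy single-bus model. It is a sketch under stated assumptions (FIFO arbitration, constant bandwidth, hypothetical traffic), not CoFluent Studio’s simulation engine:

```python
def simulate_bus(messages, bandwidth):
    """Serve timestamped messages on one shared bus, in FIFO order.
    messages: list of (arrival_time_s, size_bytes) tuples.
    bandwidth: bus bandwidth in bytes per second.
    Returns (per-message latencies in seconds, bus utilization rate)."""
    clock, busy, latencies = 0.0, 0.0, []
    for arrival, size in sorted(messages):
        start = max(clock, arrival)   # wait if the bus is still occupied
        duration = size / bandwidth   # time the message occupies the bus
        clock = start + duration
        busy += duration
        latencies.append(clock - arrival)
    return latencies, busy / clock

# Hypothetical traffic on a 100 MB/s bus: the second message arrives
# while the first is still being transferred, so its latency grows.
latencies, utilization = simulate_bus(
    [(0.0, 1e6), (0.005, 1e6), (0.030, 2e6)], 100e6)
```

Even this toy model produces exactly the kind of macroscopic figures discussed above: per-message latencies and a bus utilization rate that can be checked against an acceptance level.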
The following figure gives an overview of the CoFluent Studio SDE (System Design Environment) and how it fits into the complete system design process.
For more information, please consult www.cofluentdesign.com
Ever-increasing complexity and integration in electronic devices make system architecting increasingly important and difficult. Being capable of analyzing and setting the performance properties of a system is key to mastering its architectural design.
System design can be divided into 2 complementary processes: top-down application design and bottom-up platform design. The 2 processes meet when the application is ready to be deployed on a platform and the platform is ready to host an application. This meet-in-the-middle point is where performance analysis and architectural exploration take place. It is defined by the level of abstraction used to describe the application and platform models.
During application design, prospective performance is analyzed for system specification purposes. During platform design, confirmative performance is analyzed for system verification purposes.
During application design, system architecting aims at defining the system’s platform, onto which system functions are mapped. It encompasses system dimensioning and partitioning and can be achieved through prospective performance analysis and architectural exploration.
Architectural exploration and prospective performance analysis require:
- Separated functional and architectural representations of a system
- Short simulation cycles
- Interactive architectural exploration process and tools
- Fully timed behavioral simulation delivering detailed enough results including performance measures
CoFluent Studio allows designers to describe the functional and architectural models at the message abstraction level enabling:
- Separated functional and architectural modeling
- Interactive exploration cycles thanks to native host-based simulation technology (C++ or SystemC) delivering high simulation speed
- Accurate real-time system behavioral description and simulation
- Good-enough macroscopic performance analysis (estimated 5% error margin)
- Results produced in complexity and quantity levels that can be handled by a human
- No paradigm shift from modeling to verification
- Automatic code generation for implementation and synthesis purposes