How Single Root I/O Virtualization (SR-IOV) Can Help Improve I/O Performance and Reduce Costs
By Philippe Legros, Product Design Architect, PLDA
Introduction
While server virtualization is being widely deployed to reduce costs and optimize data center resource usage, another key area where virtualization can shine is I/O performance and its role in enabling more efficient application execution. The advent of the Single Root I/O Virtualization (SR-IOV) specification from the PCI-SIG makes it easier to implement virtualization within the PCI Express bus itself. SR-IOV adds definitions to the PCI Express® (PCIe®) specification that enable multiple Virtual Machines (VMs) to share PCI hardware resources.
Using virtualization provides several important benefits to system designers:
- It makes it possible to run a large number of virtual machines per server, which reduces the need for hardware and the resultant costs of space and power required by hardware devices
- It creates the ability to start or stop and add or remove servers independently, increasing flexibility and scalability
- It adds the capability to run different operating systems on the same host machine, again reducing the need for discrete hardware
In this paper, we will explore why building systems natively on SR-IOV-enabled hardware may be the most cost-effective way to improve I/O performance, and how to easily implement SR-IOV in PCIe devices.
Traditional Virtualized System Overview
A virtualized system (Figure 1) is a discrete system which contains:
- Several virtual machines (VMs)
- A Supervisor, also referred to as the Virtual Machine Manager (VMM)
- A PCI Express hierarchy
Within this virtualized system, the Supervisor plays a crucial role: it provides the interface between the hardware and the virtual machines, is responsible for security, and ensures that virtual machines cannot interact with one another.
Any system can be virtualized without specific SR-IOV technology. In a more traditional virtualization scenario, the Supervisor must emulate virtual devices and perform resource sharing on their behalf by instantiating a virtual Ethernet controller for each virtual machine (Figure 2). This creates an I/O bottleneck and often results in poor performance. In addition, it creates a tradeoff between the number of virtual machines a physical server can realistically support and the system’s I/O performance. Adding more VMs can aggravate the bottleneck.
Providing a Better Way – The Benefits of SR-IOV Hardware Implementation
Designing systems with hardware that incorporates SR-IOV support allows virtual devices to be implemented in hardware and enables resource sharing to be handled by a PCI Express® device such as an Ethernet controller (Figure 3).
The benefit of SR-IOV over more traditional network virtualization is that the VM talks directly to the network adapter through Direct Memory Access (DMA). Using DMA allows the VM to bypass virtualization transports such as the VM Bus and eliminates processing in the management partition. Because no software switch is involved, the system attains the best possible performance, close to “bare-metal”.
Implementing SR-IOV in an Adapter Card
The SR-IOV specification enables a hardware vendor to design a PCIe device that presents itself to the VMM as multiple independent devices. To achieve this, the SR-IOV architecture distinguishes two types of functions (Figure 4):
Physical Functions (PFs):
Physical Functions (PFs) are full-featured PCIe functions; they are discovered, managed, and manipulated like any other PCIe device, and they have a full configuration space. The PCIe device can be configured or controlled via the PF, and the PF has the complete ability to move data in and out of the device. Each PCI Express device can have from one (1) to eight (8) PFs. Each PF is independent and is seen by software as a separate PCI Express device, which allows several devices to coexist in the same chip and makes software development easier and less costly.
Virtual Functions (VFs):
Virtual Functions (VFs) are ‘lightweight’ PCIe functions designed solely to move data in and out. Each VF is attached to an underlying PF, and each PF can have zero (0) or more VFs. In addition, PFs within the same device can each have a different number of VFs. While VFs are similar to PFs, they intentionally have a reduced configuration space because they inherit most of their settings from their PF.
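This inheritance relationship between PFs and VFs can be sketched in a few lines of Python. The class and field names below are illustrative only, not a real driver API:

```python
# Minimal sketch of the PF/VF relationship: a VF keeps only a few local
# fields and inherits everything else from its parent PF.
class PhysicalFunction:
    def __init__(self, number, config):
        self.number = number
        self.config = dict(config)  # full configuration space
        self.vfs = []               # each PF may have a different VF count

    def add_vf(self, **local_overrides):
        self.vfs.append(VirtualFunction(self, local_overrides))

class VirtualFunction:
    def __init__(self, pf, local):
        self.pf = pf
        self.local = local          # e.g. enable bits, MSI/MSI-X settings

    def read(self, field):
        # Reduced configuration space: VF-local fields first, else inherited.
        return self.local.get(field, self.pf.config.get(field))

pf = PhysicalFunction(0, {"vendor_id": 0x1234, "max_payload": 256})
pf.add_vf(msi_enable=1)
print(hex(pf.vfs[0].read("vendor_id")))  # inherited from the PF → 0x1234
print(pf.vfs[0].read("msi_enable"))      # VF-local → 1
```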
In order to effectively implement SR-IOV, it is necessary to access the VF configuration space.
Traditional Routing ID interpretation allocates only 3 bits to the function number, limiting a device to 8 functions, and the device number of a PCIe endpoint is always “0”. Alternative Routing-ID Interpretation (ARI) therefore reuses the standard device-number field as part of an 8-bit function number. This change allows up to 256 functions. (Figure 5)
In addition, SR-IOV extends this further by allowing the use of several consecutive bus numbers for a single device, enabling more than 256 functions.
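The two Routing ID interpretations amount to different splits of the same 16 bits. A minimal Python sketch of the field layout, with arbitrary example IDs:

```python
def decode_rid(rid: int, ari: bool = False) -> dict:
    """Split a 16-bit PCIe Routing ID into its fields.

    Traditional interpretation: Bus[15:8], Device[7:3], Function[2:0]
    ARI interpretation:         Bus[15:8], Function[7:0]
    """
    bus = (rid >> 8) & 0xFF
    if ari:
        # ARI merges the Device and Function fields into an 8-bit
        # function number, allowing 256 functions per bus number.
        return {"bus": bus, "function": rid & 0xFF}
    return {"bus": bus, "device": (rid >> 3) & 0x1F, "function": rid & 0x07}

# Traditional routing: a 3-bit function field, device fixed at 0.
print(decode_rid(0x0105))            # bus 1, device 0, function 5
# ARI routing: the same 16 bits now address function 0x85 on bus 1.
print(decode_rid(0x0185, ari=True))  # bus 1, function 0x85
```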
VFs are lightweight functions whose configuration space differs significantly from a PF’s. The main differences are as follows:
- Most registers are hardwired; they are set to “0”, “1”, or take the same value as their PF
- No Base Address Register is implemented
- Only a few RW or RWC registers are implemented in each VF:
  - A few PCI/PCI Express “enable” and “status” bits
  - MSI/MSI-X registers
  - Optionally, some capability registers such as AER may be enabled
In order to access the VF memory spaces, up to 6 VF BARs are implemented in the PF SR-IOV capability. They are similar to normal BAR registers except that their settings apply to all VFs.
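Per the SR-IOV specification, software locates each VF from two capability fields, First VF Offset and VF Stride, and each VF BAR aperture is divided into equal, consecutive VF-sized slices. A Python sketch of the arithmetic; the device values below are hypothetical:

```python
def vf_routing_id(pf_rid: int, first_vf_offset: int, vf_stride: int, n: int) -> int:
    """Routing ID of VF n (1-based), derived from the SR-IOV capability
    fields First VF Offset and VF Stride (arithmetic is modulo 2**16)."""
    return (pf_rid + first_vf_offset + (n - 1) * vf_stride) & 0xFFFF

def vf_bar_address(vf_bar_base: int, bar_size: int, n: int) -> int:
    """Address of VF n's slice of a VF BAR: the PF's VF BARx describes one
    aperture carved into equal, consecutive VF-sized slices."""
    return vf_bar_base + (n - 1) * bar_size

# Hypothetical device: PF at bus 1, device 0, function 0 (RID 0x0100),
# First VF Offset = 1, VF Stride = 1, 16 KB of VF BAR0 space per VF.
print(hex(vf_routing_id(0x0100, 1, 1, 1)))          # first VF  → 0x101
print(hex(vf_routing_id(0x0100, 1, 1, 64)))         # 64th VF   → 0x140
print(hex(vf_bar_address(0xE000_0000, 0x4000, 3)))  # third VF's BAR0 slice
```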
During initial SR-IOV set-up and initialization, VFs are not enabled and are invisible. The Supervisor then detects the device and configures the PFs. If the host system and device driver detect the SR-IOV capability, they will:
- Configure the number of VFs
- Assign addresses to VF BARs
- Enable VFs
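On a Linux host, this configure-and-enable step is exposed through the sysfs files sriov_totalvfs and sriov_numvfs. The sketch below assumes that interface, and the PCI address shown is hypothetical:

```python
from pathlib import Path

def enable_vfs(device_dir: str, requested: int) -> int:
    """Enable SR-IOV VFs via the Linux sysfs interface, clamping the
    request to what the device advertises. Returns the count written."""
    dev = Path(device_dir)
    total = int((dev / "sriov_totalvfs").read_text())
    num = min(requested, total)
    # The kernel requires writing 0 before changing a non-zero VF count.
    (dev / "sriov_numvfs").write_text("0")
    (dev / "sriov_numvfs").write_text(str(num))
    return num

# Hypothetical PCI address; a real device path looks like:
# enable_vfs("/sys/bus/pci/devices/0000:03:00.0", 64)
```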
Once set up, each VM can be assigned a virtual device and can access it directly via its VF driver. There must be at least one VF per VM; otherwise, the Supervisor will need to perform some or all of the sharing management, reducing the benefits of SR-IOV. The current market trend is toward 64 VMs per server, so an SR-IOV-capable device should support 64 VFs.
Choosing the Right Partners for SR-IOV Success
Many PCI card designers are realizing that the PCIe IP they choose is crucial to the success of their SR-IOV implementation. By choosing IP vendors who design for SR-IOV natively, designers ensure that their cards integrate more seamlessly with the host system.
PLDA, a long-time leader in PCIe IP innovation, provides its XpressRICH3® PCIe Gen 3 IP (Figure 6) to many of the leading PCI Express hardware vendors. PLDA’s IP provides native SR-IOV support and delivers industry-leading specifications to fully enable SR-IOV functionality.
PLDA’s XpressRICH3 PCIe 3.0 IP delivers:
- The ability for each Physical Function (PF) to support up to 64 Virtual Functions (VF)
- Native support for up to 8 PFs
- Enablement of up to a total of 512 functions
- VFs share the same configuration access, status and error reporting interface as their PF
- VFs are mapped after PFs, using several bus numbers if necessary
- The application checks tl_rx_bardec to determine which VF is receiving a packet
- Same RTL for ASIC and high-end FPGA applications
- x16, x8, x4, x2, x1 at GEN3 (8Gbps) speed
- Backward compatible to GEN2 (5Gbps) and GEN1 (2.5Gbps)
Because PLDA is the industry leader in PCI Express IP, with over 2,500 designs in working silicon, it also offers an assurance of ease of integration and first-time-right functionality. In addition, PLDA offers a free evaluation to enable a hands-on trial of its IP before purchase, and provides a comprehensive SR-IOV Reference Design enabling quick implementation and reducing time-to-market. To schedule a demo of the XpressRICH3 IP running SR-IOV, in which the IP performs both “Read DMA” and “Write DMA”, visit PLDA at www.plda.com.
Summary
The key benefits of using SR-IOV to achieve virtualization include:
- Enabling efficient sharing of PCIe devices, optimizing performance and capacity
- Creating hundreds of VFs associated with a single PF, extending the capacity of a device and lowering hardware costs
- Dynamic control by the PF through registers designed to turn on the SR-IOV capability, eliminating the need for time-intensive integration
- Increased performance via direct access to hardware from the virtual machine environment
PLDA’s XpressRICH3 IP for PCIe fully supports SR-IOV functionality and is an easy and cost-effective way to integrate SR-IOV support into PCI devices. For more information, please visit www.plda.com.
About the Author:
Philippe Legros is a PCI Express Product Design Architect at PLDA; he has more than 12 years’ experience in PCI, PCI-X, and PCI Express bus protocols and related ecosystems. Prior to joining PLDA, Philippe worked as an ASIC design engineer in digital video broadcasting. Philippe holds an MSc from Middlesex University, London, UK.
About PLDA:
PLDA designs and sells intellectual property (IP) cores and prototyping tools for ASIC and FPGA that accelerate time-to-market for embedded electronic designers. They specialize in high-speed interface protocols and technologies such as PCIe and Ethernet. Visit them online at www.plda.com.