Ron DiGiuseppe, Synopsys
EDN (November 20, 2013)
Global Internet traffic is forecasted to grow threefold from 2012 to 2017 according to Cisco’s 2013 Visual Networking Index (VNI)1. This growth, mainly driven by residential, business, and mobile users, is spurring significant technology innovations for modern data centers.
The move toward cloud computing, which uses thin-client devices to reliably deliver services for applications such as Pandora, Twitter, Facebook, and Google, is creating new service business models that will cause cloud IP traffic to grow by 35 percent through 20172. These business models are enabled by a $100 million cloud services ecosystem that includes software-as-a-service (SaaS), platform-as-a-service (PaaS), infrastructure-as-a-service (IaaS), and others.
With the addition of the latest Internet of Things applications such as smart appliances, industrial automation, connected cars, and consumer wearable devices, the number of networked devices is expected to grow to 19 billion in 2017, according to the Cisco VNI forecasts. These industry trends put significant demands on large cloud data centers to improve efficiency and reduce complexity, space, cost, and power. To address these requirements, cloud and mega data center operators are re-architecting data center networking and compute architectures in two ways. The first is to simplify the data center network through software defined networking (SDN). The second is to lower power through the use of micro servers. These new technologies require architects to reconsider IP design criteria when developing the next generation of SoCs for data center applications.
A recent survey of large companies in North America found the average data center power usage effectiveness (PUE) is 2.92. PUE is the ratio of total facility power to IT equipment power, so a PUE of 2.9 means that for every watt dissipated in data center servers, an additional 1.9 watts are dissipated in cooling and power distribution. You can see why data center operators have an incentive to reduce server power in order to significantly minimize their operating expenses.
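The PUE arithmetic above can be sketched in a few lines; the function name and wattage figures here are illustrative, with the 2.9 ratio taken from the survey cited above:

```python
def facility_overhead(it_watts, pue):
    """Watts spent on cooling and power distribution for a given PUE.

    PUE = total facility power / IT equipment power, so the
    non-IT overhead is (PUE - 1) * IT power.
    """
    return (pue - 1.0) * it_watts

# At PUE 2.9, each watt of server power carries ~1.9 W of overhead.
print(round(facility_overhead(1.0, 2.9), 2))
# A hypothetical 10 kW server rack at the same PUE:
print(round(facility_overhead(10_000, 2.9)))
```

The same relation shows why cutting server power pays off nearly threefold at this PUE: every IT watt removed also eliminates its cooling and distribution overhead.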
Another challenge that mega data center operators contend with is the cost and complexity of managing the networks, especially in terms of provisioning the racks and clusters, as well as scaling network capacity. Data center operators need a quick, efficient process to provision network bandwidth based upon their business needs. You can imagine how data center network demand fluctuates during large media events such as the Super Bowl.
Previously, data center operators required over a week to install new line cards and switches to increase bandwidth within a data center cluster. Today, operators use on-demand provisioning, similar to server virtualization, to allocate virtual machines automatically in a matter of minutes. Automating and simplifying the management of the data center network is a major industry trend and a primary benefit of SDN architectures. A software defined network decouples the control and data planes so that network intelligence and state are logically centralized on SDN network controllers, and the underlying network infrastructure is abstracted from the applications managing the network (Figure 1). New protocols such as OpenFlow, which structures communication between the SDN control and data planes, are being standardized by the Open Networking Foundation (ONF) to simplify the management of traffic across multiple vendors' network devices within mega data centers.
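The control/data-plane split described above can be sketched as a match-action flow table: the controller installs forwarding rules, and the switch's data plane matches packet headers against them, punting unmatched packets back to the controller. This is a minimal illustration of the model, not the OpenFlow wire protocol; the class and field names are hypothetical:

```python
class FlowTable:
    """Toy data-plane flow table programmed by a centralized controller."""

    def __init__(self):
        self.rules = []  # (match_fields, action, priority) tuples

    def install(self, match, action, priority=0):
        """Control plane: push a forwarding rule down to the switch."""
        self.rules.append((match, action, priority))
        self.rules.sort(key=lambda r: -r[2])  # highest priority first

    def forward(self, packet):
        """Data plane: apply the first rule whose fields all match."""
        for match, action, _ in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        # Table miss: hand the packet to the controller for a decision,
        # mirroring OpenFlow's table-miss behavior.
        return "send_to_controller"

table = FlowTable()
table.install({"dst_ip": "10.0.0.5"}, "output:port2", priority=10)
print(table.forward({"dst_ip": "10.0.0.5"}))   # matched rule
print(table.forward({"dst_ip": "10.0.0.9"}))   # table miss
```

Because all rules originate from one logically centralized controller, provisioning new bandwidth becomes a software operation of installing rules rather than a week of physically cabling line cards.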
ONF members include major carrier and data center operators such as Facebook, Google, Microsoft, Verizon, and Amazon, as well as system suppliers to those operators such as Cisco, Dell, Fujitsu, and HP. Major semiconductor ASSP suppliers, including Broadcom, Freescale, LSI, Marvell, TI, and Netronome, are also participating in the ONF, and their work is creating a new class of SoCs for data center applications.