Graeme Caldwell, InterWorx
Data Center Knowledge (December 6th, 2013)
If, at any point in the last four decades, you’d asked anyone with even a modicum of tech knowledge to describe a server, they’d have given you the same basic response. They would have described a box containing discrete components, including a processor module, memory, various controllers, and the buses that connect them. That’s been the model on which servers have been built for decades, and it’s a model that has shaped the way data centers are built.
It’s not a model that is infinitely scalable. We live in a data-centric world. The quantity of data the worldwide data infrastructure has to process, store, and transmit is growing rapidly. Faster processing and higher bandwidth have given the world a glimpse of the potential that “big data” has to change our lives. Everything from social media and search engines to disaster planning and the nascent “Internet of Things” will continue to push at the limitations of our available infrastructure, creating an outward pressure that incentivizes the building of ever more and ever larger data centers.
However, the x86 architecture has inherent limitations, and so do the servers built around it and the data centers constructed to house and support those servers. Those limitations create an inward pressure that incentivizes a radical change in the way we think about building servers and architecting data centers. Data is currently too expensive to manage, both in terms of infrastructure investment and power consumption. In addition to expanding the number of data centers, we also need to focus on making those data centers as efficient as possible.