Having things burst in the data center does not seem like a very good idea, but the term really refers to allowing components of the data center to expand on the fly when there is a peak load and then contract when it has passed. The value of bursting is that it allows you to design infrastructures for the norm rather than the worst case, saving capital.

George Crump, President, Storage Switzerland

August 13, 2010

3 Min Read

The problem is that when you design any component within a data center to handle a peak load, you end up wasting resources and capital, since that extra capacity sits idle most of the time. The other problem with designing for the peak is that it requires a prediction of the future that simply may not be realistic. Especially with server virtualization, it is very hard to know what the peak needs of an application or server will be until it is fully in production, and even then it is hard to know how those needs will change. The conventional path around this is to build the infrastructure well beyond whatever the peak load may be, wasting even more resources and capital.

Bursting solves these problems by keeping a few spare resources available, either within the data center or externally, that can be tapped when a peak load occurs. We already see the basic underpinnings of this as server virtualization is deployed. Virtual machines can be migrated, either manually or automatically, as load increases on specific physical servers. There are also solutions that can power up additional physical servers as needed, saving power until the spare CPU resources are required.
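The expand-then-contract logic described above can be sketched in a few lines. This is a hypothetical simulation, not any vendor's API: the threshold values and the `plan_capacity` function are illustrative assumptions, and a real deployment would rely on hypervisor tooling (such as automated VM placement) rather than code like this.

```python
# Hypothetical sketch of threshold-based bursting: expand when average
# load peaks, contract when the peak has passed. Thresholds are invented
# for illustration.

BURST_THRESHOLD = 0.80     # above 80% average load, bring a spare host online
CONTRACT_THRESHOLD = 0.40  # below 40%, drain and power down a host

def plan_capacity(host_loads, spare_hosts_available):
    """Return +1 to power on a spare host, -1 to power one down, 0 to hold."""
    avg = sum(host_loads) / len(host_loads)
    if avg > BURST_THRESHOLD and spare_hosts_available > 0:
        return 1    # burst: power up a spare and migrate VMs onto it
    if avg < CONTRACT_THRESHOLD:
        return -1   # contract: consolidate VMs and power a host down
    return 0        # steady state: capacity designed for the norm suffices

print(plan_capacity([0.90, 0.85, 0.95], spare_hosts_available=2))  # 1 (burst)
print(plan_capacity([0.20, 0.30, 0.25], spare_hosts_available=2))  # -1 (contract)
```

The point of the sketch is that the decision is cheap; the hard part, which the prose above describes, is having the spare resource ready to power up when the decision says to burst.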

Migration can also be external as we describe in our article "Cloud Bursting With Distance VMotion". An infrastructure can be designed where virtual machines can be live migrated to entirely different data centers when peak resource demands are reached. In fact this does not even need to be within your data center. This can be a live migration to a cloud compute provider that has a virtualized environment ready for you to migrate to.

Internally, technologies like I/O virtualization (IOV) can be leveraged to provide virtual access to additional I/O capacity when a server becomes burdened with I/O demands. As we describe in our article "What is I/O Virtualization?", IOV typically uses a gateway device to provide servers with shared access to multiple I/O cards. One of those cards can essentially be a spare that is available when a server needs additional I/O capability. One spare card is economically more attractive than placing an extra network card and storage HBA in every server.
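The shared-spare economics described above can be illustrated with a small sketch. The `IOVGateway` class, its method names, and the card identifiers are all hypothetical; an actual IOV gateway does this in hardware and firmware, not in application code.

```python
# Hypothetical sketch of an IOV gateway lending one shared spare I/O card
# to whichever server hits an I/O peak, instead of every server owning
# an extra card that sits idle.

class IOVGateway:
    def __init__(self, spare_cards):
        self.spares = list(spare_cards)  # pool of unassigned cards
        self.assigned = {}               # server name -> card

    def request_extra_io(self, server):
        """Assign a spare card to a burdened server, if one is free."""
        if server in self.assigned:
            return self.assigned[server]
        if not self.spares:
            return None                  # pool exhausted; no burst capacity
        card = self.spares.pop()
        self.assigned[server] = card
        return card

    def release(self, server):
        """Peak has passed: return the card to the shared pool."""
        card = self.assigned.pop(server, None)
        if card is not None:
            self.spares.append(card)

gw = IOVGateway(["hba-spare-0"])
print(gw.request_extra_io("esx-01"))  # hba-spare-0
print(gw.request_extra_io("esx-02"))  # None - the single spare is in use
gw.release("esx-01")                  # contract: card goes back in the pool
```

The design choice the sketch highlights is the one the article makes: a single pooled spare serves many servers because peaks rarely coincide, which is exactly why it beats one extra card per server on cost.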

Most of the progress on bursting has come from either leveraging server virtualization or extending the intelligence in the infrastructure. What is lacking is some sort of bursting capability for storage. Certainly data can be offloaded to a cloud storage service, but what if you have a temporary need for high-speed primary storage? A few cloud storage providers are placing metro-accessible storage pods that will provide this capability, but for the most part you are limited to owning the storage yourself. IOV can provide part of the solution by allowing a solid state or high-speed mechanical storage device to be quickly reassigned to a new host.

Bursting is good news for the data center. The advantage of designing for the norm is that the norm is a known. There is no guesswork; the I/O profile is exactly what is happening at that moment in the data center. Designing for the norm not only allows you to be accurate, it also allows you to save capital. Bursting lets you design for the norm without the risk of peak loads crippling your applications.

Track us on Twitter: http://twitter.com/storageswiss


George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.

About the Author(s)

George Crump

President, Storage Switzerland

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.

