As these capabilities mature and become better understood, more will be asked of them. In the past, if you used snapshots, for example, it was enough to keep one or two and use them only to recover a corrupted volume. Now it is not uncommon for snapshots of a volume to number in the high double digits, and for you to expect not only to recover from a snapshot but also to mount it read/write so you can leverage it for testing. All of this consumes storage compute resources.
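The mechanics behind writable snapshots can be sketched in a few lines. This is a simplified, illustrative model only (the class and method names are hypothetical, not any real array's API): snapshots share unmodified blocks with the source volume, and writes to a mounted clone diverge without touching production data.

```python
# Illustrative copy-on-write snapshot model; names are hypothetical,
# not drawn from any real storage system's API.

class Volume:
    """A block volume whose snapshots share unmodified blocks."""

    def __init__(self, blocks=None):
        self.blocks = dict(blocks or {})   # block number -> data

    def write(self, block, data):
        self.blocks[block] = data

    def read(self, block):
        return self.blocks.get(block)

    def snapshot(self):
        # A snapshot starts as a shallow copy of the block map; blocks are
        # shared until either side writes. Tracking these divergences per
        # snapshot is part of what consumes compute on the array.
        return Volume(self.blocks)

# Production volume
vol = Volume()
vol.write(0, "orig")

# Writable clone mounted for testing: its writes diverge from production
clone = vol.snapshot()
clone.write(0, "test-data")

assert vol.read(0) == "orig"         # production volume unaffected
assert clone.read(0) == "test-data"  # clone is fully read/write
```

Multiply this bookkeeping by dozens of snapshots per volume and the compute cost the article describes becomes clear.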
Another example that is maturing right in front of us is thin provisioning. As we outline in Thin Provisioning Basics, thin provisioning is the capability to provision storage in advance but consume disk space only when it is physically needed. With increased use, however, we are learning that this is not enough. Thin provisioning systems have to become more aware of the data they store: they need to differentiate between actual data and data that has been marked for deletion. Doing so allows fat volumes to be migrated into a thin provisioned system and allows thin volumes to stay thin as they are used and data is deleted from them. Again, calculating and managing all of this consumes storage compute resources.
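A minimal sketch of that deletion awareness, under stated assumptions: the `ThinVolume` class below is hypothetical, and real systems do this at the block layer via host hints such as SCSI UNMAP or ATA TRIM, or by detecting zeroed blocks. The point is that the volume advertises a large logical size but only stores blocks that hold live data, and it reclaims blocks the host marks as deleted.

```python
# Hedged sketch of thin provisioning with deletion awareness.
# All names are illustrative; real arrays operate on block metadata,
# not Python dictionaries.

class ThinVolume:
    """Provisions a large logical size; allocates physical blocks on write."""

    def __init__(self, logical_blocks):
        self.logical_blocks = logical_blocks
        self.allocated = {}              # logical block -> data

    def write(self, block, data):
        if block >= self.logical_blocks:
            raise IndexError("write beyond provisioned size")
        # An all-zero write is treated as "marked for deletion":
        # reclaim the block instead of storing it (zero-detection).
        if data == b"\x00" * len(data):
            self.allocated.pop(block, None)
        else:
            self.allocated[block] = data

    def unmap(self, block):
        # Explicit deletion hint from the host (TRIM/UNMAP analogue).
        self.allocated.pop(block, None)

    def physical_usage(self):
        return len(self.allocated)

vol = ThinVolume(logical_blocks=1_000_000)   # large logical size up front
vol.write(0, b"data")
vol.write(1, b"more")
assert vol.physical_usage() == 2             # only written blocks consume space

vol.unmap(1)                                 # host deletes data, sends a hint
vol.write(0, b"\x00" * 4)                    # zeroed block is reclaimed
assert vol.physical_usage() == 0             # volume stays thin after deletes
```

The per-write checks and reclamation accounting in this toy model stand in for the real work the article refers to: keeping a volume thin over its lifetime is an ongoing computation, not a one-time provisioning decision.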
The combination of large virtual infrastructures with these advanced capabilities means that more is being asked of the storage infrastructure. Now more than ever, you may hit the upper edge of a storage system's performance long before you finish populating it with drives. Certainly the Intel wave may catch up with the performance demand, but realize that these capabilities are just the beginning. Next on the plate are embedded data migration between storage tiers, deduplication, compression and, of course, maintaining performance across more than a few SSDs inserted into the storage system. In addition, there are probably a host of other capabilities of which we are not yet aware.
To maintain performance in these large-scale environments, there is going to be increased demand for clustered storage and specialized hardware. These data services are going to require a scale-out architecture, potentially with specialized engines for off-loading some of these tasks. Doing so will allow storage systems to maintain maximum performance while delivering full capacity.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.