As we discussed in our white paper "Visualizing SSD Readiness," many applications today can deliver enough IO requests to the storage system to keep drive groups with very large spindle counts busy. The larger the number of mechanical spindles in an array, the better it typically performs. The problem with a higher drive count is, well, a higher drive count. All of those drives need to be ordered, shelved, racked, managed and powered, and all of that costs money. Your application may be able to sustain 300 drives, but can your budget?
There is also a capacity utilization issue. While you may have an application that can keep 300 mechanical drives busy, at today's drive capacities that can leave terabytes of unused space; for example, 300 drives at 1 TB each deliver roughly 300 TB of raw capacity even if the application's data set is only a fraction of that. It's true that with a SAN you can put other applications on those spindles as well, but doing so could affect the performance of the key application, and those other applications more than likely have no need for the high-performance storage you had to buy for the original application.
What makes more sense is targeting the capacity where it's needed, an almost application-specific storage tier. In block IO this is typically done by adding SSD to an existing storage system, or by taking an even more targeted approach and using SSD-specific systems from companies like Texas Memory or Violin Memory. In fact, Fusion-io and Texas Memory can take this application specificity even further by leveraging one of their PCIe-based cards and putting the IOPS right in the application server itself.
One of the challenges is deciding how and when to get the right data sets on SSD. Often the target is a specific application or data set, like a hot Oracle table, and moving that table to SSD is a straightforward process (a minimal sketch follows this paragraph). For broader use of SSD, especially in NAS storage, the challenge becomes greater. One solution is to use a storage virtualization appliance from companies like NetApp or DataCore to unify the management of the SSD.
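To show how straightforward the hot-table case can be, here is a minimal Python sketch of that idea. It is an illustration under assumptions, not a prescription: the table name, the SSD-backed tablespace (SSD_TS), and the connection details are all hypothetical, and it assumes the schema owner has quota on that tablespace.

```python
# Hypothetical sketch: relocate a hot Oracle table onto an SSD-backed tablespace.
# Table name, tablespace name, and connection details below are illustrative only.
import cx_Oracle

HOT_TABLE = "ORDERS"        # assumed hot table, identified from IO statistics
SSD_TABLESPACE = "SSD_TS"   # assumed tablespace created on SSD storage

conn = cx_Oracle.connect("app_user", "app_password", "db-host/ORCLPDB1")
cur = conn.cursor()

# Move the table's segments to the SSD-backed tablespace.
cur.execute(f"ALTER TABLE {HOT_TABLE} MOVE TABLESPACE {SSD_TABLESPACE}")

# A MOVE leaves the table's indexes unusable, so rebuild them (here, also on SSD).
cur.execute("SELECT index_name FROM user_indexes WHERE table_name = :t", t=HOT_TABLE)
for (index_name,) in cur.fetchall():
    cur.execute(f"ALTER INDEX {index_name} REBUILD TABLESPACE {SSD_TABLESPACE}")

conn.close()
```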
In NAS IO, companies like Storspeed are beginning to present a viable alternative. These companies are creating an evolved form of cache technology to intelligently move data in and out of tiered storage. This allows SSD to be used to its maximum capacity while always keeping the most active data set on that high-speed tier, and it gives the user specific control over which applications or data leverage that tier.
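To make the idea concrete, here is a small, hypothetical Python sketch of such a heat-based policy. It is not Storspeed's algorithm; it simply illustrates promoting the most active files to a limited SSD tier while honoring administrator-defined pin rules, with all names and capacities made up for the example.

```python
# Illustrative sketch of cache-based tiering: track per-file access "heat",
# keep the hottest files on a small SSD tier, and let an administrator pin
# specific paths (applications or data sets) onto that tier.
from collections import defaultdict

class TieredCache:
    def __init__(self, ssd_capacity_files, pinned_prefixes=()):
        self.ssd_capacity = ssd_capacity_files        # how many files fit on the SSD tier
        self.pinned_prefixes = tuple(pinned_prefixes) # e.g. ("/exports/oltp/",)
        self.heat = defaultdict(int)                  # access count per file
        self.on_ssd = set()                           # files currently promoted to SSD

    def _pinned(self, path):
        # Administrator-defined rule: these paths always rank first for the SSD tier.
        return path.startswith(self.pinned_prefixes)

    def record_access(self, path):
        self.heat[path] += 1
        self._rebalance()

    def _rebalance(self):
        # Rank pinned files first, then everything else by heat, and keep only as
        # many files on SSD as the tier can hold; the rest stay on mechanical disk.
        ranked = sorted(self.heat,
                        key=lambda p: (self._pinned(p), self.heat[p]),
                        reverse=True)
        self.on_ssd = set(ranked[: self.ssd_capacity])

# Example: a two-file SSD tier with the OLTP export pinned to it.
cache = TieredCache(ssd_capacity_files=2, pinned_prefixes=("/exports/oltp/",))
for path in ["/exports/oltp/db1", "/exports/home/a", "/exports/home/a", "/exports/home/b"]:
    cache.record_access(path)
print(cache.on_ssd)  # the pinned OLTP file plus the hottest remaining file
```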
Regardless of the approach, leveraging SSD, especially with its increasing affordability, is an excellent way to reduce capital expenditures, and one that can have an immediate payoff.
George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.