Based on the recent news that Intel has announced an 80-GB solid state disk for less than $600, the end of the mechanical drive may arrive within the next five years. Think about it: a 15k 300-GB Fibre Channel drive costs a little more than $400. While the 80-GB SSD from Intel isn't what I would call enterprise class, and there is certainly a gap in capacity, it is more than reasonable to think that the gap between solid state and mechanical drives will close relatively fast. In Tier One storage, the gap doesn't need to close completely -- if you subscribe to the conventional wisdom that 80% of your data is inactive, you only need enough SSD capacity to hold the 20% of your data that is most active.

When viewed from a watts-per-performance perspective, SSD is also greener. With mechanical drives you often have to buy additional drives just to reach the performance you need; that isn't the case with SSD.

There is some maturing that needs to happen. SSD technology is 30X or more faster than the current state of the art in mechanical drive performance. This means that current storage drive shelves and controllers need to be optimized, if not totally redesigned, for this zero-latency environment. The current practice of storage manufacturers plugging SSD modules into their existing drive shelves is a short-term workaround to get SSD to the masses. Eventually these manufacturers will need to follow the model of companies like Texas Memory Systems, Solid Data, and Violin Memory, which have built systems from the ground up to be optimized for the zero-latency environment.

Second, there needs to be intelligence at the storage controller level to move data in and out of the SSD area. Right now, SSD is expensive enough that most customers know exactly which files or components of a database they want to put on SSD. As SSD becomes less expensive and its capacity grows, its use will broaden, and the need to automate the movement of data in and out of the SSD tier will become more critical.
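At its simplest, the kind of controller-level intelligence described above is an access-frequency policy: keep the hottest blocks on the SSD tier, up to a fixed capacity budget, and leave the rest on disk. Here is a minimal sketch in Python; the block IDs, access counts, and the 20% budget are hypothetical illustrations, not any vendor's actual controller algorithm.

```python
# Hypothetical sketch of a frequency-based tiering policy -- not any
# vendor's actual controller logic. Blocks with the highest access
# counts are placed on the SSD tier, up to a fixed budget.

def place_blocks(access_counts, ssd_budget):
    """access_counts: {block_id: accesses}; ssd_budget: max blocks on SSD.
    Returns (ssd_tier, disk_tier) as sets of block IDs."""
    # Hottest blocks first; ties broken by block ID for determinism.
    ranked = sorted(access_counts, key=lambda b: (-access_counts[b], b))
    return set(ranked[:ssd_budget]), set(ranked[ssd_budget:])

# If 80% of data is inactive, budget the SSD tier at roughly 20% of blocks.
counts = {"b0": 900, "b1": 850, "b2": 12, "b3": 7, "b4": 3,
          "b5": 2, "b6": 1, "b7": 1, "b8": 0, "b9": 0}
budget = int(len(counts) * 0.20)      # 2 of 10 blocks
ssd, disk = place_blocks(counts, budget)
print(sorted(ssd))                    # the two hottest blocks: ['b0', 'b1']
```

A real controller would rank extents rather than whole files, decay the counters over time, and rate-limit migrations, but the budget-plus-ranking shape is the core of the idea.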
This will continue at least until SSD becomes so inexpensive that all your Tier One storage is SSD in some form. Even when we get to that point, maybe in the next five years, there will need to be some intelligence to move this data to Tier 3 archive storage. This move will likely not be controller driven; it will be done either by a global file system or by a specific but simple software data mover.

From a timeline perspective, I would expect SSD to remain application or even file specific for the next 18 months, although the number of applications that utilize it will grow. I don't expect to see the wild growth that some research firms have predicted. Then, in the next two to four years, I would expect to see broader application of SSD across ever-growing chunks of Tier One storage, with some sort of automated data movement in and out of the SSD areas. Finally, within the next five years I would expect most data centers to begin moving toward a two-tier strategy whose tiers are polar opposites of each other, SSD and archive, with nothing in between.

Don't think that once we get everything in Tier One over to SSD your performance problems will be solved. Initially, a lot of time will be spent addressing latency issues that SSD exposes. For example, who thought we would be complaining that drive shelves aren't fast enough? Once the SSD-exposed latency issues are resolved, there will be complaints that SSD itself is not fast enough, and then we will have a whole new tiering system for SSD drives.
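The "specific but simple software data mover" mentioned above can be as little as an age-based sweep: anything on the SSD tier that hasn't been touched in N days is a candidate for the archive tier. The sketch below illustrates that under stated assumptions; the file paths, last-access records, and the 90-day cutoff are all hypothetical, chosen only to show the shape of the policy.

```python
import time

# Hypothetical sketch of a simple software data mover: files idle
# longer than a cutoff are demoted from the SSD tier to archive.

DAY = 86400  # seconds per day

def select_for_archive(files, now, max_idle_days=90):
    """files: {path: last_access_epoch_seconds}.
    Returns the paths whose idle time exceeds the cutoff."""
    cutoff = max_idle_days * DAY
    return [path for path, last in files.items() if now - last > cutoff]

now = time.time()
files = {
    "/tier1/db/orders.dbf":  now - 2 * DAY,    # hot, stays on SSD
    "/tier1/logs/2007.tar":  now - 400 * DAY,  # stale, archive it
    "/tier1/tmp/report.pdf": now - 120 * DAY,  # stale, archive it
}
print(sorted(select_for_archive(files, now)))
# → ['/tier1/logs/2007.tar', '/tier1/tmp/report.pdf']
```

A global file system would make the same decision from its own metadata instead of a scan, but either way the policy reduces to comparing idle time against a threshold.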
Track us on Twitter: http://twitter.com/storageswiss.
George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.