Caching may be the simplest way to provide widespread use of solid state disk. It works similarly to the cache that is probably already on your current array, just much larger. Most caching systems use flash memory to hold the most frequently accessed data. Because these caches are so much larger than previous cache designs, far more data can be kept in cache and the chances of a cache hit go up significantly. There are caching solutions designed for block (SAN) storage and others for file (NAS) storage; we have yet to see one appliance handle both, and so far NAS caching seems to be the most popular. Most of these systems bring solid state performance to the environment without any change to applications, which means integration is relatively seamless.
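To make the mechanism concrete, here is a minimal sketch of how a flash read cache in front of slower storage behaves. It is purely illustrative and not any vendor's implementation; the class name, capacity, and the simple least-recently-used eviction policy are our assumptions for the example.

```python
from collections import OrderedDict

class FlashReadCache:
    """Illustrative LRU read cache in front of a slower backing store.

    The cache (a stand-in for flash) holds the most recently used
    blocks; misses fall through to the backing store and populate
    the cache, evicting the least recently used block when full.
    """

    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store   # dict-like: block id -> data
        self.cache = OrderedDict()
        self.hits = 0
        self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)   # mark as recently used
            return self.cache[block_id]
        self.misses += 1
        data = self.backing[block_id]          # slow mechanical-disk read
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data
```

The larger the cache relative to the working set, the higher the hit rate climbs, which is exactly why a flash-sized cache changes the picture compared to the small DRAM caches of earlier arrays.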
Automated tiering attempts to strike a balance between caching and manually moving data to a solid state storage tier. Intelligence is added to the environment that monitors data at a sub-volume level and promotes or demotes it based on its access pattern. Automated tiering brings value, especially when integrating solid state disk into a legacy storage system.
Automated tiering is not perfect. There are the issues with mixing legacy mechanical storage and solid state storage that we discussed in our last entry. Beyond the physical hardware challenges, there is also some concern about the software that performs this inspection. What is the storage processor cost of monitoring these data patterns, and what is the performance impact of moving data in and out of the solid state tier? Another software concern is how intelligent that inspection process is. It has to be based on more than just the last accessed date; conversely, just because a file is read often does not mean it needs to be on the solid state tier. Despite these concerns, automated tiering has been very successful for some vendors, who have been able to work around the potential bottlenecks that data movement may cause. It is a good solution for someone looking to implement a small SSD pool across a broad range of servers and use cases.
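A quick sketch shows why a smarter policy matters. The function below weighs access frequency, recency, and whether the I/O is random or sequential before promoting an extent; frequently read but sequential data streams well off mechanical disk and may not need flash at all. The thresholds, field names, and decision logic here are hypothetical, chosen only to illustrate the idea, not how any particular array implements it.

```python
# Hypothetical thresholds for illustration; a real array would tune
# these per workload and track them at a sub-volume (extent) level.
PROMOTE_MIN_READS = 100      # reads within the monitoring window
DEMOTE_IDLE_SECONDS = 3600   # no access for an hour: demotion candidate

def tier_decision(extent, now):
    """Decide placement for one extent using frequency, recency,
    and access pattern rather than last-accessed date alone.

    `extent` is assumed to carry `last_access` (epoch seconds),
    `reads_in_window`, and `random_fraction` (0.0 to 1.0) counters.
    """
    idle = now - extent["last_access"]
    if idle >= DEMOTE_IDLE_SECONDS:
        return "demote"      # gone cold: return to the mechanical tier

    hot = extent["reads_in_window"] >= PROMOTE_MIN_READS
    random_io = extent["random_fraction"] >= 0.5
    if hot and random_io:
        return "promote"     # frequent random reads benefit most from flash
    return "stay"            # frequent sequential reads stream fine from disk
```

Note the processor cost implied even by this toy version: every extent must be inspected on every pass, which is the monitoring overhead raised above.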
The pure solid state SAN may be the simplest, greenest and least complicated of the integration choices, mostly because there is no integration: it is a direct replacement for the legacy storage system in place today. While cost is always a concern, look for solid state only providers to offer space optimization capabilities in the form of compression or deduplication. Optimization will take away some performance, but we have felt for a while that solid state has I/O to spare for many environments. A pure solid state disk system should be considered when it is time for a storage refresh or upgrade. If a pure solid state approach can be cost justified, it eliminates many of the management variables that storage managers have to deal with.
Finally, the manual approach that we discussed in the last entry can't be ignored; sometimes that is the most direct way to solve the problem. The point here is that which way you choose really depends on the nature of your specific environment at the moment you consider solid state disk. We have tried to provide you with some guidelines to make that decision throughout this series.
Track us on Twitter: http://twitter.com/storageswiss
Subscribe to our RSS feed.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.