Which SSD Integration Method Is Best?


George Crump, President, Storage Switzerland

October 6, 2010

4 Min Read

As we continue our series on determining which solid state storage system makes the most sense for your environment, another area to discuss is which integration method is best. In other words, once the solid state storage is installed, how will you get data to it and from it?

The integration choices include manual placement, which we discussed in our last entry; caching; automated tiering; and solid state only devices. Each of these brings different capabilities to your storage environment, and which one you should use depends on what problem or problems you need to solve, as well as where you are in the storage purchasing lifecycle.

Caching may be the simplest way to provide widespread use of solid state disk. It works much like the cache that is probably already on your current array; it is just a lot larger. Most caching systems use flash memory for the cached data and hold the most frequently accessed data. Because of their size compared to previous cache methods, the amount of data that can be in cache is significantly higher, so the chances of a cache hit go up significantly. There are caching solutions designed to cache block (SAN) storage or file system (NAS) storage. We have yet to see one appliance handle both, and thus far NAS caching seems to be the most popular. Most of these systems bring solid state performance to the environment without any change to applications, which means integration is relatively seamless.
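To make the idea concrete, below is a minimal sketch in Python of the read-cache logic such an appliance performs. The class name, the item-count capacity, and the backend_read callback are all hypothetical illustrations; real caching systems operate on blocks or files and use far more sophisticated admission and eviction policies.

from collections import OrderedDict

class FlashReadCache:
    """Toy model of a flash read cache sitting in front of slower disk.

    Hypothetical sketch only; real appliances work at the block or file
    level with more sophisticated admission and eviction logic.
    """

    def __init__(self, backend_read, capacity_items=1024):
        self.backend_read = backend_read      # function that reads from the disk array
        self.capacity = capacity_items        # how much the flash tier can hold
        self.cache = OrderedDict()            # most recently used items kept at the end
        self.hits = 0
        self.misses = 0

    def read(self, key):
        if key in self.cache:                 # cache hit: serve from flash
            self.cache.move_to_end(key)
            self.hits += 1
            return self.cache[key]
        self.misses += 1                      # cache miss: fall back to the disk array
        data = self.backend_read(key)
        self.cache[key] = data                # admit the data to flash
        if len(self.cache) > self.capacity:   # evict the least recently used item
            self.cache.popitem(last=False)
        return data

Because the flash tier is so much larger than a traditional DRAM cache, the hit rate climbs for typical working sets without any change to the application issuing the reads.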

Automated tiering attempts to provide a balance between caching and manually moving the data to a solid state storage tier. Intelligence is added to the environment that monitors data at a sub-volume level and will promote or demote data based on its access pattern. Automated tiering brings value, especially when trying to integrate solid state disk into a legacy storage system.

Automated tiering is not perfect. There are the issues of mixing legacy mechanical storage and solid state storage that we talked about in our last entry. Beyond the physical hardware challenges, there is also some concern about the software that performs this inspection. What is the storage processor cost of monitoring these data patterns, and what is the performance impact of moving data in and out of the solid state storage tier? Another software concern is how intelligent that inspection process is. It has to be based on more than just the last accessed date; just because a file is read often does not mean it needs to be on the solid state tier. Despite these concerns, automated tiering has been very successful for some vendors, and they have been able to work around the potential bottlenecks that the data movement may cause. It is a good solution for someone looking to implement a small SSD pool across a broad range of servers and use cases.
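As a rough illustration of the promotion and demotion decision, here is a sketch that simply counts accesses per extent over an interval and moves the hottest extents into a fixed-size SSD tier. The thresholds, the extent abstraction, and the interval model are assumptions made for illustration; production tiering engines weigh much more than raw access frequency, which is exactly the intelligence concern raised above.

def rebalance_tiers(access_counts, ssd_extents, ssd_capacity,
                    promote_threshold=100, demote_threshold=10):
    """Decide which extents to promote to SSD and which to demote to disk.

    access_counts: dict mapping extent id -> accesses seen this interval
    ssd_extents:   set of extent ids currently on the SSD tier
    Returns (promote, demote) sets of extent ids. Hypothetical sketch only.
    """
    # Demote extents on SSD that have gone cold since the last interval.
    demote = {ext for ext in ssd_extents
              if access_counts.get(ext, 0) < demote_threshold}

    # Candidates are hot extents not already on SSD, hottest first.
    candidates = sorted(
        (ext for ext, hits in access_counts.items()
         if hits >= promote_threshold and ext not in ssd_extents),
        key=lambda ext: access_counts[ext], reverse=True)

    # Promote only as many as will fit once the cold extents are demoted.
    free_slots = ssd_capacity - (len(ssd_extents) - len(demote))
    promote = set(candidates[:max(free_slots, 0)])
    return promote, demote

Even in this toy version the cost questions are visible: counters must be maintained on every I/O, and each promotion or demotion is a data copy that competes with production traffic.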

The pure solid state SAN may be the simplest, greenest, and least complicated of any of the integration choices, mostly because there is no integration: it is a direct replacement for the legacy storage system that is in place today. While cost is always a concern, look for solid state only providers to offer space optimization capabilities in the form of compression or deduplication. Optimization will take away some of the performance, but we have felt for a while that solid state has I/O to spare in many environments. A pure solid state disk system should be considered if it is time for a storage refresh or upgrade. If a pure solid state approach can be cost justified, it eliminates many of the management variables that storage managers have to deal with.
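For readers wondering how deduplication buys back flash capacity at some compute cost, here is a simplified sketch of content-hash based block deduplication. The block size, hash choice, and in-memory block store are assumptions for illustration; shipping systems add reference counting, persistent metadata, and collision handling.

import hashlib

def write_deduplicated(data, block_size, block_store):
    """Split incoming data into blocks and store each unique block once.

    block_store: dict mapping block fingerprint -> block bytes (toy store).
    Returns the list of fingerprints that describe this write.
    Hypothetical sketch; real systems persist metadata and refcounts.
    """
    fingerprints = []
    for offset in range(0, len(data), block_size):
        block = data[offset:offset + block_size]
        digest = hashlib.sha256(block).hexdigest()  # hashing every block is the CPU cost
        if digest not in block_store:               # only new content consumes flash capacity
            block_store[digest] = block
        fingerprints.append(digest)
    return fingerprints

The per-block hashing is where the performance trade-off mentioned above comes from, and it is why having I/O and CPU headroom to spare matters.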

Finally, the manual approach that we discussed in the last entry can't be ignored; sometimes it is the most direct way to solve the problem. The point here is that which way you choose really depends on the nature of your specific environment at the moment in time you consider solid state disk. We have tried to provide you with some guidelines for making that decision throughout this series.

What Solid State Storage Form Factor Is Best?

What Solid State Storage Form Factor Is Best? Part II

What Solid State Storage Form Factor Is Best? Part III

Which Solid State Disk Is Best? Part IV

What Solid State Form Factor Is Best - Integration

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.


