Merely identifying active data and moving it to a high performance tier is a brute force approach that should eventually give way to something more intelligent where possible. Just because data is active does not mean it belongs on the fastest and most expensive tier of storage. In most automated tiering environments the fast tier is a finite repository, and it may be impractical to keep all of the active data there. Very active but unimportant data could prevent slightly less active data that is important when it is accessed from ever making it to the high speed tier.
What is needed is a more granular QoS capability in automated tiering systems: the ability to include or exclude data by type or location, for example. Eventually these systems need to learn who the requester is. If the request comes from a small number of users on a relatively slow network connection, leave the data on mechanical storage. If the requester is an application or a large number of users, then move the data up to the performance tier.
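To make that idea concrete, here is a minimal sketch of a requester-aware promotion policy. It is not any vendor's actual implementation; the field names and thresholds are illustrative assumptions, and a real system would draw this telemetry from its own instrumentation.

    from dataclasses import dataclass

    @dataclass
    class AccessProfile:
        requester_type: str        # "user" or "application" (assumed labels)
        concurrent_requesters: int # how many clients are hitting the data
        network_mbps: float        # requester-side link speed
        accesses_per_hour: int     # simple activity measure

    def choose_tier(profile: AccessProfile,
                    min_activity: int = 10,
                    slow_link_mbps: float = 100.0,
                    many_users: int = 25) -> str:
        """Return 'ssd' or 'hdd' for a piece of data based on who is asking."""
        if profile.accesses_per_hour < min_activity:
            return "hdd"        # not active enough to justify promotion
        if profile.requester_type == "application":
            return "ssd"        # applications can consume SSD-class performance
        if profile.concurrent_requesters >= many_users:
            return "ssd"        # many simultaneous users amplify the benefit
        if profile.network_mbps <= slow_link_mbps:
            return "hdd"        # a slow link hides the fast tier's advantage
        return "ssd"

    # A few users over a slow connection stay on mechanical disk,
    # while an active application is promoted.
    print(choose_tier(AccessProfile("user", 3, 50.0, 200)))            # -> hdd
    print(choose_tier(AccessProfile("application", 1, 10000.0, 200)))  # -> ssd

The point of the sketch is simply that activity alone is not the deciding input; who is asking, and over what connection, changes whether the fast tier actually pays off.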
For the time being, solid state disk (SSD) and DRAM are finite resources. You want to make sure you are putting not just active data on these tiers, but data that can actually take advantage of them.
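One illustrative way to test "can take advantage of the tier" is to look at the I/O pattern: small, mostly random I/O gains the most from flash, while large sequential streams are often served well enough by spinning disk. The thresholds below are assumptions for the sake of the example, not a recommendation.

    def benefits_from_fast_tier(avg_io_size_kb: float,
                                random_io_fraction: float) -> bool:
        """Rough filter: promote workloads that are small-block and mostly random."""
        return avg_io_size_kb <= 64 and random_io_fraction >= 0.5

    print(benefits_from_fast_tier(8, 0.9))       # database-like workload -> True
    print(benefits_from_fast_tier(1024, 0.05))   # backup/streaming workload -> False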
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.