Automated tiering, the transparent movement of data based on activity or type, is quickly proving itself to be a hot consideration for storage managers. But why stop at automated tiering? Can't we make the entire storage ecosystem respond automatically based on environmental conditions and its available resources?
Driven in large part by storage companies and storage managers trying to decide how best to take advantage of solid state disk (SSD), automated tiering solutions aim to automate the movement of hot data. EMC, for example, this week released 1.0 of its FAST (Fully Automated Storage Tiering); Howard Marks gives a great summary over on Network Computing. Automated tiering is not new. Compellent, 3PAR, Dataram and FalconStor have been doing something similar for a while on block storage, and we have also seen companies like Storspeed and Avere offer similar solutions on NAS-based systems.
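At its core, a tiering engine just classifies data by recent activity and places hot data on the fast tier. A minimal sketch of that idea follows; the threshold and volume names are illustrative assumptions, not any vendor's actual algorithm:

```python
# Hypothetical tiering policy sketch: promote volumes whose recent access
# count crosses a threshold to the SSD tier, leave the rest on disk.
# HOT_THRESHOLD is an invented tuning knob, not a real product setting.

HOT_THRESHOLD = 1000  # accesses per measurement interval (illustrative)

def assign_tier(access_counts):
    """Map each volume name to 'ssd' or 'hdd' based on recent activity."""
    return {
        volume: "ssd" if count >= HOT_THRESHOLD else "hdd"
        for volume, count in access_counts.items()
    }

placement = assign_tier({"oltp_db": 52000, "archive": 12})
```

A real implementation would of course work at block or file granularity and re-evaluate continuously, but the decision at each step reduces to a classification like this.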
Again, why stop at tiering? Data protection decisions could be automated in much the same way. Here the industry could learn from the Data Robotics Drobo, which can transparently adjust protection levels based on available capacity. Enterprise storage systems, in the same manner, should be able to respond to the insertion of any amount of storage, classify that storage and decide how it can improve the current data protection method. If you have enough capacity, why not mirror everything initially, then downgrade to RAID 6 and then RAID 5 as capacity becomes scarce? Of course you would want some notification or warning from the system before it makes these changes, but why should storage administrators have to waste time making them manually?
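The mirror-then-RAID-6-then-RAID-5 progression described above is really just a policy function of free capacity. A sketch, with entirely arbitrary example thresholds:

```python
# Illustrative capacity-driven protection policy. The cutover points
# (50% and 25% free) are invented for this example; a real system would
# tune them and warn the administrator before changing schemes.

def choose_protection(free_fraction):
    """Pick a protection scheme from the fraction of capacity still free."""
    if free_fraction > 0.50:
        return "mirror"  # plenty of room: keep full second copies
    if free_fraction > 0.25:
        return "raid6"   # capacity tightening: dual parity
    return "raid5"       # capacity scarce: single parity
```

The interesting engineering problem is not this decision but the transparent, online migration between schemes, which is what Drobo demonstrated at the consumer scale.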
Along the same lines, if you implement a second system that has spare capacity, why not have the primary system automatically start performing continuous data protection (CDP) of its most active volumes to the spare capacity on the secondary system? Further, if two such systems find each other across the network, maybe have them perform WAN replication. Some of today's storage systems are essentially running on a Linux or Windows core; why not have those systems do an image dump of data to a connected tape or deduplication system?
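That discovery-driven behavior can be sketched as a simple mapping from discovered peers to protection actions. The peer record schema here (`name`, `remote`, `spare_tb`) is invented for illustration:

```python
# Hypothetical sketch: decide what automated protection to run against
# each discovered peer system. Local peers with spare capacity receive
# CDP of the most active volumes; remote peers become WAN replication
# targets. The dict keys are an assumed schema, not a real API.

def protection_actions(peers):
    """Return (action, target) pairs for each usable peer system."""
    actions = []
    for peer in peers:
        if peer["spare_tb"] <= 0:
            continue  # no spare capacity: nothing to use on this peer
        if peer["remote"]:
            actions.append(("wan_replicate", peer["name"]))
        else:
            actions.append(("cdp_active_volumes", peer["name"]))
    return actions
```

The point is that the trigger is environmental (a peer appearing on the network), not an administrator configuring a replication pair by hand.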
There are downsides that need to be worked through with this level of automation, and there are going to be storage guys like me who want the ability to tune and tweak. For a growing number of IT professionals, however, there is simply too much data to manage it all by hand. The thought of a Drobo-like black box for the enterprise that automatically understands the storage demands of the environment and then provides the best performance and reliability based on its available resources could have strong appeal.
Track us on Twitter: http://twitter.com/storageswiss
Subscribe to our RSS feed.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.