If you're in the process of selecting a new primary storage system, the next step is to consider one that supports thin provisioning and intelligent data movement. Archiving will already have driven down the amount of primary storage you need to purchase; thin provisioning reduces that even further. Thin provisioning allows you to allocate as much storage as an application may need, but consume that storage only as it is actually used. According to some studies, this can cut purchased capacity by as much as 70%.
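A filesystem sparse file is a handy way to see the allocate-versus-consume principle in action. This is a sketch of the concept only, not how a storage array implements it, and it assumes a filesystem that supports sparse files (ext4, XFS, and the like):

```python
import os
import tempfile

# Thin provisioning in miniature: a sparse file is "allocated" at its full
# logical size, but the filesystem consumes physical blocks only for the
# regions actually written.
path = os.path.join(tempfile.mkdtemp(), "thin.img")
with open(path, "wb") as f:
    f.truncate(100 * 1024 * 1024)  # provision 100 MB of logical capacity
    f.write(b"application data")   # consume only what the app actually writes

st = os.stat(path)
logical = st.st_size           # what was provisioned
physical = st.st_blocks * 512  # what is actually consumed on disk
print(f"provisioned {logical} bytes, consuming {physical} bytes")
```

The gap between `logical` and `physical` is exactly the capacity a thin-provisioned array lets you defer buying.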
Whether or not you decide on a new primary storage system, the next step should be an inline, real-time data compression device like those provided by Storwize. These devices deliver a 60%-plus reduction of NFS- and CIFS-mounted data with little to no performance impact. Even databases and VMware images compress well, yet maintain or even improve overall performance. The reason real-time compression comes so early in the process is that it's simple to implement and shows reduction across all data.
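To get a feel for why compression pays off across file-server data, here is a rough stand-in using a general-purpose compressor on some repetitive, invoice-like text. The sample data is made up, and zlib is only an illustration; Storwize uses its own real-time compression engine:

```python
import zlib

# Typical file-server data (logs, documents, exports) is highly redundant,
# so even a general-purpose compressor recovers a large share of capacity.
sample = b"INVOICE 2009-10 customer=ACME status=PAID total=1234.00\n" * 10000
compressed = zlib.compress(sample)
savings = 1 - len(compressed) / len(sample)
print(f"{len(sample)} -> {len(compressed)} bytes ({savings:.0%} saved)")
```

Real-world mixes won't compress this dramatically, but the mechanism is the same: redundancy in the data stream becomes reclaimed capacity.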
Finally there is deduplication. There are two types and, depending on your environment, either can have a big payoff. First is general-purpose deduplication, right now championed primarily by Network Appliance, although Riverbed has announced plans to move its WAN deduplication technology into primary storage. The differences are worth a separate blog entry, which we'll get into later. Ideal candidates for general-purpose deduplication are VMware images and, to some extent, user home directories.
Second, there are application-specific deduplicators, represented now by Ocarina Networks. By focusing on a particular application like ECO-System, these solutions can eliminate redundant data that might get past general-purpose deduplication tools.
For example, a photo site might have thousands of images that suffer from red-eye. If the site corrects the red-eye, each corrected photo is stored as a new image, and most deduplication solutions would treat the original and corrected versions as entirely unique files, storing both in full. An application-specific solution would identify them as similar files and store only the unique bytes that make up the corrected images. While the use case is narrower than general-purpose deduplication, the payoff can be enormous.
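A toy fixed-block deduplicator makes the distinction concrete. The example below is a hypothetical sketch, not any vendor's algorithm: it hashes 4 KB blocks and stores each unique block once. An in-place edit (like the red-eye fix) shares almost every block with the original, but in practice, photo-editing software re-encodes the whole file, so few blocks line up and general-purpose dedup sees two unique files. Recognizing the similarity anyway is the gap an application-aware deduplicator closes:

```python
import hashlib
import os

BLOCK = 4096

def unique_blocks(files):
    """Count the physical blocks needed when identical blocks are stored once."""
    store = set()
    for data in files:
        for i in range(0, len(data), BLOCK):
            store.add(hashlib.sha256(data[i:i + BLOCK]).digest())
    return len(store)

# Hypothetical example: an "original" image and a corrected copy that
# differs only in one small region (the red-eye fix).
original = os.urandom(256 * 1024)           # 256 KB of stand-in image data
corrected = bytearray(original)
corrected[5000:5016] = b"\x00" * 16         # small in-place edit

logical = (len(original) + len(corrected)) // BLOCK
physical = unique_blocks([original, bytes(corrected)])
print(f"logical blocks: {logical}, stored blocks: {physical}")
```

Only the one block touched by the edit is stored twice; the rest dedupe away. Re-encode the corrected file, though, and nearly all 128 logical blocks would come back as unique.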
These solutions aren't mutually exclusive and in many cases complement each other. While this is the recommended workflow, the important part is to get started with any of these steps and then revisit the others as time and need allow.
There is still time. Join us for our Webcast today at noon CST: Demystifying Primary Storage Data Reduction.
Track us on Twitter: http://twitter.com/storageswiss.
Subscribe to our RSS feed.
George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.