If you stop there, though, you're missing an important part of the picture: the downhill-rolling effect of primary storage deduplication. If, and that is an important if, your primary storage deduplication technology can keep the data in an optimized state throughout its entire life cycle, then you can see tremendous residual value in primary storage deduplication. With primary storage deduplication, snapshots, replication, clones, and extra copies of data (just-in-case copies) all now come at near-zero capacity cost. For example, you can perform dumps of your database every ten minutes if you want to; deduplication will curtail the capacity growth that would normally create.
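To see why extra copies become nearly free, here is a minimal sketch of content-hash block deduplication. The class and method names are illustrative, not any vendor's actual implementation: identical blocks are stored once, so a second full copy of a file adds almost no physical capacity.

```python
import hashlib

class DedupStore:
    """Toy block-level deduplication store (illustrative only):
    identical fixed-size blocks are stored once, keyed by content hash."""
    BLOCK = 4096

    def __init__(self):
        self.blocks = {}   # content hash -> block bytes (the dedup metadata)
        self.files = {}    # file name -> ordered list of block hashes

    def write(self, name, data):
        hashes = []
        for i in range(0, len(data), self.BLOCK):
            chunk = data[i:i + self.BLOCK]
            h = hashlib.sha256(chunk).hexdigest()
            self.blocks.setdefault(h, chunk)  # duplicate blocks add no capacity
            hashes.append(h)
        self.files[name] = hashes

    def read(self, name):
        # Reassemble the logical file from shared physical blocks
        return b"".join(self.blocks[h] for h in self.files[name])

    def physical_bytes(self):
        # Capacity actually consumed, after deduplication
        return sum(len(b) for b in self.blocks.values())
```

Writing a database dump and then a "just in case" copy of it consumes physical capacity only for the unique blocks; the copy itself is just another list of hash references.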
The key issue is whether, and when, primary storage deduplication will need to "re-inflate" data to a non-optimized state. Optimization throughout the data's life cycle, across every tier of storage it lands on, is critical for making deduplication make sense in primary storage. In fairness, there may be times you want to re-inflate on purpose and remove the dependency on the deduplication hash table. That decision will depend on how much you trust your deduplication technology to maintain its metadata and provide rich data integrity features.
Deduplication technology tries to fix the capacity explosion problem faced by most data centers. Where deduplication is succeeding right now, in backup repositories, it is fixing that problem after it has already occurred. Primary storage deduplication that maintains data in its optimized state fixes the problem before it becomes a problem. Properly implemented, primary storage deduplication could significantly reduce the storage demands of your data center.
Track us on Twitter: http://twitter.com/storageswiss
Subscribe to our RSS feed.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.