
George Crump, President, Storage Switzerland

May 5, 2008


As data deduplication matured last year, the constant question I was asked by industry analysts was "Isn't this just a feature?" The question implied that anyone specifically in the data deduplication space was going to be erased by the larger manufacturers as they added deduplication to their offerings. It seemed logical, but it hasn't occurred. The major manufacturers have struggled to put together viable strategies for data reduction and, to some extent, it's really not in their best interests to reduce the amount of storage required.

The biggest challenge? For data deduplication to work well, it needs to be tightly integrated into the existing operating system of the disk array itself. If your storage array OS has a code base that is three, four, or more years old, integrating a dramatically new way of placing data on that disk becomes quite complex. The work-around to this problem is what is commonly called a post-process deduplication step: post-process data deduplication walks the disk at certain intervals to determine if there are redundant areas.
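To make the idea concrete, here is a minimal sketch of what a post-process pass looks like in principle: walk the data that has landed since the last pass, fingerprint fixed-size segments, and note which segments have already been stored. The names (dedupe_pass, SEGMENT_SIZE, the in-memory index dictionary) and the fixed-size segmenting are illustrative assumptions, not any vendor's implementation.

```python
import hashlib
import os

SEGMENT_SIZE = 64 * 1024  # illustrative fixed-size segment


def dedupe_pass(staging_dir, index):
    """One post-process pass: walk files waiting to be examined and
    record which segments duplicate ones already in the index."""
    bytes_saved = 0
    for root, _dirs, files in os.walk(staging_dir):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                offset = 0
                while True:
                    segment = f.read(SEGMENT_SIZE)
                    if not segment:
                        break
                    fingerprint = hashlib.sha256(segment).hexdigest()
                    if fingerprint in index:
                        # Duplicate: a real system would replace this
                        # segment with a pointer to the stored copy.
                        bytes_saved += len(segment)
                    else:
                        index[fingerprint] = (path, offset)
                    offset += len(segment)
    return bytes_saved
```

The key point is that none of this happens as data is written; it is a separate sweep that runs after ingest, which is what creates the problems described next.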

The challenges with this method are that it creates two storage areas to manage: an area that is waiting to be examined for duplicates and an area for the examined data. It also delays the creation of a DR copy of the data. A common use for deduplicated systems is to leverage their ability to store only unique data segments and replicate only those new segments to the remote location. With the post-process method, you have to wait until the deduplication step is complete before data can be replicated. The post-process step can be very time consuming and can delay the update of the DR site by six to 10 hours.
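Continuing the sketch above, the replication step can only ship the segments the dedupe pass has identified as new, which is why the DR copy trails the post-process step rather than the original ingest. The replicate_new_segments and send names are placeholders for illustration, assuming the same index built by dedupe_pass.

```python
def replicate_new_segments(index, replicated, send):
    """Ship only segments the DR site has not seen yet. With post-process
    deduplication this can run only after dedupe_pass() finishes, which is
    where the multi-hour DR lag comes from; an in-line system could call
    send() as each new segment is ingested."""
    for fingerprint, location in index.items():
        if fingerprint not in replicated:
            send(fingerprint, location)  # transfer the unique segment
            replicated.add(fingerprint)
```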

As a result, companies that started with data deduplication as a core part of their technology (Data Domain, Permabit, Diligent) have a distinct advantage. The other companies will have to make post-process data deduplication much more seamless than it is today, exit deduplication altogether, or rewrite their code bases to support in-line data deduplication.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
