Commentary
8/27/2009 11:49 AM
George Crump

Making Data An Asset

Data is often looked at as a liability: something that has to be stored, protected, and preserved. Storing it has led to massively expanding storage environments and initiatives such as archiving. Protecting it has led to incredibly elaborate backup and recovery schemes, and preserving it has led to eDiscovery and compliance. All of these processes are reactive. How can the view of data be changed to proactive, so that data is used as an asset?

The first step is to build on the preservation solutions already in place and broaden their scope. Preserving data often involves some sort of eDiscovery component, which typically provides context-based indexing and classification of data, or of a subset of data. Applying this type of technology to all of your data could tell you not only where your data is but also what it contains. That builds the foundation for turning data into an asset: when a research request comes up, the information can be found based on its content, and found in an instant.
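As a rough illustration of what content-based indexing provides, the sketch below builds a toy inverted index over a file tree so files can be found by what they contain rather than where they sit. It is a conceptual example only; the directory path, function names, and word-level tokenization are assumptions made for illustration, not a description of how Kazeon, Index Engines, or any other product works.

# Conceptual sketch only: a toy content index, not any vendor's product.
# It tokenizes files under a directory and builds an inverted index so
# files can be located by what they contain, not just where they live.
import os
import re
from collections import defaultdict

def build_content_index(root):
    """Map each word to the set of file paths whose content contains it."""
    index = defaultdict(set)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue  # skip unreadable files
            for word in re.findall(r"[A-Za-z0-9]+", text.lower()):
                index[word].add(path)
    return index

def search(index, term):
    """Return the files whose content mentions the term."""
    return sorted(index.get(term.lower(), set()))

if __name__ == "__main__":
    idx = build_content_index("/data/shares")  # hypothetical file share path
    print(search(idx, "contract"))

Even this toy version shows the trade-off: the index answers content questions instantly, but building it means reading every file, which is why indexing efficiency matters so much at enterprise scale.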

Companies like Kazeon and Index Engines have built a good business in the litigation-readiness space, which requires indexing only the smaller subset of data likely to have a discovery request generated against it. What if enterprise strength were added to these solutions so their use became mainstream across all the data in the enterprise?

Part of such a solution means not requiring a never-ending array of appliances to chew through more and more data. Ideally, an IT manager wants the ability to plug in one box and index the enterprise in relatively quick order. Telling an IT manager that he needs 10 or 20 indexing appliances to cover the enterprise is not going to be popular; the potential value of knowing exactly what is in the enterprise will be overshadowed by the complexity of implementing and managing 10 to 20 additional appliances. More efficient indexing leads to simpler implementation and management, which in turn leads to faster adoption.

More robust indexing is going to come from better algorithms and from application-aware file examination that gets through the data faster. For example, Index Engines has done specific work around Microsoft Exchange that enables much faster indexing of those stores.
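To show the general idea behind application-aware examination, the sketch below routes each file to an extractor that understands its format, so the indexer spends its time on meaningful text rather than on format overhead. The file types and function names are assumptions for illustration; this is a generic analogy, not Index Engines' Exchange-specific implementation.

# Illustrative sketch only, not Index Engines' Exchange-specific work:
# route each file to an extractor that understands its format, so the
# indexer reads meaningful text instead of raw container bytes.
import csv
from pathlib import Path

def extract_plain_text(path):
    """Read a text-like file as-is."""
    return Path(path).read_text(errors="ignore")

def extract_csv_values(path):
    """Keep only the cell values; delimiters and quoting are noise."""
    with open(path, newline="", errors="ignore") as f:
        return " ".join(cell for row in csv.reader(f) for cell in row)

# Hypothetical mapping of file types to format-aware extractors.
EXTRACTORS = {
    ".txt": extract_plain_text,
    ".log": extract_plain_text,
    ".csv": extract_csv_values,
}

def extract_text(path):
    """Dispatch on file type, falling back to a raw text read."""
    handler = EXTRACTORS.get(Path(path).suffix.lower(), extract_plain_text)
    return handler(path)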

To make data an asset, eDiscovery has to expand into enterprise discovery. Mainstreaming data discovery across the enterprise will mean fewer indexing appliances that are application aware. As enterprise strength is added to what began as eDiscovery, IT professionals can change the view of stored data from a liability to an asset, turning data from a cost center into an investment.

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
