Commentary | Adrian Lane | 7/12/2011 11:26 AM

Federated Data And Security

'Data virtualization' is a misnomer -- it's 'federated data.' Here's why it's important

Forrester recently published a research report, titled "Data Virtualization Reaches Critical Mass," to communicate data management trends -- and it has some important implications for data security.

I'll say up front that "data virtualization" is a terrible name for the market being described, that database "consolidation" is not a trend I am seeing, and that extract-transform-load (ETL) is not causing any more data quality problems than it did a decade ago. Still, the report contains some good information, and I generally agree with many of its conclusions about where the market is heading.

There are critical changes coming to the way we consume data. Some of this is driven by the way we collect information, and some is driven by changes to the infrastructure (virtualization and cloud technologies). I think the key insight here is that data federation capabilities are evolving to meet demand, and that data management tools will need to change as well. In this post, I want to discuss what this means in terms of data security.

But first, let's get the terminology straight, because a couple of definitions are floating around: This market is actually data federation. The data is not virtual -- it's real. We are not pretending to retain the original data format; rather, we are combining all formats and hiding the details from the consumer of the information. The data can be stored, or it can be dynamically acquired. The source and format of the data are variable; the value proposition is being able to bring disparate systems together and consume data regardless of the underlying format. Virtualization is a sexier term than federation, which is why vendors choose to use it, but federation is what's going on here.

What does this have to do with database security? The trend is this: The concept of a "database" is reverting to the nonrelational meaning of any container of data. Applications no longer care whether data comes from a relational database, a nonrelational database, the results of a BI system query, Web site scraping, a Google search, an XML stream, the current geolocations of mobile users, or pretty much any data source. The real trend is for applications to be able to access and analyze different sources regardless of the form data takes.
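To make that concrete, here is a minimal sketch of the idea: a thin access layer that hands the application one uniform stream of records, whether they came from a relational table, a JSON feed, or an XML stream. The source names, table, and fields are hypothetical, and real federation products do far more (query planning, caching, pushdown), but the consumer-facing shape is the point.

```python
import json
import sqlite3
import xml.etree.ElementTree as ET

def records_from_sql(conn, query):
    """Yield rows from a relational source as plain dicts."""
    cursor = conn.execute(query)
    columns = [col[0] for col in cursor.description]
    for row in cursor:
        yield dict(zip(columns, row))

def records_from_json(text):
    """Yield records from a JSON feed (assumed to be a list of objects)."""
    for obj in json.loads(text):
        yield obj

def records_from_xml(text, record_tag):
    """Yield records from an XML stream, one per <record_tag> element."""
    for elem in ET.fromstring(text).iter(record_tag):
        yield {child.tag: child.text for child in elem}

def federated_customers(sql_conn, json_feed, xml_feed):
    """The consumer sees one stream of dicts; the source format is hidden."""
    yield from records_from_sql(sql_conn, "SELECT id, name FROM customers")
    yield from records_from_json(json_feed)
    yield from records_from_xml(xml_feed, "customer")
```

The application code downstream never branches on "is this SQL or XML?" -- which is exactly the property that makes federation attractive, and, as discussed below, exactly what makes source-level validation easy to skip.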

What's important here is to understand that federated data systems take care of mapping these data sources for you, seamlessly, behind the scenes. They do it by having access to the metadata that interprets data structure and type on the fly, so applications can use data regardless of source. The technology works dynamically, like a database abstraction layer (e.g., Hibernate), or as a data transformation function (i.e., ETL). Note that there are not many providers today -- only a handful of data integration vendors, relational database vendors, platform-as-a-service vendors, and custom applications offer this capability.
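A rough sketch of that metadata-driven mapping, under assumed names (the catalog, sources, and fields are invented for illustration): each source declares how its raw fields map onto a canonical schema and how values should be typed, and a single normalize step applies that metadata on the fly.

```python
from datetime import datetime

# Hypothetical metadata catalog: each source declares which raw field maps
# onto which canonical field, and how to cast the value.
CATALOG = {
    "crm_db":   {"cust_id": ("customer_id", int),
                 "created": ("created_at", datetime.fromisoformat)},
    "web_feed": {"userId":  ("customer_id", int),
                 "signup":  ("created_at", datetime.fromisoformat)},
}

def normalize(source, raw_record):
    """Use the source's metadata to map raw fields onto the canonical schema."""
    mapping = CATALOG[source]
    out = {}
    for field, value in raw_record.items():
        if field in mapping:
            canonical_name, cast = mapping[field]
            out[canonical_name] = cast(value)
    return out

# Both records come out in the same shape, whatever each source called things.
normalize("crm_db",   {"cust_id": "42", "created": "2011-07-12T11:26:00"})
normalize("web_feed", {"userId": "42",  "signup":  "2011-07-12T11:26:00"})
```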

For those of you familiar with SQL injection attacks, you know they are possible when we don't validate input variables. One of the issues with federating data from multiple sources is validating the application that sends us data, as well as the data itself. Because processing speed is the typical measure of success, data validation tends to be shortchanged. Much like drive-by malware, if you don't validate data coming from different sources, you're likely to receive bad data or malicious content. XML schema and data validation tools can deal with complex data types. The ability to "mask" data streams quickly becomes a critical requirement -- both for hiding sensitive data and for filtering bad content -- when moving data between production platforms, or from production to nonsecured test environments. Before data is exposed to federation, you need to know whether sensitive information is present and what to do with it.
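Here is a minimal sketch of the two controls described above, with hypothetical field names: validate each inbound record against the schema you expect before it enters the federated view, and mask sensitive fields before the data leaves a production platform.

```python
import re

SSN_PATTERN = re.compile(r"^\d{3}-\d{2}-\d{4}$")

def validate_record(record):
    """Reject records that don't match the expected schema -- don't trust
    the sending application just because it is 'internal'."""
    if not isinstance(record.get("customer_id"), int):
        raise ValueError("customer_id must be an integer")
    ssn = record.get("ssn", "")
    if ssn and not SSN_PATTERN.match(ssn):
        raise ValueError("ssn is malformed")
    return record

def mask_record(record):
    """Mask sensitive fields before the record leaves production,
    e.g., when copying data into a nonsecured test environment."""
    masked = dict(record)
    if masked.get("ssn"):
        masked["ssn"] = "XXX-XX-" + masked["ssn"][-4:]
    return masked

clean = mask_record(validate_record({"customer_id": 42, "ssn": "078-05-1120"}))
# -> {'customer_id': 42, 'ssn': 'XXX-XX-1120'}
```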

As the Forrester report indicates, data discovery tools will need to adapt to deal with different data sources. I anticipate that database activity monitoring will need to include both file activity monitoring and DLP-like analysis capabilities in this type of environment.
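A toy sketch of what DLP-like discovery looks like against federated records -- the patterns are illustrative only; real products use far richer rule sets, plus validation such as Luhn checks on card numbers:

```python
import re

# Illustrative DLP-style content patterns.
PATTERNS = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
}

def discover_sensitive(records):
    """Scan records from any federated source and report which fields
    appear to hold sensitive content."""
    findings = []
    for i, record in enumerate(records):
        for field, value in record.items():
            for label, pattern in PATTERNS.items():
                if isinstance(value, str) and pattern.search(value):
                    findings.append((i, field, label))
    return findings

discover_sensitive([{"note": "SSN on file: 078-05-1120"}])
# -> [(0, 'note', 'ssn')]
```

Because the scan operates on the normalized records rather than on any one storage engine, the same discovery logic applies no matter where the data originated -- which is what adapting to federated sources amounts to.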

Undoubtedly, this change is coming, but it creates new security challenges. The producer-consumer data model creates new trust issues, and existing data and database security tools that rely on format will need to evolve. Relational database vendors and masking vendors both offer tools in their existing products that can help, but those tools will need to evolve as well.

Adrian Lane is an analyst/CTO with Securosis LLC, an independent security consulting practice. Special to Dark Reading.
