
Risk

12/21/2009 12:53 PM

4 Factors To Consider Before Firing Up That DLP Solution

There's an ugly truth that DLP vendors don't like to talk about

Data thieves target intellectual property, credit card data, and your customers' personally identifiable information. Data loss prevention products aim to identify where such information resides in your organization, and help prevent it from falling into the wrong hands. When used properly, DLP technology can be a critical part of a risk management program. But there's an ugly truth that DLP vendors don't like to talk about: Managing DLP on a large scale can drag your staff under like a concrete block tied to their ankles.

Don't believe us? Consider how far and wide a comprehensive data loss prevention product extends its reach through your organization. User endpoints must run (and you must support) a new agent. E-mail and network communications will be scanned for content, and data repositories, such as file shares, databases, and PCs, will be interrogated for violations. All of this technology requires policies tailored to your environment and your level of risk tolerance.

And regardless of how automated you make it, DLP systems will set off alarms. The end result is more work for your administrators.

How much more work? The answer depends on the size of your organization and the level of deployment, but we've identified four areas where DLP technology will make demands on your resources.

1. Policy

Before you fire off your first scan to see just how much sensitive data is floating around the network, you'll need to create the policies that define appropriate use of corporate information. Some policies are fairly obvious--there's no good business reason for an employee to upload a spreadsheet full of Social Security numbers to his Facebook profile. But other policies will have to go through some contortions to accommodate workflows. For example, you may generally forbid employees from e-mailing customer information outside the organization--except for five people in one business unit who have to send records to seven employees at a business partner, but only on the last Friday of every month and never without sign-off from at least three lawyers from in-house counsel.
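To see what that kind of carve-out looks like once it's written down, here's a minimal sketch in Python. The rule schema, sender addresses, and partner domain are all invented for illustration--no vendor's actual policy format looks like this--but it captures the moving parts: a default block action, a narrow exception, and a date condition.

```python
# Hypothetical policy rule: block outbound e-mail containing customer records,
# with a narrow exception for one partner workflow. The schema and addresses
# are invented for illustration; real DLP products use their own policy formats.

from datetime import date
import calendar

POLICY = {
    "name": "customer-records-outbound-email",
    "default_action": "block",
    "exception": {
        "allowed_senders": {"alice@corp.example", "bob@corp.example"},   # the approved employees
        "allowed_recipient_domain": "partner.example",                   # the business partner
        "required_legal_signoffs": 3,                                    # in-house counsel approvals
    },
}

def is_last_friday(d: date) -> bool:
    """True if d is the last Friday of its month."""
    last_day = calendar.monthrange(d.year, d.month)[1]
    fridays = [day for day in range(1, last_day + 1)
               if date(d.year, d.month, day).weekday() == calendar.FRIDAY]
    return d.day == fridays[-1]

def evaluate(sender: str, recipient: str, signoffs: int, when: date) -> str:
    exc = POLICY["exception"]
    if (sender in exc["allowed_senders"]
            and recipient.endswith("@" + exc["allowed_recipient_domain"])
            and signoffs >= exc["required_legal_signoffs"]
            and is_last_friday(when)):
        return "allow"
    return POLICY["default_action"]

if __name__ == "__main__":
    print(evaluate("alice@corp.example", "jane@partner.example", 3, date(2009, 12, 25)))  # allow
    print(evaluate("alice@corp.example", "jane@partner.example", 3, date(2009, 12, 18)))  # block
```

Even in this toy form, every clause is something an administrator has to define, document, and revisit when the workflow changes.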

Out of the box, most DLP systems come with pre-built policies, especially for regulations such as PCI and HIPAA. However, you'll most likely need to build new policies or customize existing ones to meet your particular security needs, risk profile, and regulatory landscape. While many DLP vendors provide graphical, wizard-driven tools for this process, creating and tweaking policies is rarely a point-and-click exercise.

And once you have a set of policies written, it's unlikely they'll go unchanged for long. New compliance mandates may arise, new kinds of information will need to be protected, and new business practices may emerge. As a result, it's important to continually update your policies to ensure that they match all the requirements you have to meet for protecting sensitive data.

2. Data Discovery

Once your policies are in order, the next step is data discovery, because to properly protect your data, you must first know where it is. In midsize to large environments, you'll have at least one appliance dedicated to data discovery and content analysis. In addition, if you need to scan massive amounts of data in parallel for information that violates your appropriate use policy, you'll need to deploy additional servers for scanning. Fortunately, many top-tier systems now support scanning huge data stores with grid technology, but the fact remains that if you have many terabytes of data that you need to scan on an ongoing basis, then you're going to have to manage more servers to do it.
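To give a feel for what discovery is doing under the hood, here's a toy sketch in Python that walks a file share and checks files in parallel against a single pattern. The mount point and the one-pattern check are stand-ins; a real deployment pushes this work onto dedicated appliances or a grid of scan servers and handles far more file types and policies.

```python
# Toy illustration of parallel content discovery across a file share.
# Real DLP discovery runs on dedicated appliances or a grid of scan servers;
# this sketch just walks a directory tree and checks each file for one pattern.

import os
import re
from concurrent.futures import ProcessPoolExecutor

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # naive U.S. SSN format

def scan_file(path: str):
    """Return (path, number of matches) for one file; skip unreadable files."""
    try:
        with open(path, "r", errors="ignore") as fh:
            return path, len(SSN_PATTERN.findall(fh.read()))
    except OSError:
        return path, 0

def discover(root: str, workers: int = 4):
    paths = [os.path.join(dirpath, name)
             for dirpath, _, names in os.walk(root)
             for name in names]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for path, hits in pool.map(scan_file, paths):
            if hits:
                print(f"{path}: {hits} possible SSN(s)")

if __name__ == "__main__":
    discover("/mnt/fileshare")   # hypothetical mount point for the share being scanned
```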

Then there's the issue of accuracy. Consider the challenge of identifying a simple credit card number. That number could be stored in many different formats, and it could contain variations of numbers mixed with spaces, hyphens, or other characters. Because the DLP appliance can't determine context, you'll need to programmatically describe exactly what data you're looking for, and account for all of the different formats. Often you can do this graphically using easy-to-construct Boolean logic, but sometimes you'll need a scripting language like Perl to develop advanced data description policies.
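As an illustration of the problem (in Python rather than Perl, purely for readability), the sketch below normalizes separator characters and then applies a Luhn checksum so that random 16-digit strings don't trip the alarm. It's a toy, not a production signature, but it shows why describing "a credit card number" takes more than a single naive pattern match.

```python
# Sketch of the credit card problem: the same number can appear with spaces,
# hyphens, or no separators at all, so a detector has to normalize before
# matching and then validate the digits (here with a Luhn checksum) to cut
# down on false positives. Illustrative only, not a production signature.

import re

# 13-19 digits, optionally broken into groups by spaces or hyphens
CANDIDATE = re.compile(r"\b(?:\d[ -]?){12,18}\d\b")

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum used by payment card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str):
    hits = []
    for match in CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_ok(digits):
            hits.append(digits)
    return hits

if __name__ == "__main__":
    sample = "Order ref 4111-1111-1111-1111, contact ext. 555 1234"
    print(find_card_numbers(sample))   # ['4111111111111111'] -- the test number passes Luhn
```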

Be prepared to test the data identification capabilities you've enabled. The last thing you want is to wade through a boatload of false-positive alerts every morning because of a paranoid signature set. You also want to make sure that critical information isn't flying right past your DLP scanners because of a lax signature set. This is particularly important if you plan to use DLP technology for unstructured information, such as sensitive documents, diagrams, or source code. Also note that your signature database is going to grow and will have to be managed.
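One practical way to do that testing is to keep a small corpus of documents that should and should not trigger, and rerun it against your detection rules every time the signature set changes. The harness below assumes the find_card_numbers() sketch above has been saved as detector.py; the module name and the sample corpus are hypothetical.

```python
# Toy regression harness for tuning detection rules: run documents that should
# and should not trigger, then report false positives and false negatives.
# Assumes the find_card_numbers() sketch above was saved as detector.py;
# both the module name and the corpus are hypothetical.

from detector import find_card_numbers

SHOULD_MATCH = [
    "Card on file: 4111 1111 1111 1111",
    "billing: 5500-0000-0000-0004",
]
SHOULD_NOT_MATCH = [
    "Invoice #1111111111111, PO 2222-3333",   # long IDs that fail the Luhn check
    "Call 555-867-5309 before noon",
]

def run():
    false_negatives = [t for t in SHOULD_MATCH if not find_card_numbers(t)]
    false_positives = [t for t in SHOULD_NOT_MATCH if find_card_numbers(t)]
    print(f"false negatives: {len(false_negatives)}  false positives: {len(false_positives)}")
    for t in false_negatives + false_positives:
        print("  review:", t)

if __name__ == "__main__":
    run()
```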
