
Operational Security // Compliance

3/22/2018 09:35 AM
Paige Bartley
News Analysis-Security Now

GDPR Compliance: Enterprises Have Two Options to Consider

When it comes to preparing for GDPR, enterprises, as well as vendors, are relying on two different approaches. The first focuses on technology, while the second relies on internal processes and workflows.

A sizable proportion of organizations will not be fully compliant with the European Union's General Data Protection Regulation by the time the May 2018 deadline passes, and demand for compliance tools is growing.

Technology vendors, eager to carve out a piece of this burgeoning market, are offering a diverse swath of solutions that tackle various aspects of the broad regulation. However, two primary approaches are emerging in the solution market: those that depend more on technology and those that depend more on existing organizational processes and workflows.

The better approach is a matter of debate, but GDPR's framework suggests that technology in isolation -- without respect to underlying people and processes -- is unlikely to provide sustainable results. The most successful GDPR compliance solutions are likely those that are able to successfully combine aspects of both technology and human process, helping operationalize data control and compliance workflows within the organization. (See GDPR Non-Compliance: Will Your Enterprise Get Busted?)

GDPR's technology-agnostic framework underscores process
As a rule, GDPR is technology-agnostic. Aside from a few references to standard security measures such as encryption and high availability of systems, the regulation makes scant mention of specific technology.

There is good reason for this: Technology evolves much more quickly than regulatory and legal frameworks. If the regulation were to endorse or depend on the viability of specific technologies, it would quickly become obsolete and unable to adequately fulfill its role of protecting the information and rights of data subjects.

Nevertheless, technology will be a critical component to fulfilling GDPR's requirements.

After all, the regulation pertains to the protection of data, and data is stored and processed in technology-based systems. Solutions that aim to fulfill the technical requirements of the regulation need to be based on technology and/or directly interface with existing IT systems.

However, GDPR itself is more concerned with the repeatable governance processes and frameworks that exist within organizations; for all the regulation's technical requirements, such as security of data, right to erasure, right to data portability and data protection by design and by default, there are many more articles of the regulation that focus on the human process. (See GDPR Blackmail Looms as a Double-Dip Cyber Attack Plan.)

Data protection impact assessments (DPIAs), prior consultation and communication of data breaches to data subjects and supervisory authorities are all examples of requirements that necessitate repeatable, documentable processes driven by established human roles and responsibilities.

Technology cannot replace that.

In reality, compliance with GDPR requires two major components: direct technical control of data assets and the existence and documentation of repeatable human processes.

Neither can exist in isolation. While this may seem like a distinction between "hard" and "soft" requirements, software solutions provide technical means for achieving both needs. The solutions' approaches, however, are often divergent.

Two camps emerge
Given this mix of needs, the landscape of vendors offering GDPR-related solutions is largely evolving into two camps: those that take a technology-based approach and those that take a process-based approach.

Both methodologies typically depend on software to provide a centralized interface for task management and human interaction with data, but they tend to differ in their objectives and execution.

While broad generalizations are not entirely useful, as some products use an overlapping approach, the general distinction is as follows:

• A technology-based approach depends on technology-based mechanisms to meet specific technical requirements, such as the encryption of data. Automation of data handling and data manipulation is common. These solutions are likely to assign rigid roles to product users, and typically come with their own preconfigured workflows and templates. Direct technical control of data assets is often the primary objective.

• A process-based approach largely relies on existing roles, workflows and processes within the enterprise, with technology as a facilitator rather than as the primary mechanism. Manual handling of data, such as assignment of policies to data, is often required. These solutions are likely to offer flexible, customizable workflows and are likely to adopt existing roles within the enterprise rather than imposing their own within the product. Documentation and recordkeeping of processes, instead of direct data control, is often the primary objective.

Neither approach is inherently right or wrong; a technology-based solution may excel at automatically applying policies -- such as masking -- to data that has been identified as personal, whereas a process-based solution would be far better suited to Article 35's requirements for repeatedly conducting DPIAs. (See GDPR Territorial Scope: Location, Location, Location?)
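To make the contrast concrete, the following is a minimal sketch of the kind of automated policy enforcement a technology-based solution performs. The record layout, the set of fields treated as personal data and the masking rule are hypothetical illustrations, not features of any particular product:

    # Technology-based pattern: once a field is classified as personal data,
    # a masking policy is applied to it automatically, with no human workflow
    # in the loop. Field names and the masking rule are hypothetical.

    PERSONAL_FIELDS = {"name", "email", "phone"}  # assumed output of a classification step

    def mask_value(value: str) -> str:
        # Keep the first character visible and mask the remainder.
        return value[0] + "*" * (len(value) - 1) if value else value

    def apply_masking_policy(record: dict) -> dict:
        # Return a copy of the record with every classified field masked.
        return {
            field: mask_value(str(value)) if field in PERSONAL_FIELDS else value
            for field, value in record.items()
        }

    customer = {"name": "Alice", "email": "alice@example.com", "country": "DE"}
    print(apply_masking_policy(customer))  # name becomes 'A****'; email is masked the same way

The specifics matter less than the shape of the control: it is exercised directly on the data, which is both the strength and the rigidity of this camp.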

Given that the regulation is so broad and encompasses a mix of technical and process requirements, an organization would benefit from using a mix of solutions that are either technology-based or process-based, depending on which articles of the regulation are being addressed.
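One way to picture that mix: a technology-based component enforces policy on the data directly, as sketched above, while a process-based component simply evidences that a required human process -- such as a DPIA -- was carried out, by whom and when. Here is a minimal sketch of the latter, again with hypothetical roles and field names rather than any vendor's actual schema:

    # Process-based pattern: the software records that a DPIA was performed
    # so the organization can document compliance; it does not manipulate the
    # personal data itself. Roles and fields are hypothetical.

    from dataclasses import dataclass, field, asdict
    from datetime import datetime, timezone
    import json

    @dataclass
    class DPIARecord:
        processing_activity: str   # what is being assessed
        owner: str                 # an existing enterprise role, not one imposed by the product
        risks_identified: list
        mitigations: list
        completed_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def document_dpia(record: DPIARecord, register_path: str = "dpia_register.jsonl") -> None:
        # Append the assessment to an append-only register for audit purposes.
        with open(register_path, "a", encoding="utf-8") as register:
            register.write(json.dumps(asdict(record)) + "\n")

    document_dpia(DPIARecord(
        processing_activity="Marketing analytics on customer email data",
        owner="Data Protection Officer",
        risks_identified=["re-identification of pseudonymized records"],
        mitigations=["aggregation before analysis", "90-day retention limit"],
    ))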

However, technology vendors take note: Process is overrepresented in the GDPR's framework. For any compliance solution to be successful in the enterprise, it needs to piggyback on existing human processes and roles.

Otherwise, it is likely to become siloed and underused. A solution that strives to help achieve compliance with the broadest possible number of articles from the regulation will take advantage of both technology and human processes, utilizing each for its respective strengths.

Technology excels at automation, scale and consistency. But processes have the benefit of adaptability, human adherence and the ability to become rooted in enterprise culture.
