
Endpoint

5/13/2020
02:00 PM
Chris Babel
Commentary

The Problem with Automating Data Privacy Technology

Managing complex and nuanced consumer rights requests presents a unique challenge for enterprises in today's regulated world of GDPR and CCPA. Here's why.

As privacy laws around the globe proliferate and evolve, company leaders must be careful not to approach privacy compliance as an item to check off a list. Instead of employing a reactive strategy built around ad hoc responses to new laws and amendments, organizations should deploy a more proactive approach. To do so, they'll need to develop scalable processes that ingrain privacy into how the company manages its data. A scalable approach to privacy can help conserve resources, including budget, and create a lasting competitive edge.

Privacy technology platforms on the market automate some of the processes behind a scalable privacy program. As we'll learn, the term "automation" carries a few distinct meanings, and only platforms that adhere to a specific definition can provide the proactive data enablement needed for operationalized compliance at scale. When we hear the term "automation" as it relates to technology generally, it refers to tools that enable the automatic processing of repetitive tasks. However, in the privacy compliance world specifically, "automation" often refers to the ability to connect different platforms via APIs to integrate data sources and enterprise systems.
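The second, integration-oriented sense of "automation" can be sketched in a few lines. This is a minimal, hypothetical example: the connector functions stand in for API calls to a CRM and a billing system, and all names are illustrative, not any particular vendor's interface.

```python
# Hypothetical stand-ins for the APIs of a CRM and a billing system;
# in a real platform these would be HTTP calls to each system's connector.
def fetch_from_crm(consumer_id):
    # Return the fragment of consumer data the CRM holds.
    return {"email": "alice@example.com"}

def fetch_from_billing(consumer_id):
    # Return the fragment of consumer data the billing system holds.
    return {"last_invoice": "INV-2041"}

CONNECTORS = {"crm": fetch_from_crm, "billing": fetch_from_billing}

def collect_consumer_record(consumer_id):
    """Integration-style automation: query every connected system for a
    consumer's data and merge the results into one record."""
    return {name: fetch(consumer_id) for name, fetch in CONNECTORS.items()}

print(collect_consumer_record("c-123"))
```

This stitches data sources together, but note what it does not do: it carries no knowledge of which regulation governs the request or how the merged record may be used.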

The abilities to reduce the hours typically needed for rote tasks and to connect systems to one another are important. Unfortunately, for building data privacy compliance at scale and unlocking the value of data, these types of automation aren't enough. [Editor's note: The author is the chief executive officer of a company that provides privacy management software and services, as do several other vendors.] Consider a consumer request. The California Consumer Privacy Act (CCPA) stipulates that consumers "have the right to request that a business that collects a consumer's personal information disclose to that consumer the categories and specific pieces of personal information the business has collected." Consumers may also submit a number of other requests pertaining to how companies collect and handle their data. Other privacy laws contain similar stipulations.

Automation technology can look up a consumer's name in a database, provide the controller (the company) with the requisite information, and help manage a request to delete data. But much of the "automation" technology for privacy is not intelligent and does not help organizations solve many of the challenges data privacy presents. Many technical solutions can automate, and thereby eliminate, repetitive tasks; consumer rights requests, however, are varied, nuanced, and complex. They require intelligent automation.
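The rote lookup-and-delete style of automation described above amounts to very little code. This sketch uses an in-memory SQLite table with hypothetical schema and field names purely for illustration:

```python
import sqlite3

# Hypothetical schema: one table of consumer records.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE consumers (name TEXT, email TEXT, purchase_history TEXT)")
conn.execute("INSERT INTO consumers VALUES ('Alice', 'alice@example.com', 'order-1042')")

def handle_access_request(name):
    """Rote automation: look up the consumer and return what we hold on them."""
    return conn.execute(
        "SELECT name, email, purchase_history FROM consumers WHERE name = ?", (name,)
    ).fetchall()

def handle_deletion_request(name):
    """Rote automation: delete the consumer's records on request."""
    conn.execute("DELETE FROM consumers WHERE name = ?", (name,))
    conn.commit()

print(handle_access_request("Alice"))  # the data held on Alice
handle_deletion_request("Alice")
print(handle_access_request("Alice"))  # [] -- records removed
```

Nothing here knows which regulation applies, what the response deadline is, or whether any data is exempt from deletion, which is precisely the gap the article points to.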

Intelligent automation goes beyond rote database lookups of consumer information. Technology for compliance at scale must be able to process an access request; analyze the request against the requirements of dozens of regulations; and determine what is different in each of those regulations, the risk present in the organization's data practices, and how each regulation must be handled as a result. This kind of intelligent technology provides a holistic view of how data moves across an organization.
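One way to picture the "analyze the request against the requirements of each regulation" step is a rules engine keyed by jurisdiction. This is a deliberately tiny sketch with hypothetical rule fields; real platforms encode dozens of regulations with far richer logic (exemptions, risk scoring, verification workflows). The deadlines shown reflect the commonly cited GDPR one-month and CCPA 45-day response windows.

```python
from dataclasses import dataclass

# Hypothetical, simplified rule set -- illustrative fields only.
@dataclass
class Regulation:
    name: str
    response_deadline_days: int
    requires_identity_verification: bool

RULES = {
    "CCPA": Regulation("CCPA", 45, True),
    "GDPR": Regulation("GDPR", 30, True),
}

def plan_access_request(consumer_region):
    """Map a consumer's region to the governing regulation, then return
    how the request must be handled under that regulation's rules."""
    applicable = "GDPR" if consumer_region == "EU" else "CCPA"
    rule = RULES[applicable]
    return {
        "regulation": rule.name,
        "deadline_days": rule.response_deadline_days,
        "verify_identity": rule.requires_identity_verification,
    }

print(plan_access_request("EU"))  # handled under the GDPR rules
```

The point of the sketch is the shape, not the content: the regulation-specific differences live in data, so adding a new law extends the rule table rather than the code.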

The consumer rights request example is not exhaustive, but it illustrates the intelligent and flexible nature of technologies that can help operationalize privacy compliance at scale. Data privacy automation needs technology that can encode existing laws and regulations as rules and logic in order to facilitate privacy program creation and management. This technology must be able to adapt to the speed at which laws and enforcement actions change, producing a well-documented audit trail that tracks not just compliance but the flow of data throughout the entire organization.

Sophisticated technologies like these can deliver data enablement. But to leverage the analyses intelligent automation can provide, organizations must also be consistent in how they act on the technology's insights to scale compliance. Intelligent automation can also establish a framework for driving consistency in business action across multiple regions, laws, and teams. For example, there are more than 130 different privacy laws worldwide. Intelligent automation can quickly search those laws for commonalities and provide actionable advice to business leaders about the risk obligations they face under each law. This approach can underpin a scalable program that allows leaders to respond easily to new laws or changes to current laws.
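The "search the laws for commonalities" idea reduces to set operations over obligation catalogs. The obligation tags below are a hypothetical, drastically simplified mapping; a real platform would maintain a curated knowledge base covering 130-plus laws.

```python
# Hypothetical obligation tags per law -- illustrative only.
LAW_OBLIGATIONS = {
    "GDPR": {"access", "deletion", "portability", "breach_notification"},
    "CCPA": {"access", "deletion", "opt_out_of_sale"},
    "LGPD": {"access", "deletion", "portability"},
}

def common_obligations(laws):
    """Find obligations shared by every law in scope, so one process can
    satisfy them all; the remainder needs law-specific handling."""
    sets = [LAW_OBLIGATIONS[law] for law in laws]
    shared = set.intersection(*sets)
    specific = {law: LAW_OBLIGATIONS[law] - shared for law in laws}
    return shared, specific

shared, specific = common_obligations(["GDPR", "CCPA", "LGPD"])
print(sorted(shared))  # obligations one shared process can cover
```

The shared set becomes the backbone of a single, consistent process, while the per-law remainders are exactly the places where leaders need law-specific advice.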

In privacy technology, the word "automated" is used a lot as a descriptor. Yet few business leaders understand what that word means today and what it needs to mean if they wish to scale their compliance programs. Today's automation technologies can help automate rote activities or connect disparate systems via APIs to reduce reliance on manual tasks. While this form of automation can support a "check-the-box" approach to privacy compliance, it will not allow organizations to understand how and why data moves across a company. Only through intelligent automation will organizations learn how to turn data into a strategic asset, making privacy decisions quickly and harnessing the power of data to drive not just compliance but greater business success.

Related Content:

Check out The Edge, Dark Reading's new section for features, threat data, and in-depth perspectives. Today's featured story: "How InfoSec Pros Can Help Healthcare During the Coronavirus Pandemic."

 

As CEO of TrustArc, formerly known as TRUSTe, Chris has led the company through significant growth and transformation into a leading global privacy compliance and risk management company. Before joining TrustArc, Chris spent over a decade building online trust, most recently ...
 
