Dark Reading is part of the Informa Tech Division of Informa PLC


Endpoint

5/22/2019 06:00 PM

Data Asset Management: What Do You Really Need?

At Interop, a cybersecurity and privacy leader explains her approach to data management and governance at a massive, decentralized company.

INTEROP 2019 – LAS VEGAS – Nobody wants to admit they don't know what kind of data they're collecting, where it goes, or where their backups are located. In a room packed with IT professionals, one could guess at least a few are grappling with those exact questions.

"It's really critical to us to know what we have," said Stacey Halota, vice president of information security and privacy at Graham Holdings Co., during a keynote chat with Dark Reading senior editor Sara Peters at Interop 2019, held this week in Las Vegas. Each year, Graham Holdings conducts a "sensitive data project" to inventory data from every organization in the company. When the iterative process is complete, she explained, all of the metrics are sent to the board.

Much of the chat focused on data asset management and governance, a hot topic among the IT pro audience. Each data privacy regulation forces IT and security teams to consider data in a different way, Halota said. Consider GDPR, which she said was "a little bit easier" for the global company because it was already regulated by the European Union's Data Protection Directive.

"GDPR is a supercharged version of the directive," Halota noted. But the California Consumer Privacy Act (CCPA), which defines personally identifiable information (PII) differently, required a broader approach. Graham Holdings had to consider a wider range of device information to ensure its definition of PII was broad enough to cover all of the data it stores. Its data protection impact assessments (DPIAs) and existing risk assessments had to be repackaged.
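The difference in scope can be illustrated with a toy example. The field lists below are simplified assumptions for illustration, not legal definitions and not Graham Holdings' actual classifications; the point is that CCPA's notion of personal information sweeps in device identifiers and online activity that a GDPR-only inventory might not flag:

```python
# Illustrative only: simplified, hypothetical field lists, not legal definitions of PII.
GDPR_PERSONAL_DATA = {"name", "email", "ip_address", "location"}

CCPA_PERSONAL_INFO = GDPR_PERSONAL_DATA | {
    "device_id",         # CCPA explicitly covers unique device identifiers
    "browsing_history",  # ...and internet or electronic activity
    "household_data",    # ...and information about a consumer's household
}

def fields_needing_review(inventory: set[str]) -> set[str]:
    """Return inventoried fields that CCPA covers but a GDPR-only
    classification would have missed."""
    return inventory & (CCPA_PERSONAL_INFO - GDPR_PERSONAL_DATA)
```

Running a check like this over an existing data inventory is one way a team could spot stores that were compliant under one regime but fall in scope under the next.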

New regulations have prompted technological change. The company relies on Archer for much of its data governance and risk compliance work. A major step in preparing for GDPR, CCPA, and the many bills still to come has been repurposing Archer to redefine risk assessments and expanding its document repository so it captures the data each law requires.

Halota isn't only concerned with keeping Graham Holdings' data compliant. She's also focused on reducing the collection of sensitive data and deleting anything the company doesn't need.

"To us, the keystone of our business is information," she said. "It's understanding what you have and what you collect ... what you collect is precious, it's important, and it's important to only collect what we need." If a company doesn't need information, it should delete it. As part of the sensitive data project, Graham Holdings' organizations are asked not only about the information they collected, but also about what they deleted. If they didn't delete anything, they're asked why.
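One way to operationalize that "did you delete anything, and if not, why?" question is a simple audit pass over each organization's report. This is a hypothetical sketch of such a check, not Graham Holdings' actual process; the `OrgDataReport` structure and its field names are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class OrgDataReport:
    """One organization's annual sensitive-data inventory (hypothetical schema)."""
    org: str
    collected: list[str]                       # data categories collected this year
    deleted: list[str] = field(default_factory=list)  # categories purged this year

def flag_no_deletion(reports: list[OrgDataReport]) -> list[str]:
    """Return orgs that collected sensitive data but deleted nothing.

    These are the orgs that should be asked to justify their retention."""
    return [r.org for r in reports if r.collected and not r.deleted]
```

A roll-up like this could feed the board-level metrics the article describes, turning "why didn't you delete anything?" into a standing line item rather than an ad hoc conversation.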

"We seek to minimize the really sensitive data that we hold," Halota said. For instance, Graham Holdings keeps data in production environments but scrambles it in non-production. It doesn't keep credit card numbers if it can help it, she added; if it has numbers, they're tokenized.
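Both practices, tokenizing card numbers and scrambling sensitive fields in non-production copies, can be sketched roughly as below. This is an illustrative approach, not Graham Holdings' implementation: real payment tokenization is typically delegated to a PCI-compliant vault, and the key here would come from a secrets manager rather than being generated in-process.

```python
import hashlib
import hmac
import secrets

# Assumption for the sketch: in practice this key lives in a vault/HSM,
# not in application memory generated at startup.
SECRET_KEY = secrets.token_bytes(32)

def tokenize_pan(pan: str) -> str:
    """Replace a card number with an irreversible keyed token, keeping the
    last four digits so support staff can still identify the card."""
    digest = hmac.new(SECRET_KEY, pan.encode(), hashlib.sha256).hexdigest()[:16]
    return f"tok_{digest}_{pan[-4:]}"

def scramble_for_nonprod(email: str) -> str:
    """Deterministically pseudonymize an email for non-production copies,
    so joins across test tables still line up but no real address survives."""
    h = hashlib.sha256(email.strip().lower().encode()).hexdigest()[:10]
    return f"user_{h}@example.invalid"
```

The deterministic scrambling preserves referential integrity across test datasets, while the HMAC token lets the same card map to the same token without the raw number ever being stored.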

Keeping sensitive data to a minimum is a businesswide effort. Halota has to speak with everyone – CEO, CFO, marketing, human resources, legal – to understand their needs and when it may be time to eliminate data. "It's always a hard conversation," she said, and it can be complicated. "I've had missteps myself ... I think things should be deleted and they don't."

The most important part is building a relationship with different departments. "It's absolutely critical" to talk with every part of the business so she can argue when data isn't valuable to the company. "It's not just saying, 'We're going to delete all this stuff,'" she said.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance & Technology, where she covered financial ... View Full Bio
 
