Who Do You Trust? Parsing the Issues of Privacy, Transparency & Control

Commentary by Richard Ford | 10/5/2018

Technology such as Apple's device trust score, which decides that "you" are "not you," is a good thing, but only if it works well.

Trust. It's a simple word, but I think it's at the heart of the recent social media brouhaha surrounding Apple's revelations regarding "iTunes device trust scores." Much of the discussion has made this whole situation sound rather dystopian, but in part I think the story taps into some very fundamental — and legitimate — fears that the modern consumer has about how the minutiae of their lives have become a product to be bought, sold, and traded.

But when I dug deeper into the story, the thing that caught my attention was quite the opposite: At face value, at least, the technology is designed not to determine whether the owner of the device is trustworthy but to protect that person from someone who has stolen or is otherwise abusing the device. Put like that, it sounds significantly better.

To me, those issues around trust are why this story resonated so strongly … well, that and the fact that the way a feature is described can, by itself, have an incredibly powerful impact, both positive and negative, on our psyche. On the one hand, we don't want our devices to decide if they trust us or not — that feels like only a few mouse clicks away from HAL calmly intoning "I'm sorry, Dave, I'm afraid I can't do that." No pod bay doors for you!

On the other hand, I think that the concept of trust is woefully underused as a mechanism for providing protection for end users. In part, that's based on my own experiences working with companies that know everything there is to know about you. We have a right to be skeptical, and that skepticism comes back to the simple word we began with: trust.

Let's look at a hypothetical. Nobody wants someone to do bad things to their accounts from their phone. Thus, technology that decides "you" are "not you" is a good thing, if it works well. Even better, if you retain absolute control over the data used to make that decision, how it is used, and how it is protected, the overall privacy exposure is minimal. In this case, Apple sounds like it's doing the right thing with respect to privacy. Quoting from a VentureBeat article covering the feature, Apple says that the "only data it receives is the numeric score, which is computed on-device using the company's standard privacy abstracting techniques, and retained only for a limited period, without any way to work backward from the score to user behavior." So far, so good.
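To make that privacy property concrete, here is a minimal sketch of what an on-device scoring pipeline could look like. Apple has not published its algorithm, so the signals, weights, and noise parameters below are invented purely for illustration; the only point being demonstrated is that raw behavior never leaves the device, just one abstracted number.

```python
import random

def device_trust_score(num_calls: int, num_emails: int) -> int:
    """Hypothetical on-device trust score.

    Apple has not published its algorithm; every signal and weight
    here is a stand-in. The property illustrated is that raw behavior
    stays on the device and only one abstracted number is transmitted.
    """
    # Coarse, capped usage signals (hypothetical choices).
    raw = 0.6 * min(num_calls, 100) + 0.4 * min(num_emails, 100)

    # Random noise stands in for the unspecified "privacy abstracting
    # techniques," making it harder to work backward from the score.
    noisy = raw + random.uniform(-5.0, 5.0)

    # Clamp and round so the transmitted value carries minimal detail.
    return max(0, min(100, round(noisy)))

# Only this opaque integer would leave the device; the inputs never do.
score = device_trust_score(num_calls=42, num_emails=17)
print(score)
```

Whether Apple's real pipeline resembles this is unknown; the claim worth testing is simply that the transmitted score cannot be inverted back into user behavior.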

The breakdown here is the lack of trust most users have in services that offer them "better" in exchange for being able to access their data. Even if the provider of the service makes claims about protection of privacy or the single use of data collection, there's a healthy degree of suspicion among consumers. Trusting that a company is both well-intentioned in accessing one's data and is capable of actually implementing appropriate protections around it is a bit of a stretch in the current climate. It's interesting that consumers continue to use services like that — but I think it's safe to say it makes them uneasy. And it's a matter of trust.

Repairing damaged consumer trust is going to take time. We've seen some good progress on the legal front with the adoption of laws such as the EU's General Data Protection Regulation, but worldwide, the legislative framework is a patchwork at best. Furthermore, laws always lag far behind technology and, of course, there's always someone who's willing to run the risk of coloring outside these legal lines in order to make a quick buck (or ruble). In the interim, the solution is simple: Let's opt instead for control.

Control may seem oddly orthogonal to trust, but in fact it's related. As I like to think of it, trust is "a promise as yet unfulfilled." It's a bet, if you like, on the actions of another. Control, on the other hand, is a way of ensuring that outcome or action. It's a substitute (and a poor one at that) for trust, but it can bridge the gap until trust is established. With control, we can be reasonably sure of what's going to happen, in advance. By all means, build these systems with privacy baked in (privacy by design is a wonderful thing!) but then prove it. Open the system up to third-party inspection and audit. Transparency is a wonderful way of demonstrating what's really happening. It's hard, and it's imperfect — but it's a start.
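One way to make that transparency verifiable rather than merely promised is an append-only, hash-chained audit log that outside parties can check. The toy sketch below illustrates the idea; it is not any vendor's actual mechanism, and the logged events are hypothetical.

```python
import hashlib
import json

def append_entry(log: list, event: str) -> None:
    """Append an event to a hash-chained, append-only audit log.

    Each entry commits to the hash of its predecessor, so an outside
    auditor who holds only the latest hash can detect any retroactive
    tampering by recomputing the chain from the plain events.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({
        "event": event,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })

audit_log: list = []
append_entry(audit_log, "trust score computed on-device")
append_entry(audit_log, "numeric score transmitted; raw data discarded")
# Altering the first entry now invalidates every later hash, so the
# provider cannot quietly rewrite its own history.
```

Real transparency logs (the kind used for TLS certificates, for example) add signatures and external witnesses, but even this toy version shows how good behavior can be demonstrated rather than simply claimed.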

If the best companies start actually doing this, everyone wins. Trust and reputation are powerful forces for good, and we need to harness them if we're to make progress. There's nothing really wrong with a device assessing a user's trustworthiness, but without the user trusting the system in turn, it's predestined to fail. Until we have bidirectional trust, transparency is the best way forward — there's no shortcut.


Dr. Richard Ford is the chief scientist for Forcepoint, overseeing technical direction and innovation throughout the business. He brings over 25 years' experience in computer security, with knowledge in both offensive and defensive technology solutions.