
Operational Security // AI

3/30/2018 08:05 AM
Joe Stanganelli

GDPR, AI & a New Age of Consent for Enterprises

Despite compliance worries under GDPR, obtaining necessary consent for AI and machine learning processing of personal data is far from impossible.

The European Union's General Data Protection Regulation rules -- GDPR for short -- are about to change the relationship among people, their personal data, and the businesses that handle that information.

However, there's another factor enterprises and their customers have to consider as well -- artificial intelligence.

Article 22 of the GDPR -- also known as Regulation 2016/679 -- dictates that people protected by the new rules generally cannot be subjected to purely automated decision-making, including profiling, without their consent when that decision-making "produces legal effects concerning him or her or similarly significantly affects him or her."

Consequently, there are concerns that GDPR will throw an enormous monkey wrench into consumer AI use cases when it comes into effect on May 25. From a practical perspective, decision-making by machine-learning algorithms and other AI systems is not as straightforward as that of traditional systems -- making informed, explicit consent a sticky issue.
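To make the compliance mechanics concrete, the restriction in Article 22 can be thought of as a gate in front of any solely automated decision: no explicit consent on record for that purpose (or another Article 22 basis, such as contractual necessity), no automated decision. Below is a minimal sketch of the consent-only path; the function names, the CONSENT_RECORDS store, and the purpose string are all hypothetical, not drawn from any particular framework.

    # A minimal sketch, not legal advice: gate a solely automated decision on a
    # recorded, explicit consent for that specific purpose. All names here
    # (CONSENT_RECORDS, the purpose string, run_model) are hypothetical.
    CONSENT_RECORDS = {
        # user_id -> purposes the user has explicitly consented to
        "user-123": {"automated-credit-scoring"},
    }

    def automated_decision_allowed(user_id: str, purpose: str) -> bool:
        """True only if the user has explicit consent on record for this purpose."""
        return purpose in CONSENT_RECORDS.get(user_id, set())

    def run_model(application: dict) -> float:
        """Stand-in for the actual scoring model."""
        return 0.0

    def score_application(user_id: str, application: dict) -> dict:
        if not automated_decision_allowed(user_id, "automated-credit-scoring"):
            # No consent on record: route to human review rather than decide automatically.
            return {"routed_to": "human-review"}
        return {"routed_to": "model", "score": run_model(application)}

    print(score_application("user-123", {}))  # consent on record: scored by the model
    print(score_application("user-456", {}))  # no consent: routed to human review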

"[A]s AI systems often rely on machine learning, a disclosure of algorithms does not provide a full and thorough picture of how a decision was reached, as the learning component has not been factored in," argue Frankfurt attorneys Sven Jacobs and Christoph Ritzer in a blog post.

This argument is a bit of sky-is-falling doomsaying. Of course everyday users are not going to understand complex machine-learning algorithms or the intricacies of virtualized networking.

Even the EU member-state data-protection authorities (DPAs) -- the agencies responsible for enforcing GDPR -- have accounted for that. Having worked together to jointly adopt "Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679" in 2017 ("Guidelines"), DPAs expressly advise organizations that the obligation to provide "meaningful information about the 'logic involved'" means that data controllers "should find simple ways" to explain various rationale and criteria at work "without necessarily always attempting a complex explanation of the algorithms used or disclosure of the full algorithm."

The Guidelines also provide an example of how these disclosures should work:

  • Details of the main characteristics considered in reaching a particular automation-reliant decision, and their relevance;
  • The respective sources of all such data (e.g., application forms, account details, public records, third parties, user behavior, etc.);
  • "[I]nformation to advise the data subject that the … methods used are regularly tested to ensure they remain fair, effective and unbiased"; and
  • Contact details and related information on how a data subject can request a review of the pertinent automated decision(s).

Notwithstanding the DPAs themselves using these factors to describe a GDPR-compliant example, this still seems like a long -- and not necessarily exhaustive -- litany to present to the user -- potentially making for a disclosure so lengthy as to constitute inaccessible legalese in violation of Article 7, Section 2, of GDPR ("[T]he request for consent shall be presented … in an intelligible and easily accessible form, using clear and plain language") and Recital 32 of GDPR ("If the data subject's consent is to be given following a request by electronic means, the request must be clear, concise and not unnecessarily disruptive to the use of the service for which it is provided").
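One way to keep such a disclosure both complete and short is to treat the four elements above as structured data and render them as a few plain-language sentences. The sketch below is purely illustrative; the field names and example wording are assumptions, not language taken from the Guidelines.

    # A minimal sketch, not a compliance template: hold the four disclosure
    # elements from the Guidelines in one structure and render it in plain
    # language. Field names and example values are illustrative assumptions.
    decision_disclosure = {
        "main_characteristics": [
            "payment history (strongest factor)",
            "current income",
            "length of employment",
        ],
        "data_sources": ["application form", "account records", "credit bureau (third party)"],
        "fairness_statement": (
            "The scoring model is re-tested on a regular schedule to confirm it "
            "remains fair, effective and unbiased."
        ),
        "review_contact": "privacy@example.com",
    }

    def render_notice(disclosure: dict) -> str:
        """Turn the structured disclosure into a short plain-language notice."""
        return "\n".join([
            "This decision was made automatically, based mainly on: "
            + ", ".join(disclosure["main_characteristics"]) + ".",
            "The data came from: " + ", ".join(disclosure["data_sources"]) + ".",
            disclosure["fairness_statement"],
            "To ask a person to review the decision, contact " + disclosure["review_contact"] + ".",
        ])

    print(render_notice(decision_disclosure))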

Making the law clear
Still, legal onuses for clarity and concision traditionally tend to focus more on language simplicity than on de facto imposition of an arbitrary word count; besides, simpler language naturally tends to lead to shorter clauses.

AI-using data processors can take further comfort in the fact that additional guidance on consent-worthy language can be found in Council Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts ("Directive 93/13/EEC"). Recital 42 of GDPR expressly states that Directive 93/13/EEC -- itself requiring consumer contracts to be written in "plain, intelligible language" -- governs the mandate of providing "an intelligible and easily accessible form, using clear and plain language [without] unfair terms" in boilerplate consent declarations.

In other words, complying with the complicated consent requirements under GDPR for AI-based decision-making should theoretically present no greater burden than complying with the same requirements under 25-year-old EU contract law.

Moreover, this being the age of digital media, the Guidelines go on to recommend a variety of innovative techniques to make AI processing of personal data at once more GDPR-compliant and more user-friendly, such as:

  • Layered, step-by-step notifications -- including short-form notifications with expandable links to the "full version," combined with "a just-in-time notification at the point where data is collected" (see the sketch after this list);
  • Graphics, charts, and other "interactive" multimedia methods to better explain algorithmic function; and
  • Standardized icons to describe what information is being used when, shared with whom, and/or to decide what.
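As a rough sketch of the layered approach (the structure and wording below are assumptions, not text from the Guidelines), a just-in-time notice might pair a one-line summary shown at the moment of collection with a fuller layer the user can expand:

    # A minimal sketch of a layered, just-in-time notice: a short first layer
    # shown where the data is collected, plus a fuller second layer shown on
    # request. Structure and wording are illustrative assumptions.
    LAYERED_NOTICE = {
        "just_in_time": (
            "We use your answers here to score your application automatically. "
            "Choose 'Learn more' to see how, or to ask for a human review."
        ),
        "full_version": (
            "Your application is scored by software using your payment history, "
            "income and length of employment. The scoring is tested regularly for "
            "fairness, and you can contact privacy@example.com to have a person "
            "review any automated decision."
        ),
    }

    def show_notice(expanded: bool = False) -> str:
        """Return the short layer by default and the full layer on request."""
        return LAYERED_NOTICE["full_version" if expanded else "just_in_time"]

    print(show_notice())       # shown inline at the point of collection
    print(show_notice(True))   # shown after the user chooses "Learn more"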

All of this adds up to one key takeaway for enterprises: As long as you get your data lineage in order -- which you should be doing anyway -- and make a decent effort to identify decision fundamentals to data subjects from whom you're seeking consent, AI and GDPR can coexist.
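On the data-lineage point, even a lightweight per-field record of where data came from lets the "sources" portion of a disclosure be generated rather than hand-written. A minimal sketch, with hypothetical field names and sources:

    # A minimal sketch of per-field lineage metadata; field names, sources and
    # dates are hypothetical. With this in place, the "sources of the data"
    # element of a disclosure can be produced automatically for each decision.
    FIELD_LINEAGE = {
        "payment_history": {"source": "credit bureau (third party)", "collected": "2018-03-01"},
        "income":          {"source": "application form",            "collected": "2018-03-15"},
        "employment":      {"source": "application form",            "collected": "2018-03-15"},
    }

    def sources_used(fields: list) -> list:
        """Distinct sources behind the fields a particular decision relied on."""
        return sorted({FIELD_LINEAGE[f]["source"] for f in fields if f in FIELD_LINEAGE})

    print(sources_used(["payment_history", "income"]))
    # ['application form', 'credit bureau (third party)']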


—Joe Stanganelli, principal of Beacon Hill Law, is a Boston-based attorney, corporate-communications and data-privacy consultant, writer, and speaker. Follow him on Twitter at @JoeStanganelli.

(Disclaimer: This article is provided for informational, educational and/or entertainment purposes only. Neither this nor other articles here constitute legal advice or the creation, implication or confirmation of an attorney-client relationship. For actual legal advice, personally consult with an attorney licensed to practice in your jurisdiction.)
