Dark Reading is part of the Informa Tech Division of Informa PLC


Vulnerabilities / Threats

10/23/2020
10:00 AM
Mike Kiser
Commentary

A Pause to Address 'Ethical Debt' of Facial Recognition

Ethical use will require some combination of consistent reporting, regulation, corporate responsibility, and adversarial technology.

Earlier this summer, the US Technology Policy Committee of the Association for Computing Machinery published a letter calling for the suspension of "current and future private and governmental use of [facial recognition] technologies in all circumstances known or reasonably foreseeable to be prejudicial to established human and legal rights." 

The ACM argues that facial recognition is not mature enough to be used well, that its promise has driven presumptive adoption of the technology, and that its use has compromised privacy and other human rights. The group also believes its use should be paused until legal standards for accuracy, transparency, governance, risk management, and accountability can be established.

This letter follows actions by large enterprises that have restricted or halted access to facial recognition. In June, IBM announced that it would stop selling "general purpose" facial recognition software, and Amazon and Microsoft soon announced bans on selling facial recognition technology to law enforcement until legislation is passed to govern it. Recent headlines have demonstrated how facial recognition systems perpetuate bias in law enforcement, hiring, and school surveillance. The industry is right to pause development of this technology while it ponders potential side effects and develops an ethical approach to facial recognition.

Lather, Rinse, Repeat: With a Twist
Technology and ethics are often opposing forces. This call for careful deliberation is similar to previous ethical discussions of machine learning models. The letter cites ACM's earlier statement on algorithmic transparency and accountability as a foundation for this latest round of ethical exploration. Concepts such as transparency and accountability are common in ethical frameworks, but they haven't historically led to a call for a pause in access to technology.

Many technologies are difficult to understand, or their impacts are hard to gauge. With facial recognition, the opposite is true. News coverage in the past few years has led the public to understand how facial recognition works and to see how these systems perpetuate cultural bias and discrimination (see Joy Buolamwini's TED Talk and the Algorithmic Justice League for more detail). People are quick to realize the dangers of ubiquitous surveillance, even if they're not the targets of active discrimination (thanks, George Orwell). This understanding of the technology and its risks means that facial recognition is having a unique moment: a caesura in the rush to innovate, a pause for moral introspection.

Admirable, But Questions Remain
This pause is needed. All too often, ethics lags technology. With all apologies to Jeff Goldblum, there's no need to be hunted by intelligent dinosaurs to realize that we often do things because we can, rather than asking whether we should. The ACM's call for restraint is appropriate, although a few issues remain.

What about the facial data that already exists from currently deployed systems? This question is not unique to facial recognition; it is well known from GDPR compliance and other use cases.

The stoppage is intended for private and public entities, but personal cameras, and with them an opening for facial recognition, are rapidly becoming ubiquitous. Log in to your neighborhood watch program for a close-to-home example. (What street doesn't have a doorbell camera?) Public life is being monitored, and passive data on our habits and lives is continually collected; anywhere there is a camera, facial recognition technology is in play.

The call by the ACM could also be stronger. It urges the immediate suspension of facial recognition technology anywhere its use is "known or reasonably foreseeable to be prejudicial to established human and legal rights." What is considered reasonable here? Is good intent enough to absolve misuse of these systems from blame, for instance? The potential harm of these systems, and of the repurposing of their data, is often not readily apparent; by the time the bias is observed, the damage has been done. Given the risks and uncertainty involved, it would be better to remove the call's dependency on expected harm: the use of facial recognition should be suspended until its ethical impact can be documented and governed properly.

Government Response
Governments have taken notice of public concern, of course, and have responded with proposed legislation. Several US cities, including Boston, Portland, and San Francisco, have banned the use of the technology. (See: US map of use and bans of facial recognition.)

There is also action on the national level. Currently proposed legislation in the US seeks to govern or declare a moratorium on facial recognition technology. In Europe, a five-year hiatus on the use of facial recognition in public spaces was proposed last year but was subsequently dropped this past January. These efforts are welcome, but if ethics lags technology, legislation is slower still.

Adversarial Technology
Another approach may be useful as well. Recently, researchers have developed "adversarial technology," using innovation to equip people to defeat location tracking, artificial intelligence, and other components of surveillance systems. These efforts have run the gamut from using fashion to defeat license plate camera systems to the full-on fabrication of fake identities and personas to throw off location and online tracking.

This adversarial approach has now been developed for facial recognition as well, the most notable example being Fawkes, an open source tool released by researchers at the University of Chicago. Rather than making physical changes to a person's face, it masks photographs with slight alterations. Though these changes are barely perceptible to the human eye, they trick facial recognition systems into misidentifying the person, cloaking the individual's true identity. Over time, an increasing set of altered photos is incorporated into the collections of images that facial recognition systems use to catalog and identify people, polluting their knowledge bases and protecting the true identity of the individual.
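The cloaking idea can be illustrated in a few lines. Fawkes itself computes its perturbations by optimizing against real face-embedding models; the toy NumPy sketch below is not Fawkes' algorithm, only an assumed illustration of the principle: a perturbation bounded tightly enough to be invisible in pixel space can still shift a feature extractor's output (here, a stand-in random linear projection).

```python
import numpy as np

rng = np.random.default_rng(0)

def cloak(image, epsilon=0.01):
    """Return a copy of `image` with a small, bounded perturbation.

    Each pixel moves by at most `epsilon`, so the change is imperceptible;
    Fawkes chooses its perturbation adversarially, while this sketch just
    uses random noise to show the bound.
    """
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)

def embed(image, weights):
    """Stand-in for a face-embedding network: a fixed linear projection."""
    return weights @ image.ravel()

image = rng.random((8, 8))           # a pretend 8x8 grayscale "face"
weights = rng.normal(size=(4, 64))   # toy 4-dimensional embedding

cloaked = cloak(image)
pixel_change = np.abs(cloaked - image).max()
embedding_shift = np.linalg.norm(embed(cloaked, weights) - embed(image, weights))

print(f"max pixel change: {pixel_change:.4f}")      # never exceeds epsilon
print(f"embedding shift:  {embedding_shift:.4f}")   # yet the features move
```

The asymmetry between those two numbers is the whole trick: a recognition system trained on cloaked photos learns features that no longer match the person's real face.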

A Pause for Reflection
The ACM is right to call for a suspension in the use of facial recognition to address bias and abuse, but our path toward ethical use of this kind of technology is unlikely to be a straight, clear line. A combination of approaches is necessary to make responsible progress: consistent reporting on surveillance technology, governmental regulation, a sense of corporate responsibility, and adversarial technology all have a role to play. Each takes time to mature, and the ACM is correct to call for a break that gives them time to develop.

It's time to address our ethical debt.

Mike Kiser is a security professional with 20 years of experience. He has designed, directed, and advised on large-scale security deployments for a global clientele. He recently presented at RSA Conference, Black Hat, and DEF CON. Mike co-hosts the podcast Mistaken Identity.
 
