Dark Reading is part of the Informa Tech Division of Informa PLC


Vulnerabilities / Threats

Mike Kiser

A Pause to Address 'Ethical Debt' of Facial Recognition

Ethical use will require some combination of consistent reporting, regulation, corporate responsibility, and adversarial technology.

Earlier this summer, the US Technology Policy Committee of the Association for Computing Machinery published a letter calling for the suspension of "current and future private and governmental use of [facial recognition] technologies in all circumstances known or reasonably foreseeable to be prejudicial to established human and legal rights." 

The ACM argues that facial recognition is not mature enough to be used well, that its potential has driven presumptive adoption of the technology, and that its use has compromised privacy and other human rights. It also believes that use should be paused until legal standards for accuracy, transparency, governance, risk management, and accountability can be established.


This letter follows actions by large enterprises that have restricted or halted access to facial recognition. In June, IBM announced that it would stop selling "general purpose" facial recognition software, and Amazon and Microsoft soon announced moratoriums on selling facial recognition technology to law enforcement until legislation is passed to govern it. Recent headlines have demonstrated how facial recognition systems perpetuate bias in law enforcement, hiring, and school surveillance. The industry is right to pause the development of this technology while it ponders potential side effects and develops an ethical approach to facial recognition.

Lather, Rinse, Repeat: With a Twist
Technology and ethics are often opposing forces. This call for careful deliberation is similar to previous ethical discussions of machine learning models. The letter cites ACM's earlier statement on algorithmic transparency and accountability as a foundation for this latest round of ethical exploration. Concepts such as transparency and accountability are common in ethical frameworks, but they haven't historically led to a call for a pause in access to technology.

Many technologies are difficult to understand, or their impacts are hard to gauge. With facial recognition, the opposite is true. News coverage in the past few years has led the public to understand how facial recognition works and to see how these systems perpetuate cultural bias and discrimination (see Joy Buolamwini's TED Talk and the Algorithmic Justice League for more detail). People are quick to realize the dangers of ubiquitous surveillance, even if they're not the targets of active discrimination (Thanks, George Orwell!). This understanding of the technology and its risks means that facial recognition is having a rare moment: a caesura in the rush to innovate, a pause for moral introspection.

Admirable, But Questions Remain
This pause is needed. All too often, ethics lags technology. With apologies to Jeff Goldblum, there's no need to be hunted by intelligent dinosaurs to realize that we often do things because we can, rather than asking whether we should. The ACM's call for restraint is appropriate, although a few issues remain.

What about the facial data that already exists from currently deployed systems? This problem is not unique to facial recognition; it is well known from GDPR compliance and other use cases.

The stoppage is intended for private and public entities, but personal cameras — and an opening for facial recognition — are rapidly becoming ubiquitous. Log in to your neighborhood watch program for a close-to-home example. (What street doesn't have a doorbell camera?) Public life is being monitored, and passive data on our habits and lives is continually collected; anywhere there is a camera, facial recognition technology is in play.

The call by the ACM could be stronger. It urges the immediate suspension of facial recognition technology anywhere its use is "known or reasonably foreseeable to be prejudicial to established human and legal rights." What is considered reasonable here? Is good intent enough to absolve misuse of these systems from blame, for instance? The potential harm of these systems, and of the repurposing of their data, is often not readily apparent; by the time the bias is observed, the damage has been done. Given the risks and the uncertainty involved, it would be better to remove the call's dependency on expected harm: the use of facial recognition should be suspended until its ethical impact can be documented and governed properly.

Government Response
Governments have taken notice of public concern, of course, and have responded with proposed legislation. Several US cities, including Boston, Portland, and San Francisco, have banned the use of the technology. (See: US map of use and bans of facial recognition.)

There is also action on the national level. Currently proposed legislation in the US seeks to govern or declare a moratorium on facial recognition technology. In Europe, a five-year hiatus on the use of facial recognition in public spaces was proposed last year but was subsequently dropped this past January. These efforts are welcome, but if ethics lags technology, legislation is slower still.

Adversarial Technology
Another approach may be useful as well. Recently, researchers have developed "adversarial technology," using innovation to equip people to defeat location tracking, artificial intelligence, and other components of surveillance systems. These efforts have run the gamut from using fashion to defeat license plate camera systems to full-on fabrication of fake identities and personas to throw off location and online tracking.

This adversarial approach has now been developed for facial recognition as well, most notably in Fawkes, an open source tool released by researchers from the University of Chicago. Rather than making physical changes to a person's face, it masks photographs with slight alterations. Though these changes are not prominent to the human eye, they trick facial recognition systems into misidentifying the person — cloaking the individual's true identity. Over time, an increasing set of altered photos is incorporated into the collection of images that facial recognition systems use to catalogue and identify people, polluting their knowledge bases and protecting the true identity of the individual.
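The core idea behind such cloaking is that each pixel is nudged by only a tiny, bounded amount. Fawkes itself computes these nudges by optimizing against the feature extractors used by recognition models; the sketch below is a deliberately simplified, hypothetical illustration that uses random noise instead, just to show the "small change per pixel, bounded overall" principle. The function name and epsilon value are illustrative assumptions, not part of the Fawkes tool.

```python
import numpy as np

def cloak_image(pixels: np.ndarray, epsilon: float = 0.03, seed: int = 0) -> np.ndarray:
    """Apply a small, bounded perturbation to an image with values in [0, 1].

    Real cloaking tools (e.g., Fawkes) derive the perturbation by optimizing
    against a face-recognition feature extractor; random noise is used here
    purely to illustrate the bounded-perturbation concept.
    """
    rng = np.random.default_rng(seed)
    # Each channel value is shifted by at most +/- epsilon ...
    delta = rng.uniform(-epsilon, epsilon, size=pixels.shape)
    # ... and the result stays a valid image in [0, 1].
    return np.clip(pixels + delta, 0.0, 1.0)

# Example: a dummy 64x64 RGB "photo" of uniform mid-gray
photo = np.full((64, 64, 3), 0.5)
cloaked = cloak_image(photo)

# The per-pixel change is imperceptibly small, yet the pixel data differs.
max_change = float(np.max(np.abs(cloaked - photo)))
```

Because the maximum per-pixel change is capped at epsilon, the altered photo looks identical to a human viewer, while the pixel-level differences are enough to shift the image's position in a recognition model's feature space.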

A Pause for Reflection
The ACM is right to call for a suspension in the use of facial recognition to address bias and abuse, but our path toward ethical use of this kind of technology is likely not a straight, clear line. A combination of approaches is necessary to make responsible progress: consistent reporting on surveillance technology, governmental regulation, a sense of corporate responsibility, and adversarial technology all have a role to play. These approaches take time, and the ACM is correct to call for a break to give them time to develop.

It's time to address our ethical debt.

Mike Kiser is a security professional with 20 years of experience. He has designed, directed, and advised on large-scale security deployments for a global clientele. He recently presented at RSA Conference, Black Hat, and DEF CON. Mike co-hosts the podcast Mistaken Identity, ...
