Threat Intelligence

Using Adversarial Machine Learning, Researchers Look to Foil Facial Recognition

For privacy-seeking users, good news: Computer scientists are finding more ways to thwart facial and image recognition. But there's also bad news: Gains will likely be short-lived.

Suspect identification using massive databases of facial images. Reputational attacks through deep-fake videos. Security access using the face as a biometric. Facial recognition is quickly becoming a disruptive technology with few limits imposed by privacy policy.

Academic researchers, however, have found ways to — at least temporarily — cause problems for certain classes of facial-recognition algorithms, taking advantage of weaknesses in the training algorithm or the resultant recognition model. Last week, a team of computer-science researchers at the National University of Singapore (NUS) published a technique that locates the areas of an image where changes can best disrupt image-recognition algorithms, but where those changes are least noticeable to humans.
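
The NUS paper's exact method isn't reproduced here, but attacks in this general family typically follow the model's own gradients to find the pixel changes that most confuse it while staying below a visibility threshold. A minimal sketch of that idea, assuming a stand-in torchvision classifier and a small per-pixel budget (illustrative only, not the researchers' code):

# Hypothetical sketch of a gradient-based adversarial perturbation (FGSM-style),
# standing in for the general class of attack described above -- not the NUS method.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Assumption: any differentiable recognition model works here; ResNet-18 is a stand-in.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),   # floats in [0, 1]
])

def perturb(image_path, epsilon=2.0 / 255):
    # Load the photo (ImageNet normalization omitted for brevity; a real attack
    # would fold it into the model).
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    x.requires_grad_(True)

    logits = model(x)
    target = logits.argmax(dim=1)            # the model's current best guess
    loss = F.cross_entropy(logits, target)   # how strongly it holds that guess
    loss.backward()

    # Nudge every pixel a small step in the direction that most increases the loss,
    # then clamp back to a valid image. epsilon caps how visible the change can be.
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
    return x_adv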

The technique is general in that it can be used to develop an attack against other machine-learning (ML) algorithms, but the researchers only developed a specific instance, says Mohan Kankanhalli, a professor in the NUS Department of Computer Science and co-author of a paper on the adversarial attack.

"Currently, we need to know the class [of algorithm] and can develop a solution for that," he says. "We are working on its generalization, to have one solution that works for every class, current and future. However, that is nontrivial and hence we anticipate it will take time."

The research raises the possibility of creating photos that people can easily perceive but that foil commonly used facial-recognition algorithms. Turned into a filter, for example, the technique could allow users to add imperceptible changes to photos, making them more difficult for ML algorithms to classify and foiling the development of reverse image search engines.

Such methods, however, currently take advantage of the brittleness of training algorithms. As ML-focused companies develop more robust algorithms, privacy-seeking Internet users will have to decide whether to degrade photos to the point where the changes are noticeable to humans.
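
That tradeoff can be made concrete by measuring how far a "filtered" photo has drifted from the original. A minimal sketch, using two common proxies for visibility rather than any metric from the research:

# Hypothetical helper: quantify how noticeable a perturbation is.
# Assumes both images are float tensors in [0, 1] with identical shapes.
import torch

def perceptibility(original, perturbed):
    diff = (perturbed - original).abs()
    mse = (diff ** 2).mean()
    psnr = 10 * torch.log10(1.0 / mse)          # higher = harder for a person to notice
    return {
        "max_pixel_change": diff.max().item(),  # the per-pixel budget actually used
        "psnr_db": psnr.item(),
    }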

"It's a very difficult problem because pictures themselves have utility for us," says Joey Bose, a PhD student in computer science at McGill University, who published related adversarial ML research in 2018. "But the only surefire way to guarantee privacy is to remove content from the image, and if you remove content, it becomes less useful."

The developments come as several companies are using — some would say abusing — the plethora of image content readily available on the Internet. Social media sites make it easy to gather an enormous number of images on which to train neural networks, allowing companies to create sophisticated recognition models or turn people's photos into deep-fake videos. Clearview.ai, a company that sells its ability to find people's online presence from a provided photo, collected hundreds of millions of photos from social media sites to create its large reverse image search engine.

The NUS technique has to be tailored to the specific class of facial-recognition algorithm, which means the researchers have to know the details of the facial-recognition system. Only then can they create a technique for confounding that approach.

While such techniques could help people protect their images and their privacy, most of the research is aimed at preventing such attacks in the future: Far from just attacking ML algorithms, the researchers are basically performing the role of a red team, checking the quality of the models, says McGill's Bose. 

"The research can inform policy and help companies know what they need to check off before these systems are put into the wild," he says. "Better models are more reliable."

And even if the researchers did find an approach that could foil the most common approaches to facial recognition, the surveillance system makers would likely be able to find ways to work around the technology, says the National University of Singapore's Kankanhalli.

Any success will be "temporary in the sense it is like a cat-and-mouse game," Kankanhalli explains. "There will be better techniques developed by the data harvesters, and in response there will be better privacy techniques to counter them. There probably will never be a final solution in such constantly evolving problem areas."

Unlike encryption, which can provide privacy even against well-funded adversaries, ML attacks often only work until the model is retrained, he says. The best way to limit artificial intelligence is through policy, Kankanhalli says.

"When consumers frown upon companies who violate privacy, then there will be a backlash against them, which will change the behavior of such companies," he says. "We as consumers and users should therefore articulate what is acceptable and what is not. I strongly believe we need work on all these three fronts to tackle this data harvesting problem."

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline ...