Dark Reading is part of the Informa Tech Division of Informa PLC



8/11/2020
07:00 PM

Researchers Trick Facial-Recognition Systems

Goal was to see if computer-generated images that look like one person would get classified as another person.

Neural networks powered by recent advances in artificial intelligence and machine learning technologies increasingly have become adept at generating photo-realistic images of human faces completely from scratch.

The systems typically use a dataset comprising millions of images of real people to "learn" over time how to autonomously generate original images of their own.

At the Black Hat USA 2020 virtual event last week, researchers from McAfee showed how they were able to use such technologies to successfully trick a facial-recognition system into misclassifying one individual as an entirely different person. As an example, the researchers showed how at an airport an individual on a no-fly list could trick a facial-recognition system used for passport verification into identifying him as another person.

"The basic goal here was to determine if we could create a fake image, using machine learning models, which looked like one person to the human eye, but simultaneously classified as another person to a facial recognition system," says Steve Povolny, head of advanced threat research at McAfee.

To do that, the researchers built a machine-learning model and fed it training data: a set of 1,500 photos of two separate individuals. The images were captured from live video and sought to accurately represent valid passport photos of the two people.

The model then continuously created and tested fake images of the two individuals by blending the facial features of both subjects. Over hundreds of training loops, the machine-learning model eventually reached a point where it was generating images that looked like a valid passport photo of one of the individuals, even as the facial-recognition system identified the photo as the other person.
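Conceptually, the loop works like the following toy sketch. Everything here is a hypothetical stand-in: the "embeddings" are random vectors, and the two scoring functions substitute for the real CycleGAN pipeline and facial-recognition classifier, which the article does not detail. Only the blend-and-test dynamic is illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 128-dim "face feature vectors" standing in for the two subjects.
subject_a = rng.normal(size=128)   # person the image should resemble to a human
subject_b = rng.normal(size=128)   # person the classifier should report

def mock_classifier_score(x):
    """Stand-in for the facial-recognition model: cosine similarity to B."""
    return float(x @ subject_b) / (np.linalg.norm(x) * np.linalg.norm(subject_b))

def mock_human_similarity(x):
    """Stand-in for 'still looks like A to the human eye': similarity to A."""
    return float(x @ subject_a) / (np.linalg.norm(x) * np.linalg.norm(subject_a))

# Start from a pure image of A and blend in B's features a little at a time,
# stopping once the classifier leans toward B while similarity to A stays high.
alpha = 0.0
candidate = subject_a.copy()
for step in range(200):
    candidate = (1 - alpha) * subject_a + alpha * subject_b
    if mock_classifier_score(candidate) > 0.55 and mock_human_similarity(candidate) > 0.55:
        break
    alpha += 0.005

print(f"blend weight {alpha:.2f}: "
      f"looks-like-A {mock_human_similarity(candidate):.2f}, "
      f"classified-as-B {mock_classifier_score(candidate):.2f}")
```

The real attack replaces the fixed blend weight with a learned generator and the cosine scores with an actual recognition model, but the search structure, generate, test, adjust, repeat, is the same.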

Povolny says the passport-verification system attack scenario — though not the primary focus of the research — is theoretically possible to carry out. Because digital passport photos are now accepted, an attacker can produce a fake image of an accomplice, submit a passport application, and have the image saved in the passport database. So if a live photo of the attacker later gets taken at an airport — at an automated passport-verification kiosk, for instance — the image would be identified as that of the accomplice.

"This does not require the attacker to have any access at all to the passport system; simply that the passport-system database contains the photo of the accomplice submitted when they apply for the passport," he says.  

The passport system relies simply on determining whether two faces match or do not match. All it does is verify whether a live photo of a person matches a photo saved in the back end. Such an attack is therefore entirely feasible, though it requires some effort to pull off, Povolny says.
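In other words, verification reduces to a distance test between two face embeddings. The sketch below illustrates that idea in a hypothetical FaceNet-style form; the random projection, threshold value, and function names are illustrative assumptions, not the actual passport system or FaceNet's implementation.

```python
import numpy as np

def embed(image_vector):
    # Stand-in for a deep face-embedding network: a fixed linear projection
    # followed by L2 normalization. (A real system runs a trained CNN.)
    rng = np.random.default_rng(42)            # fixed projection for the demo
    projection = rng.normal(size=(128, 64))
    v = image_vector @ projection
    return v / np.linalg.norm(v)

def faces_match(live_photo, stored_photo, threshold=0.5):
    """Verification is just a distance test: below threshold -> same person."""
    distance = np.linalg.norm(embed(live_photo) - embed(stored_photo))
    return distance < threshold

rng = np.random.default_rng(7)
enrolled = rng.normal(size=128)                        # photo saved at application time
same_person = enrolled + 0.05 * rng.normal(size=128)   # new capture, small variation
stranger = rng.normal(size=128)                        # unrelated face

print(faces_match(same_person, enrolled))   # small embedding distance -> match
print(faces_match(stranger, enrolled))      # large embedding distance -> no match
```

The attack exploits exactly this narrowness: the system never asks "who is this?", only "are these two embeddings close?", so a crafted enrollment photo poisons every later comparison against it.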

"It is less likely that a physical passport photo that was mailed in, scanned, and uploaded to this database, would work for the attack," he notes.

Generative Adversarial Networks

McAfee's research involved the use of a so-called Generative Adversarial Network (GAN) known as CycleGAN. GANs are neural networks capable of independently creating data that closely resembles the data fed into them. For example, a GAN can use a set of real images of human faces or of horses to autonomously generate completely synthetic, but very real-looking, images of faces or horses. GANs pair a generative network, which produces the synthetic data, with a discriminative network, which continuously assesses the quality of the generated content until it reaches an acceptable level.
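That generator-versus-discriminator interplay can be shown with a deliberately tiny, hypothetical example: an affine generator learning to mimic samples from a 1-D Gaussian while a logistic discriminator tries to tell real from fake. This sketches only the training dynamic; CycleGAN and real image GANs are vastly more complex, and all hyperparameters here are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(3, 1). The generator starts as pure noise
# around 0 and must learn to produce samples that fool the discriminator.
def real_batch(n):
    return rng.normal(loc=3.0, scale=1.0, size=n)

w, b = 1.0, 0.0          # generator:      G(z) = w*z + b
a, c = 0.1, 0.0          # discriminator:  D(x) = sigmoid(a*x + c)
lr, batch = 0.05, 64

for step in range(2000):
    z = rng.normal(size=batch)
    fake = w * z + b
    real = real_batch(batch)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    a += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: push D(fake) toward 1, i.e. make fakes look real to D.
    d_fake = sigmoid(a * fake + c)
    grad = (1 - d_fake) * a          # gradient of -log D(G(z)) w.r.t. G(z)
    w += lr * np.mean(grad * z)
    b += lr * np.mean(grad)

print(f"generator output mean after training: {b:.2f} (real data mean: 3.0)")
```

Each round, the discriminator gets slightly better at spotting fakes and the generator gets slightly better at fooling it, which is the feedback loop the paragraph above describes.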

CycleGAN itself, according to McAfee, is a GAN for image-to-image translation: translating an image of zebras into an image of horses, for example. Notably, it relies on prominent features of an image for the translation, such as eye placement, head shape, and body size.

In addition to CycleGAN, the McAfee researchers used a facial-recognition architecture called FaceNet, originally developed by Google for image classification. Building and training the machine-learning model took several months.

"While we would have loved to have access to a real-world target system to replicate this, we are thrilled with the results of achieving positive misclassifications in white-box and gray-box scenarios," Povolny says.

Given the increasingly important role that facial recognition systems have begun playing in law enforcement and other areas, more proactive research is needed to understand all of the ways such systems can be attacked, he says.

"Anomaly testing, adversarial input, and more diverse training data are among the ways that vendors can improve facial recognition systems," Povolny notes. "Additionally, defense-in-depth, leveraging a second system, whether human or machine, can provide a much higher bar to exploitation than a single point of failure."

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.
 
