
Endpoint Security


iPhone's Facial Recognition Shows Cracks

A research firm says that it has successfully spoofed the facial recognition technology used in Apple's flagship iPhone X.

Multi-factor authentication is becoming a "must" for many applications, but questions remain about which factors are secure. A recent report from researchers in Vietnam has cast doubt on one promising new factor now available to millions.

In September, Apple announced the iPhone X with much fanfare and a flurry of new technology components. One of the most discussed is its facial recognition technology, which Apple has touted as convenient, low-friction and very, very secure. Bkav, a security firm based in Vietnam, doesn't dispute the first two qualities but says that the security claim may be somewhat overstated.

In a test, researchers at Bkav said they were able to defeat the iPhone X's facial recognition technology -- technology that Apple claims is not vulnerable to spoofing or mistaken identity -- using a mask made from approximately $150 in materials. While the spoof has yet to be confirmed by other researchers, the claim raises some discomfiting possibilities.

The most troubling aspect of the demonstration is that the spoof was pulled off using a mask, after Apple went to great pains to show that its technology would work only with the living face of the device owner. In a blog post, Bkav said that it listened carefully to Apple's statements, worked to understand the AI used in the facial-recognition software, and found a vulnerability.

In a statement announcing the vulnerability, Ngo Tuan Anh, Bkav's Vice President of Cyber Security, said: "Achilles' heel here is Apple let AI at the same time learn a lot of real faces and masks made by Hollywood's and artists. In that way, Apple's AI can only distinguish either a 100% real face or a 100% fake one. So if you create a 'half-real half-fake' face, it can fool Apple's AI".
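
Bkav's description points to a familiar failure mode: a classifier trained only on clearly real and clearly fake examples has no basis for rejecting inputs that fall between the two. The toy sketch below is a hypothetical illustration only -- it is not Apple's Face ID model or training data -- but it shows how a simple classifier fitted to two extreme clusters can still score a blended, half-real probe on the "real" side of its decision boundary.

```python
# Hypothetical sketch only: a stand-in binary classifier, not Apple's Face ID.
# Point: a model trained only on "fully real" and "fully fake" examples can
# still score a half-real, half-fake input as real.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D feature: fraction of the presented face that is genuinely live
# (1.0 = the owner's real face, 0.0 = a complete mask). The training set
# contains only the two extremes, mirroring Bkav's description.
real_faces = rng.normal(loc=0.95, scale=0.03, size=200)
full_masks = rng.normal(loc=0.05, scale=0.03, size=200)

x = np.concatenate([real_faces, full_masks])
y = np.concatenate([np.ones(200), np.zeros(200)])  # 1 = real, 0 = fake

# Fit a plain logistic regression with gradient descent (no extra libraries).
w, b = 0.0, 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.5 * np.mean((p - y) * x)
    b -= 0.5 * np.mean(p - y)

def real_score(liveness: float) -> float:
    """Model's estimated probability that the input is a real face."""
    return float(1.0 / (1.0 + np.exp(-(w * liveness + b))))

# A "half-real, half-fake" probe sits between the two training clusters,
# yet the model, never trained on such cases, still places it on the
# "real" side of the boundary rather than flagging it as suspicious.
for probe in (1.0, 0.6, 0.0):
    print(f"liveness={probe:.2f} -> real score {real_score(probe):.2f}")
```

A real system would of course operate in far more dimensions, but the underlying issue Bkav describes is the same: a model's confidence is only meaningful for inputs that resemble what it was trained on.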

It has been pointed out that building the mask was not easy: it required 3D scans of the owner's face, high-resolution 3D printing, and multiple attempts to get the spoof right. That means this is not a vulnerability likely to be exploited in any common scenario.

In the world of serious cybersecurity, though, unlikely is still possible, and that's enough to take a technology out of the candidate pool for protecting high-value individuals and data. For most consumers (and for many users in business scenarios) the facial recognition technology in the iPhone X could be good enough. Before it can be considered a real replacement for more proven multi-factor authentication methods, though, the technology may need more time to mature and improve.

— Curtis Franklin is the editor of SecurityNow.com. Follow him on Twitter @kg4gwa.
