

11/15/2017 04:38 PM
Curtis Franklin

iPhone's Facial Recognition Shows Cracks

A research firm says that it has successfully spoofed the facial recognition technology used in Apple's flagship iPhone X.

Multi-factor authentication is becoming a "must" for many applications, but questions remain about which factors are secure. A recent report from researchers in Vietnam has cast doubt on one promising new factor now available to millions.

In September, Apple announced the iPhone X with much fanfare and a flurry of new technology components. One of the most discussed was its facial recognition technology, which Apple has touted as convenient, low-friction and very, very secure. Bkav, a security firm based in Vietnam, doesn't dispute the first two qualities but says the security claim may be somewhat overstated.

In a test, researchers at Bkav said they were able to defeat the iPhone X's facial recognition technology -- technology that Apple claims is not vulnerable to spoofing or mistaken identity -- using a mask made with approximately $150 in materials. While the spoof has yet to be confirmed by other researchers, the claim raises some discomfiting possibilities.

The most troubling aspect of the demonstration is that the spoof was pulled off using a mask, after Apple went to great pains to show that its technology would work only with the living face of the device's owner. In a blog post, Bkav said that it listened carefully to Apple's statements, worked to understand the AI used in the facial-recognition software, and found a vulnerability.

In a statement announcing the vulnerability, Ngo Tuan Anh, Bkav's Vice President of Cyber Security, said: "Achilles' heel here is Apple let AI at the same time learn a lot of real faces and masks made by Hollywood's and artists. In that way, Apple's AI can only distinguish either a 100% real face or a 100% fake one. So if you create a 'half-real half-fake' face, it can fool Apple's AI".

It has been pointed out that building the mask was not easy: it required 3D scans of the owner's face, high-resolution 3D printing, and multiple attempts to get the spoof right. That means this is not a vulnerability likely to be exploited in any common scenario.

In the world of serious cybersecurity, though, unlikely is still possible, and that's enough to take a technology out of the candidate pool for protecting high-value individuals and data. For most consumers (and for many users in business scenarios), the facial recognition technology in the iPhone X could be good enough. Before it can be considered a real replacement for more proven multi-factor authentication, however, it may need more time to mature and improve.
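
For app developers, one practical reading of the research is to treat on-device biometrics as a convenience factor rather than the whole gate. The sketch below is a minimal, hypothetical Swift example using Apple's LocalAuthentication framework; it is not code from Apple or Bkav, and the `authorizeHighValueAction` and `verifyOneTimeCode` names are placeholders invented for illustration. It pairs a Face ID / Touch ID prompt with an independent second-factor check for sensitive operations.

```swift
import LocalAuthentication

// Hypothetical sketch: biometrics unlock the flow, but a separate one-time
// code is still required before a high-value action is authorized.
func authorizeHighValueAction(oneTimeCode: String,
                              completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?

    // .deviceOwnerAuthentication falls back to the device passcode when
    // biometrics are unavailable or repeatedly fail.
    guard context.canEvaluatePolicy(.deviceOwnerAuthentication, error: &error) else {
        completion(false)
        return
    }

    context.evaluatePolicy(.deviceOwnerAuthentication,
                           localizedReason: "Confirm it's really you") { success, _ in
        guard success else {
            completion(false)
            return
        }
        // Biometric (or passcode) check passed; still require the second factor.
        completion(verifyOneTimeCode(oneTimeCode))
    }
}

// Placeholder second-factor check; a real app would validate against a
// TOTP secret or a server-issued challenge rather than a simple format test.
func verifyOneTimeCode(_ code: String) -> Bool {
    return code.count == 6 && code.allSatisfy { $0.isNumber }
}
```

Using `.deviceOwnerAuthentication` rather than `.deviceOwnerAuthenticationWithBiometrics` is a design choice in this sketch: it accepts the passcode as a fallback, which reflects the article's point that facial recognition alone should not be the only line of defense for high-value data.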


— Curtis Franklin is the editor of SecurityNow.com. Follow him on Twitter @kg4gwa.
