Apple 'Lockdown Mode' Attack Subverts Key iPhone Security Feature

Even the most severe security protections for mobile phones aren't all-encompassing or foolproof, as a tactic involving a spoof of lockdown mode shows.

Researchers have discovered a way to subvert "Lockdown Mode," Apple's most stringent security protection for iOS.

The company first introduced Lockdown Mode last year, after a marked increase in nation-state-developed, zero-click exploits for iPhones. The new feature was designed to protect particularly vulnerable users — for example, activists and journalists in the crosshairs of dictatorships — by shutting off or otherwise significantly reducing features of the device that hackers love best.

In practice, however, this mode turns on a small number of identifiable functions, only some of which are newly protected within the device's kernel. As a result, on Dec. 5, analysts from Jamf Threat Labs were able to demonstrate how to subvert Lockdown Mode, delivering a like-for-like user experience while still allowing cyberattacks to persist underneath the surface.

Fake Lockdown Mode

"The important thing to remember is that Lockdown Mode is not malware prevention," explains Michael Covington, vice president of portfolio strategy at Jamf. "It's not a malware detection tool. It's not something that can block malware that's already installed. It can't limit the efficacy of malware, and it doesn't stop data exfiltration or communication with command and control."

Instead, it's designed to massively reduce the available surface within which attackers can gain an initial foothold into the device. It does this by, for example, removing support for file formats popular in cyberattacks, disabling certain convenience features — like the preview window associated with links shared in iMessage — and restricting Web browsing with captive portals.

If an attacker has already compromised a device, Apple's Lockdown Mode won't boot them out. It can make persistence more difficult, though, which is where the Jamf proof-of-concept (PoC) comes in.

By identifying and manipulating just a few bits of code responsible for triggering and maintaining Lockdown Mode, the Jamf researchers were able to disable it, while simultaneously presenting the user with visual cues mimicking all of Lockdown Mode's typical identifying traits.

For example, they replaced the method responsible for triggering Lockdown Mode with a file — '/fakelockdownmode_on' — that triggered a restart in user space. They mimicked Lockdown Mode in Safari by hooking the function responsible for turning on the captive portal Web engine, and by hooking the function that reports Lockdown Mode's status in the first place.
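The general technique at work here — replacing a status-reporting function so the UI lies to the user — can be sketched in miniature. The following is a hypothetical illustration only, written in Python for clarity rather than the Objective-C hooking the researchers actually used; none of the names (`Device`, `install_fake_lockdown`, etc.) come from the Jamf PoC.

```python
# Illustrative sketch of the hooking concept: the "enable" handler is
# swallowed so protections never turn on, while the status function is
# replaced to always report that Lockdown Mode is active.

class Device:
    def __init__(self):
        self._lockdown_enabled = False

    def enable_lockdown(self):
        # A real implementation would reduce attack surface here.
        self._lockdown_enabled = True

    def lockdown_status(self):
        return self._lockdown_enabled


def install_fake_lockdown(device):
    """Replace the real handlers so the UI shows Lockdown Mode as on
    while no protections are actually applied (malware's perspective)."""
    def fake_enable():
        pass  # silently drop the user's request; nothing is protected

    def fake_status():
        return True  # always tell the UI that Lockdown Mode is active

    device.enable_lockdown = fake_enable
    device.lockdown_status = fake_status


d = Device()
install_fake_lockdown(d)
d.enable_lockdown()         # the user "turns on" Lockdown Mode
print(d.lockdown_status())  # the UI reports it as enabled: True
print(d._lockdown_enabled)  # but the protections are off: False
```

The point of the sketch is the asymmetry: everything the user can observe says the protection is active, while the underlying state never changes — which is why Apple's move of this logic into the kernel, described below, matters.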

These tricks are more difficult to pull off, though, as of iOS 17, when Apple elevated Lockdown Mode to the kernel. "This strategic move is a great step in enhancing security," the researchers wrote. Not only is kernel-level code more heavily protected than code in the user space, but, importantly, "changes made by lockdown mode in the kernel typically cannot be undone without undergoing a system reboot, thanks to existing security mitigations." Such a reboot might spell doom for an attacker's persistence.

An Industry-Wide Security Blind Spot

Few people will find themselves needing to use Lockdown Mode. But the point of the story has little to do with this particular exploit, or even with Lockdown Mode itself.

"There's so much focus in the security research community around named attacks. Everybody's interested in Pegasus. We're also really interested in very specific attack vectors, like phishing attacks. There hasn't been a lot of study on the different techniques that get utilized by malware to maintain persistence on a device and to not draw attention to the fact that it's running and potentially doing some damaging things to the user or the device," Covington explains.

The result is that some areas of security get loads of attention, while other potentially crucial areas fall through the cracks.

"We've done such a good job of training users to look for phishing attacks in company email — everybody is really suspicious of emails or text messages that they've received from unknown parties, especially if they have links," he explains. "I think we now need to train our workers to also look for other indicators that their devices may be compromised, so they can raise the red flag."

Covington recommends keeping a keen eye out for performance issues, or for UI elements that seem out of place. "It's really important that people go about their days with the mindset that they should be questioning everything that they see," he says.

About the Author(s)

Nate Nelson, Contributing Writer

Nate Nelson is a freelance writer based in New York City. Formerly a reporter at Threatpost, he contributes to a number of cybersecurity blogs and podcasts. He writes "Malicious Life" -- an award-winning Top 20 tech podcast on Apple and Spotify -- and hosts every other episode, featuring interviews with leading voices in security. He also co-hosts "The Industrial Security Podcast," the most popular show in its field.
