Apple's AI Offering Makes Big Privacy Promises
Apple's guarantee of privacy on every AI transaction could influence trustworthy AI deployments.
June 14, 2024
Apple has long styled itself as the company that cares about privacy, and it continues to tout its privacy bona fides with Apple Intelligence.
At this week's Worldwide Developer Conference, the company announced Apple Intelligence, its secure artificial intelligence (AI) system, and plans to integrate AI across its devices and applications. The features will upgrade the personal assistant on Apple devices and provide help with writing and image manipulation.
A lightweight AI model runs locally on Apple devices, while more complex queries go to the cloud: Apple assesses the complexity and computing power an AI query requires and decides whether to process it on-device or send it out. Data never leaves the device when a query is processed locally, and layers of security protect the information that is sent to the cloud.
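Apple has not published the routing logic, but the behavior it describes amounts to a complexity-based dispatch between a local model and the cloud. A minimal Swift sketch, assuming a hypothetical QueryRouter type and an arbitrary cost threshold (neither comes from Apple), might look like this:

```swift
// A hypothetical sketch of Apple's on-device vs. cloud routing decision.
// The type names and threshold are illustrative assumptions, not Apple's API.
enum InferenceTarget {
    case onDevice      // the query and its data never leave the device
    case privateCloud  // the query goes to Apple's secure cloud servers
}

struct QueryRouter {
    // Assumed cutoff: queries estimated to need more than this go to the cloud.
    let complexityThreshold: Double

    func route(estimatedCost: Double) -> InferenceTarget {
        estimatedCost <= complexityThreshold ? .onDevice : .privateCloud
    }
}

let router = QueryRouter(complexityThreshold: 0.7)
print(router.route(estimatedCost: 0.3))  // onDevice
print(router.route(estimatedCost: 0.9))  // privateCloud
```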
Within this secure AI system, user data is not visible to anyone, even Apple, and is deleted once a query is answered, the company said.
"Your data is never stored or made accessible to Apple. It's used exclusively to fulfill your request," said Craig Federighi, senior vice president of software engineering at Apple, during a WWDC keynote.
Trusted AI Deployments?
Apple's guarantee of privacy on every AI transaction, whether on-device or in the cloud, is ambitious and could influence trustworthy AI deployments. The hardware and software implementation behind that guarantee sets a high bar for zero-trust infrastructure that competitors may try to match, says James Sanders, an analyst at chip research firm TechInsights.
Sanders notes there is a "growing trust problem" with large language models (LLMs), especially as top AI providers collect information from users to train those models or improve services. Google, for example, collects user data from across its vast portfolio of products to boost its models' capabilities and uses human reviewers to evaluate answers, with the ultimate goal of improving its Gemini AI service.
Apple did not announce a general-purpose chatbot of its own and instead will integrate OpenAI's ChatGPT, leaving open the possibility of working with other providers as well. Users will be prompted for consent before any of their data is sent to OpenAI.
"I do wonder to what extent this is going to put pressure on Microsoft to adopt a similar method," Sanders says.
Building an AI Walled Garden
As part of its AI investments, the company has built a trustworthy cloud infrastructure, named Private Cloud Compute, with new servers and chips to handle AI queries that demand more computing power. Apple's secure boot and secure enclaves protect the data.
"These models run on servers … specially created using Apple silicon," Apple’s Federighi said. "These Apple silicon servers offer the privacy and security of your iPhone from the silicon on up."
Private data can be stolen in transit or from the cloud, but Apple is using the latest hardware and software techniques to protect it, says Matthew Green, an associate professor in Johns Hopkins University's computer science department. Apple devices generate encryption keys, which authenticate the device and its connection to the cloud. The request is then sent to a secure enclave on Apple's servers, where it is processed.
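Green does not spell out the exact protocol, but the flow he describes resembles a standard ephemeral key agreement. Here is a minimal sketch using Swift's CryptoKit, assuming a Diffie-Hellman-style exchange and illustrative names that Apple has not confirmed:

```swift
import CryptoKit
import Foundation

// Illustrative sketch only: a device-generated ephemeral key protects a
// request en route to a server-side enclave. Apple has not published this API.
func encryptRequest(_ request: Data,
                    enclavePublicKey: Curve25519.KeyAgreement.PublicKey) throws
    -> (ciphertext: Data, devicePublicKey: Curve25519.KeyAgreement.PublicKey) {
    // The device generates a fresh key pair for this connection.
    let deviceKey = Curve25519.KeyAgreement.PrivateKey()

    // Derive a shared symmetric key from the enclave's public key.
    let secret = try deviceKey.sharedSecretFromKeyAgreement(with: enclavePublicKey)
    let symmetricKey = secret.hkdfDerivedSymmetricKey(using: SHA256.self,
                                                      salt: Data(),
                                                      sharedInfo: Data(),
                                                      outputByteCount: 32)

    // Only an enclave holding the matching private key can decrypt the request.
    let sealed = try AES.GCM.seal(request, using: symmetricKey)
    return (sealed.combined!, deviceKey.publicKey)  // combined is non-nil for the default nonce
}
```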
Apple is also taking a stateless approach, in which data is deleted once the answer is sent back. New keys are generated each time a system reboots, and the operating system includes an attestation mechanism to verify users and software images.
"Specifically, it signs a hash of the software and shares this with every phone/client," Green said. "If you trust this infrastructure, you'll know it's running a specific piece of software."
Apple controls nearly every major piece of software and hardware in its infrastructure, which makes it easier for the company to build a strong privacy wall around its AI service, says Alex Matrosov, CEO of security company Binarly.
"I really liked how they frame all the requirements and specifically use their own silicon," Matrosov said. "Once they control everything, they can guarantee the chain of trust from the silicon to the cloud."
Rival providers don’t have nearly the same level of control over their AI infrastructure. Unlike Apple, they can’t lock down security as queries pass through various hardware and software layers, Matrosov says. For example, OpenAI and Microsoft process queries on Nvidia GPUs, and it is Nvidia that handles vulnerability discovery and patching.
"If Apple sets the standard, the effect will be, 'Why should I buy Android if I don’t care about the privacy?'" Matrosov said. "The next step will be Google following up and trying to maybe implement or do the similar thing."