Securing AI With Confidential Computing
By enabling secure AI deployments in the cloud without compromising data privacy, confidential computing may become a standard feature in AI services.
September 30, 2024
Confidential computing is a breakthrough technology designed to enhance the security and privacy of data during processing. By leveraging hardware-based and attested trusted execution environments (TEEs), confidential computing helps ensure that sensitive data remains secure, even when in use.
For artificial intelligence (AI), confidential computing is emerging as a key solution to address growing AI-related security and privacy concerns. Confidential computing for GPUs is already available for small to midsized models. As technology advances, Microsoft and NVIDIA plan to offer solutions that will scale to support large language models (LLMs).
Addressing Data Privacy and Sovereignty Concerns
Data privacy and data sovereignty are among the primary concerns for organizations, especially those in the public sector. Governments and institutions handling sensitive data are wary of using conventional AI services due to potential data breaches and misuse.
Confidential AI is emerging as a crucial solution for these scenarios. Confidential AI addresses two primary security goals:
Protection against infrastructure access: Ensuring that AI prompts and data are secure from cloud infrastructure providers, such as Azure, where AI services are hosted.
Protection against service providers: Maintaining privacy from AI service operators, such as OpenAI.
Examining Potential Use Cases
Imagine a pension fund that works with highly sensitive citizen data when processing applications. AI can accelerate the process significantly, but the fund may be hesitant to use existing AI services for fear of data leaks or the information being used for AI training purposes.
Another use case involves large corporations that want to analyze board meeting minutes, which contain highly sensitive information. While they might be tempted to use AI, they refrain from applying any existing solutions to such critical data due to privacy concerns.
Confidential AI mitigates these concerns by protecting AI workloads with confidential computing. If applied correctly, confidential computing can effectively prevent access to user prompts. It even becomes possible to ensure that prompts cannot be used for retraining AI models.
Confidential computing achieves this with runtime memory encryption and isolation, as well as remote attestation. The attestation process uses evidence provided by system components such as hardware, firmware, and software to demonstrate the trustworthiness of the confidential computing environment or program, providing an additional layer of security and trust.
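The core idea of remote attestation can be sketched in a few lines. This is an illustrative model only, not a real implementation: actual attestation relies on hardware-signed reports (for example, AMD SEV-SNP or Intel TDX quotes) verified against vendor certificate chains. The reference name and workload contents below are hypothetical; the sketch shows only the verifier's two essential checks, a known-good measurement and a fresh nonce.

```python
import hashlib
import hmac
import secrets

# Hypothetical known-good hash of the VM's firmware + kernel + AI stack.
REFERENCE_MEASUREMENTS = {
    "ai-inference-v1": hashlib.sha256(b"firmware|kernel|model-server").hexdigest(),
}

def make_evidence(workload_bytes: bytes, nonce: bytes) -> dict:
    """What the TEE would report: a measurement bound to the verifier's nonce."""
    measurement = hashlib.sha256(workload_bytes).hexdigest()
    return {"measurement": measurement, "nonce": nonce}

def verify_evidence(evidence: dict, expected_nonce: bytes, reference: str) -> bool:
    """Verifier side: accept only a fresh report with a known-good measurement."""
    fresh = hmac.compare_digest(evidence["nonce"], expected_nonce)
    known_good = evidence["measurement"] == REFERENCE_MEASUREMENTS[reference]
    return fresh and known_good

# A matching workload passes; a tampered one fails.
nonce = secrets.token_bytes(16)
assert verify_evidence(
    make_evidence(b"firmware|kernel|model-server", nonce), nonce, "ai-inference-v1")
assert not verify_evidence(
    make_evidence(b"firmware|kernel|model-server|backdoor", nonce),
    nonce, "ai-inference-v1")
```

The nonce prevents replaying an old report; the measurement check is what ties trust to the exact software that is running, not merely to the machine.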
Overcoming Barriers in Regulated Industries
Sensitive and highly regulated industries such as banking are particularly cautious about adopting AI due to data privacy concerns. Confidential AI can bridge this gap by helping ensure that AI deployments in the cloud are secure and compliant. With confidential computing, banks and other regulated entities may use AI on a large scale without compromising data privacy. This allows them to benefit from AI-driven insights while complying with stringent regulatory requirements.
Securely Running AI Deployments in the Cloud
For organizations that prefer not to invest in on-premises hardware, confidential computing offers a viable alternative. Rather than purchasing and managing physical data centers, which can be costly and complex, companies can use confidential computing to secure their AI deployments in the cloud.
For example, an in-house admin can create a confidential computing environment in Azure using confidential virtual machines (VMs). By installing an open source AI stack and deploying models such as Mistral, Llama, or Phi, organizations can manage their AI deployments securely without the need for extensive hardware investments. This approach eliminates the challenges of managing added physical infrastructure and provides a scalable solution for AI integration.
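As a rough sketch of that first step, a confidential VM can be created with the Azure CLI. The flag names below are current Azure CLI syntax, but the resource group, VM name, and image are placeholders, and the size shown is a CPU-only confidential size; GPU-enabled confidential sizes (such as the NCCads H100 v5 series) vary by region and availability.

```shell
# Create a confidential VM (placeholder names; verify image/size availability)
az vm create \
  --resource-group my-rg \
  --name confidential-ai-vm \
  --image Canonical:0001-com-ubuntu-confidential-vm-jammy:22_04-lts-cvm:latest \
  --size Standard_DC4as_v5 \
  --security-type ConfidentialVM \
  --os-disk-security-encryption-type VMGuestStateOnly \
  --enable-secure-boot true \
  --enable-vtpm true
```

From there, the admin installs the inference stack and model weights inside the VM exactly as on any Linux host; the difference is that memory stays encrypted at runtime and the VM's state can be attested.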
Building Secure AI Services
Confidential computing not only enables the secure migration of self-managed AI deployments to the cloud; it also enables the creation of new services that protect user prompts and model weights from both the cloud infrastructure and the service provider.
For example, Continuum, a new service offered by Edgeless Systems, leverages Azure confidential VMs with NVIDIA H100 GPUs. Inside the confidential VMs, Continuum runs the AI code within a sandbox based on the open source software gVisor. This architecture allows the Continuum service to lock itself out of the confidential computing environment, preventing AI code from leaking data. In combination with end-to-end remote attestation, this ensures robust protection for user prompts.
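The "lock itself out" property can be made concrete with a small sketch. This is not Continuum's actual protocol; it only illustrates the general pattern of binding the key that protects a prompt to the attested measurement of the environment, so that a modified workload cannot derive the key. The measurement value and the toy XOR cipher are illustrative; real systems use AEAD ciphers and hardware-rooted key derivation.

```python
import hashlib
import hmac
import secrets

# Hypothetical expected measurement of the sandboxed inference code.
EXPECTED_MEASUREMENT = hashlib.sha256(b"gvisor-sandbox|inference-code-v1").digest()

def derive_prompt_key(shared_secret: bytes, measurement: bytes) -> bytes:
    # Binding the key to the measurement means a tampered workload
    # (different measurement) derives a different, useless key.
    return hmac.new(shared_secret, measurement, hashlib.sha256).digest()

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # Toy XOR keystream for illustration; encryption and decryption
    # are the same operation. Real systems use AES-GCM or similar.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

shared_secret = secrets.token_bytes(32)
key = derive_prompt_key(shared_secret, EXPECTED_MEASUREMENT)
ciphertext = xor_crypt(key, b"summarize the board minutes")

# Same secret + matching measurement recovers the prompt...
assert xor_crypt(key, ciphertext) == b"summarize the board minutes"
# ...but a tampered environment derives the wrong key.
wrong = derive_prompt_key(shared_secret, hashlib.sha256(b"tampered").digest())
assert xor_crypt(wrong, ciphertext) != b"summarize the board minutes"
```

The design point is that no administrator password or API grants access to prompts: access exists only inside an environment whose measurement verifies, which is what end-to-end attestation enforces.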
Envisioning Confidential AI as a Standard
As confidential AI becomes more prevalent, it's likely that such options will be integrated into mainstream AI services, providing an easy and secure way to utilize AI. This could transform the landscape of AI adoption, making it accessible to a broader range of industries while maintaining high standards of data privacy and security.
Confidential computing offers significant benefits for AI, particularly in addressing data privacy, regulatory compliance, and security concerns. For highly regulated industries, confidential computing will enable entities to harness AI's full potential more securely and effectively. Confidential AI may even become a standard feature in AI services, paving the way for broader adoption and innovation across all sectors.
Learn more about GPU-enhanced confidential VMs on the Microsoft Tech Community.