Key Building Blocks to Advance American Leadership in AI
AI has tremendous potential to improve efficiency and outcomes in the public and private sectors. A holistic approach to AI and security is critical to achieving the potential of AI while minimizing the risks.
The AI era is set to be a time of significant change for technology and information security. To guide the development and deployment of AI tools in a way that embraces their benefits while safeguarding against potential risks, the US government has outlined a set of voluntary commitments it is asking companies to make. The focus areas for these voluntary commitments are:
Safety. The government encourages internal and external red-teaming, as well as open information sharing about potential risks.
Security. Companies should invest in proper cybersecurity measures to protect their models and offer incentives for third parties to report vulnerabilities in responsible ways.
Trust. Companies should develop tools to identify AI-generated content and prioritize research on the ways AI could cause harm at a societal level so those harms can be mitigated.
Google signed on to these voluntary commitments from the White House, and we are making specific, documented progress towards each of these three goals. Responsible AI development and deployment will require close work between industry leaders and the government. To advance that goal, Google, along with several other organizations, partnered to host a forum in October to discuss AI and security.
As part of the October AI security forum, we discussed a new Google report focused on AI in the US public sector: Building a Secure Foundation for American Leadership in AI. This whitepaper highlights how Google has already worked with government organizations to improve outcomes, accessibility, and efficiency. The report advocates for a holistic approach to security and explains the opportunities a secure AI foundation will provide to the public sector.
The Potential of Secure AI
Security can often feel like a race, as technology providers need to consider the risks and vulnerabilities of new developments before attacks occur. Since we are still early in the era of public availability of AI tools, organizations can establish safeguards and defenses before AI-enhanced threats become widespread. However, that window of opportunity won't last forever.
AI could be used to power social engineering attacks and to create manipulated images and video for malicious purposes, and that threat will only become more pressing as the technology advances. This is why AI developers must prioritize the trust tools outlined in the White House's voluntary commitments.
But while the threats are real, it's also essential to recognize the positive potential of AI, especially when it's developed and deployed securely. AI is already transforming how people learn and build new skills, and the responsible use of AI tools in both the public and private sectors can significantly improve worker efficiency and outcomes for end users.
Google has been working with US government agencies and related organizations to securely deploy AI in ways that advance key national priorities. AI can help improve access to healthcare, responding to patient questions by drawing on a knowledge base built from disparate data sets. AI also has the potential to revolutionize civic engagement, automatically summarizing relevant information from meetings and providing constituents with answers in clear language.
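To make that pattern concrete, here is a minimal sketch of knowledge-base-grounded question answering of the kind described above. Everything in it is hypothetical: the sample passages, function names, and keyword-overlap scoring are simple stand-ins for the embeddings, access controls, and production models a real deployment would use.

```python
# Minimal sketch of retrieval-grounded question answering over a small
# knowledge base. All passages, names, and the scoring heuristic are
# hypothetical illustrations, not a description of any real system.

from collections import Counter

# Toy "knowledge base" assembled from disparate sources.
KNOWLEDGE_BASE = [
    {"source": "benefits_faq", "text": "Medicaid renewal forms are due within 30 days of the notice."},
    {"source": "clinic_hours", "text": "The community clinic is open weekdays from 8am to 6pm."},
    {"source": "vaccine_info", "text": "Flu vaccines are available without an appointment at all locations."},
]

def score(question: str, passage: str) -> int:
    """Count overlapping words between the question and a passage."""
    q_words = Counter(question.lower().split())
    p_words = Counter(passage.lower().split())
    return sum((q_words & p_words).values())

def retrieve(question: str, top_k: int = 1) -> list[dict]:
    """Return the top_k passages most relevant to the question."""
    ranked = sorted(KNOWLEDGE_BASE, key=lambda d: score(question, d["text"]), reverse=True)
    return ranked[:top_k]

def answer(question: str) -> str:
    """Compose a response only from retrieved passages, citing the source."""
    hits = retrieve(question)
    if not hits or score(question, hits[0]["text"]) == 0:
        return "No grounded answer found; route to a human agent."
    best = hits[0]
    return f"{best['text']} (source: {best['source']})"

if __name__ == "__main__":
    print(answer("When is the clinic open?"))
```

The design point worth noting is that the answer is composed only from retrieved, cited material and falls back to a human when nothing matches, which is what makes this style of system auditable and appropriate for public-sector use.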
Three Key Building Blocks for Secure AI
At the October AI forum, Google presented three key organizational building blocks to maximize the benefits of AI tools in the US.
First, it's essential to understand how threat actors currently use AI capabilities and how those uses are likely to evolve. As Mandiant has identified, threat actors will likely use AI technologies in two significant ways: "the efficient scaling of activity beyond the actors' inherent means; and their ability to produce realistic fabricated content toward deceptive ends." Keeping those risks in mind will help tech and government leaders prioritize research and the development of mitigation techniques.
Second, organizations should deploy secure AI systems. This can be achieved by following guidelines such as the White House's recommendations and Google's Secure AI Framework (SAIF). SAIF comprises six core elements, among them deploying automated security measures and creating faster feedback loops for AI development.
Finally, security leaders should take advantage of all the ways AI can enhance security itself. AI technologies can simplify security tools and controls while making them faster and more effective, all of which will help defend against the potential increase in adversarial attacks that AI systems may enable.
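As one illustration of that idea, the sketch below shows how model-driven prioritization could slot into an alert triage queue. The alert schema and scoring heuristic are entirely hypothetical: the heuristic stands in for a trained scoring model, and in practice the alerts would come from a real SIEM pipeline.

```python
# Minimal sketch of AI-assisted alert triage. The Alert schema and the
# priority() heuristic are hypothetical stand-ins for a trained model;
# real alerts would be ingested from a SIEM, not hard-coded.

from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    description: str
    failed_logins: int = 0
    known_bad_indicator: bool = False

def priority(alert: Alert) -> int:
    """Simple stand-in for a learned scoring model: higher is more urgent."""
    score = 0
    if alert.known_bad_indicator:
        score += 10
    if alert.failed_logins > 20:
        score += 5
    return score

def triage(alerts: list[Alert]) -> list[tuple[int, Alert]]:
    """Rank alerts so analysts see the most urgent ones first."""
    return sorted(((priority(a), a) for a in alerts), key=lambda p: p[0], reverse=True)

if __name__ == "__main__":
    queue = [
        Alert("auth", "Burst of failed logins from one IP", failed_logins=42),
        Alert("edr", "File hash matches a known-bad indicator", known_bad_indicator=True),
        Alert("dns", "Unusual but low-volume lookup pattern"),
    ]
    for rank, alert in triage(queue):
        print(rank, alert.source, "-", alert.description)
```

Even in this toy form, the pattern shows the payoff: ranking and summarization shrink the pile of raw alerts an analyst must wade through, which is precisely the kind of defender advantage the building block describes.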
These three building blocks can form the basis for the secure, effective implementation of AI technologies across American society. By encouraging AI development leaders and government officials to keep working together, we will all benefit from the enhancements that safe and trustworthy AI systems will bring to the public and private sectors.
Read more Partner Perspectives from Google Cloud.