New Report: Toward AI Security: Global Aspirations for a More Resilient Future
Research from the Center for Long-Term Cybersecurity provides comparative analysis of different nations’ strategic plans for artificial intelligence.
February 13, 2019
PRESS RELEASE
BERKELEY, CALIFORNIA—The Center for Long-Term Cybersecurity (CLTC), a research and collaboration hub at the University of California, Berkeley, has issued a new report that presents a novel framework for navigating the complex landscape of artificial intelligence (AI) security. The report, “Toward AI Security: Global Aspirations for a More Resilient Future,” authored by CLTC Research Fellow Jessica Cussins Newman, provides a comparative analysis of emerging AI strategies and policies from ten countries: Canada, China, France, India, Japan, Singapore, South Korea, the United Arab Emirates, the United Kingdom, and the United States.
“Artificial intelligence may be the most important global issue of the 21st century, and how we navigate the security implications of AI could dramatically shape the future,” Cussins Newman wrote in an introduction to the report. “This report uses the lens of global AI security to investigate the robustness and resiliency of AI systems, as well as the social, political, and economic systems with which AI interacts.”
Recent years have seen a significant increase in government attention to AI, as at least 27 national governments have articulated plans or initiatives for encouraging and managing the development of AI technologies. “Toward AI Security” uses a structured framework—referred to as the AI Security Map—to organize dimensions of security in which AI presents threats and opportunities, including the digital/physical, political, economic, and social domains. The map helps structure key topics relevant to AI security, and serves as a tool for comparing different nations’ strategic plans.
“Nations thus far have adopted highly divergent approaches in their AI policies, and there is significant variation in how they are preparing for security threats and opportunities,” Cussins Newman wrote. “For example, only half the strategies surveyed discuss the need for reliable AI systems that are robust against cyberattacks, and only two mention challenges associated with the rise of disinformation and manipulation online.” Other notable findings detailed in the report include:
· Some governments—including those of France, India, and South Korea—are leading the way in acknowledging and preparing for the breadth of disruption likely to result from AI.
· Only two priorities are shared by all ten of the countries surveyed: promoting AI research and development, and updating training and education resources.
· Countries have many additional opportunities to coordinate AI security strategies. For example, most countries are trying to address transparency and accountability of AI as well as privacy, data rights, and ethics. Most countries also prioritize public-private partnerships and call for improving digital infrastructure and government expertise in AI.
· The United States and China share many priorities for advancing AI, including international collaboration; transparency and accountability; updating training and educational resources; public-private partnerships and collaboration; creating reliable AI systems; and promoting the responsible and ethical use of AI in the military.
· Critical gaps in leadership remain around key issues. For example, only two (or fewer) national strategies address inequality, human rights, disinformation and manipulation, and checks against surveillance, control, and abuse of power.
Based on the analysis of the gaps and opportunities in national AI strategies and policies, the report provides a set of five recommendations for policymakers, including 1) facilitating early global coordination around common interests; 2) using government spending to establish best practices; 3) investigating what may be left “on the table”; 4) holding the technology industry accountable; and 5) integrating multi-disciplinary and community input.
“The steps nations take now will shape AI trajectories well into the future,” Cussins Newman wrote, “and those governments working to develop thoughtful strategies that incorporate global and multistakeholder coordination will have an advantage in establishing the international AI agenda and creating a more resilient future.”
For more information—and to download the report—visit https://cltc.berkeley.edu/TowardAISecurity.
Media Contact:
Matthew Nagamine
[email protected]
1-510-664-7506
About the UC Berkeley Center for Long-Term Cybersecurity
The Center for Long-Term Cybersecurity (CLTC) is a research and collaboration hub housed at UC Berkeley’s I School. Founded with a generous starting grant from the Hewlett Foundation in 2015, the center seeks to create effective dialogue among industry, academia, policymakers, and practitioners around a future-oriented conceptualization of cybersecurity — what it could imply and mean for human beings, machines, and the societies that will depend on both. CLTC serves as an important resource for students and faculty interested in cybersecurity and is committed to bringing practitioners and scholars to campus for an ongoing dialogue. Learn more at https://cltc.berkeley.edu.