Threat Intelligence

2/13/2019 11:20 AM
Dark Reading
Products and Releases

New Report: Toward AI Security: Global Aspirations for a More Resilient Future

Research from the Center for Long-Term Cybersecurity provides comparative analysis of different nations' strategic plans for artificial intelligence.

BERKELEY, CALIFORNIA—The Center for Long-Term Cybersecurity (CLTC), a research and collaboration hub at the University of California, Berkeley, has issued a new report that presents a novel framework for navigating the complex landscape of artificial intelligence (AI) security. The report, “Toward AI Security: Global Aspirations for a More Resilient Future,” authored by CLTC Research Fellow Jessica Cussins Newman, provides a comparative analysis of emerging AI strategies and policies from ten countries: Canada, China, France, India, Japan, Singapore, South Korea, the United Arab Emirates, the United Kingdom, and the United States.

“Artificial intelligence may be the most important global issue of the 21st century, and how we navigate the security implications of AI could dramatically shape the future,” Cussins Newman wrote in an introduction to the report. “This report uses the lens of global AI security to investigate the robustness and resiliency of AI systems, as well as the social, political, and economic systems with which AI interacts.”

Recent years have seen a significant increase in government attention to AI: at least 27 national governments have articulated plans or initiatives for encouraging and managing the development of AI technologies. “Toward AI Security” uses a structured framework—referred to as the AI Security Map—to organize the dimensions of security in which AI presents threats and opportunities, including the digital/physical, political, economic, and social domains. The map helps structure key topics relevant to AI security and serves as a tool for comparing different nations’ strategic plans.

“Nations thus far have adopted highly divergent approaches in their AI policies, and there is significant variation in how they are preparing for security threats and opportunities,” Cussins Newman wrote. “For example, only half the strategies surveyed discuss the need for reliable AI systems that are robust against cyberattacks, and only two mention challenges associated with the rise of disinformation and manipulation online.” Other notable findings detailed in the report include:

·    Some governments—including those of France, India, and South Korea—are leading the way in acknowledging and preparing for the breadth of disruption likely to result from AI in the future.

·    Only two priorities are shared by all ten of the countries surveyed: promoting AI research and development, and updating training and education resources.

·    Countries have many additional opportunities to coordinate AI security strategies. For example, most countries are trying to address transparency and accountability of AI, as well as privacy, data rights, and ethics. Most countries also prioritize public-private partnerships and call for improving digital infrastructure and government expertise in AI.

·    The United States and China share many priorities for advancing AI, including international collaboration; transparency and accountability; updating training and educational resources; public-private partnerships and collaboration; creating reliable AI systems; and promoting the responsible and ethical use of AI in the military.

·    Critical gaps in leadership remain around key issues. For example, only two (or fewer) national strategies address inequality, human rights, disinformation and manipulation, and checks against surveillance, control, and abuse of power.

Based on the analysis of the gaps and opportunities in national AI strategies and policies, the report provides a set of five recommendations for policymakers, including 1) facilitating early global coordination around common interests; 2) using government spending to establish best practices; 3) investigating what may be left “on the table”; 4) holding the technology industry accountable; and 5) integrating multi-disciplinary and community input.

“The steps nations take now will shape AI trajectories well into the future,” Cussins Newman wrote, “and those governments working to develop thoughtful strategies that incorporate global and multistakeholder coordination will have an advantage in establishing the international AI agenda and creating a more resilient future.”

For more information—and to download the report—visit https://cltc.berkeley.edu/TowardAISecurity.

Media Contact:
Matthew Nagamine
[email protected]
1-510-664-7506

About the UC Berkeley Center for Long-Term Cybersecurity

The Center for Long-Term Cybersecurity (CLTC) is a research and collaboration hub housed at UC Berkeley’s School of Information. Founded in 2015 with a generous starting grant from the Hewlett Foundation, the center seeks to create effective dialogue among industry, academia, policymakers, and practitioners around a future-oriented conceptualization of cybersecurity — what it could imply and mean for human beings, machines, and the societies that will depend on both. CLTC serves as an important resource for students and faculty interested in cybersecurity and is committed to bringing cybersecurity practitioners and scholars to campus for an ongoing dialogue. Learn more at https://cltc.berkeley.edu.
