Dark Reading is part of the Informa Tech Division of Informa PLC

This site is operated by a business or businesses owned by Informa PLC and all copyright resides with them. Informa PLC's registered office is 5 Howick Place, London SW1P 1WG. Registered in England and Wales. Number 8860726.

Threat Intelligence

2/13/2019
11:20 AM
Dark Reading
Products and Releases

New Report: Toward AI Security: Global Aspirations for a More Resilient Future

Research from the Center for Long-Term Cybersecurity provides comparative analysis of different nations' strategic plans for artificial intelligence.

BERKELEY, CALIFORNIA—The Center for Long-Term Cybersecurity (CLTC), a research and collaboration hub at the University of California, Berkeley, has issued a new report that presents a novel framework for navigating the complex landscape of artificial intelligence (AI) security. The report, “Toward AI Security: Global Aspirations for a More Resilient Future,” authored by CLTC Research Fellow Jessica Cussins Newman, provides a comparative analysis of emerging AI strategies and policies from ten countries: Canada, China, France, India, Japan, Singapore, South Korea, the United Arab Emirates, the United Kingdom, and the United States.

“Artificial intelligence may be the most important global issue of the 21st century, and how we navigate the security implications of AI could dramatically shape the future,” Cussins Newman wrote in an introduction to the report. “This report uses the lens of global AI security to investigate the robustness and resiliency of AI systems, as well as the social, political, and economic systems with which AI interacts.”

Recent years have seen a significant increase in government attention to AI: at least 27 national governments have articulated plans or initiatives for encouraging and managing the development of AI technologies. “Toward AI Security” uses a structured framework—referred to as the AI Security Map—to organize the dimensions of security in which AI presents threats and opportunities, spanning the digital/physical, political, economic, and social domains. The map structures the key topics relevant to AI security and serves as a tool for comparing different nations’ strategic plans.

“Nations thus far have adopted highly divergent approaches in their AI policies, and there is significant variation in how they are preparing for security threats and opportunities,” Cussins Newman wrote. “For example, only half the strategies surveyed discuss the need for reliable AI systems that are robust against cyberattacks, and only two mention challenges associated with the rise of disinformation and manipulation online.” Other notable findings detailed in the report include:

·    Some governments—including those of France, India, and South Korea—are leading the way in acknowledging and preparing for the breadth of disruption likely to result from AI in the future.

·    Only two priorities are shared by all ten of the countries surveyed: promoting AI research and development, and updating training and education resources.

·    Countries have many additional opportunities to coordinate AI security strategies. For example, most countries are trying to address transparency and accountability of AI as well as privacy, data rights, and ethics. Most countries also prioritize public-private partnerships and call for improving digital infrastructure and government expertise in AI.

·    The United States and China share many priorities for advancing AI, including international collaboration; transparency and accountability; updating training and educational resources; public-private partnerships and collaboration; creating reliable AI systems; and promoting the responsible and ethical use of AI in the military.

·    Critical gaps in leadership remain around key issues. For example, only two (or fewer) national strategies address inequality, human rights, disinformation and manipulation, and checks against surveillance, control, and abuse of power.

Based on the analysis of the gaps and opportunities in national AI strategies and policies, the report provides a set of five recommendations for policymakers, including 1) facilitating early global coordination around common interests; 2) using government spending to establish best practices; 3) investigating what may be left “on the table”; 4) holding the technology industry accountable; and 5) integrating multi-disciplinary and community input.

“The steps nations take now will shape AI trajectories well into the future,” Cussins Newman wrote, “and those governments working to develop thoughtful strategies that incorporate global and multistakeholder coordination will have an advantage in establishing the international AI agenda and creating a more resilient future.”

For more information—and to download the report—visit https://cltc.berkeley.edu/TowardAISecurity.

Media Contact:
Matthew Nagamine
[email protected]
1-510-664-7506

About the UC Berkeley Center for Long-Term Cybersecurity

The Center for Long-Term Cybersecurity (CLTC) is a research and collaboration hub housed at UC Berkeley’s I School. Founded with a generous starting grant from the Hewlett Foundation in 2015, the center seeks to create effective dialogue among industry, academia, policymakers, and practitioners around a future-oriented conceptualization of cybersecurity — what it could imply and mean for human beings, machines, and the societies that will depend on both. The CLTC serves as an important resource for students and faculty interested in cybersecurity and is committed to bringing practitioners and scholars to campus for an ongoing dialogue on the topic. Learn more at https://cltc.berkeley.edu.
