Dark Reading is part of the Informa Tech Division of Informa PLC


Security Management

9/16/2019
07:00 AM
Larry Loeb

NIST Tackles AI

But to prepare for something usually means you have an idea about what you are preparing for, no?

On February 11, 2019, President Donald J. Trump issued the Executive Order on Maintaining American Leadership in Artificial Intelligence. It was crafted to push NIST out of its "safe space" and into the AI standards arena.

The EO specifically directs NIST to create "a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies."

At that time, NIST said, "NIST will work with other federal agencies to support the EO's principles of increasing federal investment in AI research and development, expanding access to data and computing resources, preparing the American workforce for the changes AI will bring, and protecting the United States' advantage in AI technologies."

Wait, what?

NIST will prepare the American workforce for the "changes AI will bring"? To prepare for something usually means you have an idea about what you are preparing for, no?

A threat model, so to speak. How -- exactly -- does NIST know what those AI-caused changes will be? You know, so that it can prepare for them.

Well, NIST just issued a plan it calls "U.S. LEADERSHIP IN AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools." NIST says the plan was prepared with broad public- and private-sector input.

The plan brings up the concept of trustworthiness as part of the standards process. NIST sees the need for trustworthiness standards that include "guidance and requirements for accuracy, explainability, resiliency, safety, reliability, objectivity, and security."

The plan acknowledges that fiat standards won't accomplish much on their own. As the plan puts it, "Standards should be complemented by related tools to advance the development and adoption of effective, reliable, robust, and trustworthy AI technologies."

A real tool that people can actually use might well help adoption of a standard, quite right. But describing one potential area of interest as "Tools for capturing and representing knowledge, and reasoning in AI systems" is so broad in scope that it lacks any implementation detail.

Federal agency adoption of standards is planned for as well. "The plan," it says, "provides a series of practical steps for agencies to take as they decide about engaging in AI standards. It groups potential agency involvement into four categories ranked from least- to most-engaged: monitoring, participation, influencing, and leading."

Some of the plan's recommendations are more specific. One goal is to "Promote focused research to advance and accelerate broader exploration and understanding of how aspects of trustworthiness can be practically incorporated within standards and standards-related tools."

There's that trustworthy word again. If you need the research to get it, it doesn't seem that you have it right now. Maybe NIST is promoting the idea that demonstration of trust (or a token of trust) should be mandatory, rather than optional.

How the broad brush strokes of the plan will actually be implemented by individual agencies remains to be seen. The topic is important as the field moves forward, and the need for common efforts is great.

— Larry Loeb has written for many of the last century's major "dead tree" computer magazines, having been, among other things, a consulting editor for BYTE magazine and senior editor for the launch of WebWeek.
