Security Management

9/16/2019
07:00 AM
Larry Loeb

NIST Tackles AI

But to prepare for something usually means you have an idea about what you are preparing for, no?

On February 11, 2019, President Donald J. Trump issued the Executive Order on Maintaining American Leadership in Artificial Intelligence. It was crafted to push NIST out of its "safe space" and into the AI standards arena.

The EO specifically directs NIST to create "a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies."

At that time, NIST said, "NIST will work with other federal agencies to support the EO's principles of increasing federal investment in AI research and development, expanding access to data and computing resources, preparing the American workforce for the changes AI will bring, and protecting the United States' advantage in AI technologies."

Wait, what?

NIST will prepare the American workforce for the "changes AI will bring"? To prepare for something usually means you have an idea about what you are preparing for, no?

A threat model, so to speak. How -- exactly -- does NIST know what those AI-caused changes will be? You know, so that it can prepare for them.

Well, they just issued a plan they call "U.S. LEADERSHIP IN AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools." They say that it was prepared with broad public and private sector input.

The plan brings up the concept of trustworthiness as part of the standards process. NIST sees the need for trustworthiness standards that include "guidance and requirements for accuracy, explainability, resiliency, safety, reliability, objectivity, and security."
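To see what those dimensions could mean in practice, here is a minimal sketch -- my own illustration, not anything drawn from the NIST plan -- of how two of them, accuracy and explainability, might be turned into automated checks in a model-evaluation harness. The scikit-learn dataset, the classifier, and the 0.90 accuracy threshold are all assumptions chosen for the example.

```python
# A sketch of trustworthiness checks; dataset and thresholds are arbitrary.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in data and model; a real evaluation would use the system under test.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Accuracy: hold the model to a minimum out-of-sample score.
# The 0.90 floor is an assumption for illustration only.
accuracy = accuracy_score(y_test, model.predict(X_test))
assert accuracy >= 0.90, f"accuracy {accuracy:.2f} below threshold"

# Explainability (crudely): verify predictions can be attributed to
# identifiable input features rather than being opaque to inspection.
result = permutation_importance(model, X_test, y_test, random_state=0)
top_features = result.importances_mean.argsort()[::-1][:3]
print(f"accuracy={accuracy:.3f}, most influential features={top_features.tolist()}")
```

A real standard would have to pin down such measures and thresholds far more rigorously; the point is only that terms like "accuracy" and "explainability" can, in principle, become checkable properties.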

The plan recognizes that standards imposed by fiat won't actually do much. As it puts it, "Standards should be complemented by related tools to advance the development and adoption of effective, reliable, robust, and trustworthy AI technologies."

A real tool that people can use might well help adoption of a standard. But describing one potential area of interest as "Tools for capturing and representing knowledge, and reasoning in AI systems" is so broad in scope that it offers no implementation detail.

Federal agency adoption of standards is planned for as well. "The plan," it says, "provides a series of practical steps for agencies to take as they decide about engaging in AI standards. It groups potential agency involvement into four categories ranked from least- to most-engaged: monitoring, participation, influencing, and leading."
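Since those four categories form an explicit ranking, a tiny sketch (again my own illustration, nothing from the plan itself) can encode the least-to-most-engaged ordering as a comparable type:

```python
# The plan's four agency-engagement categories, encoded as an ordered type
# so that "least- to most-engaged" is directly comparable. Illustration only.
from enum import IntEnum

class Engagement(IntEnum):
    MONITORING = 1     # track standards activity without a direct role
    PARTICIPATION = 2  # attend and contribute to standards bodies
    INFLUENCING = 3    # work to shape specific standards outcomes
    LEADING = 4        # convene or drive a standards effort

# The ranking falls out of the encoding: postures can be compared or sorted.
assert Engagement.MONITORING < Engagement.LEADING
print(sorted(Engagement))  # least- to most-engaged
```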

Some of the plan's recommendations are more specific. One stated goal is to "Promote focused research to advance and accelerate broader exploration and understanding of how aspects of trustworthiness can be practically incorporated within standards and standards-related tools."

There's that trustworthiness word again. If you still need research to figure out how to get it, it doesn't seem that you have it right now. Maybe NIST is promoting the idea that a demonstration of trust (or a token of trust) should be mandatory, rather than optional.

How individual agencies will actually implement the plan's broad brushstrokes remains to be seen. The topic is important as the field moves forward, and the need for common efforts is great.

— Larry Loeb has written for many of the last century's major "dead tree" computer magazines, having been, among other things, a consulting editor for BYTE magazine and senior editor for the launch of WebWeek.
