This is the first formal step in writing the standards that will guide the implementation of AI technologies within the federal government.

Dark Reading Staff, Dark Reading

July 9, 2019


The National Institute of Standards and Technology (NIST) has issued a draft guideline for developing artificial intelligence (AI) technical standards, the first major, formal step in writing the standards that will guide the procurement and implementation of AI and machine learning technologies within the federal government. And because many private organizations base their decisions on NIST documents, those standards could have repercussions that reach far beyond government purchasing.

The draft guideline contains sections covering a wide variety of AI topics, including how AI applications are developed, how AI is explained to stakeholders and the public, and how AI applications are used. Security plays a role in several aspects of the proposal, from how to build "trustworthy" AI applications to ensuring that deployments of AI take both security and privacy concerns into account.

The NIST guideline was developed as part of the response to the American AI Initiative, established by executive order in February. Among the five key areas of emphasis set out in the order, one calls for NIST "to lead the development of appropriate technical standards for reliable, robust, trustworthy, secure, portable, and interoperable AI systems." Formal comments on the draft are being accepted through July 19.

