NIST Sets Draft Guidelines for Government AI
This is the first formal step in writing the standards that will guide the implementation of AI technologies within the federal government.
July 9, 2019
The National Institute of Standards and Technology (NIST) has issued a draft guideline for developing artificial intelligence (AI) technical standards, the first major, formal step in writing the standards that will guide the procurement and implementation of AI and machine learning technologies within the federal government. And because many private organizations base their decisions on NIST documents, those standards could have repercussions that reach far beyond government purchasing.
The draft guideline covers a wide range of AI topics, including how AI applications are developed, how AI is explained to stakeholders and the public, and how AI applications are used. Security figures into several aspects of the proposal, from building "trustworthy" AI applications to ensuring that AI deployments account for both security and privacy.
The NIST guideline was developed in response to the American AI Initiative, established by executive order in February. Among the five key areas of emphasis set out in the order, one calls for NIST "to lead the development of appropriate technical standards for reliable, robust, trustworthy, secure, portable, and interoperable AI systems." Formal comments on the draft are being accepted through July 19.