NIST Tackles AI
On February 11, 2019, President Donald J. Trump issued the Executive Order on Maintaining American Leadership in Artificial Intelligence. It was crafted to push NIST out of its "safe space" and into the AI standards arena.
The EO specifically directs NIST to create "a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies."
At that time, NIST said, "NIST will work with other federal agencies to support the EO's principles of increasing federal investment in AI research and development, expanding access to data and computing resources, preparing the American workforce for the changes AI will bring, and protecting the United States' advantage in AI technologies."
Wait, what?
NIST will prepare the American workforce for the "changes AI will bring"? To prepare for something usually means you have an idea about what you are preparing for, no?
A threat model, so to speak. How, exactly, does NIST know what those AI-caused changes will be? You know, so that it can prepare for them.
Well, NIST has just issued that plan, called "U.S. LEADERSHIP IN AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools," which the agency says was prepared with broad public- and private-sector input.
The plan introduces the concept of trustworthiness as part of the standards process. NIST sees the need for trustworthiness standards that include "guidance and requirements for accuracy, explainability, resiliency, safety, reliability, objectivity, and security."
The plan recognizes that fiat standards won't actually do much on their own. As the plan puts it, "Standards should be complemented by related tools to advance the development and adoption of effective, reliable, robust, and trustworthy AI technologies."
A real tool that people can use might well help the adoption of a standard, quite right. But describing one potential area of interest as "Tools for capturing and representing knowledge, and reasoning in AI systems" is so broad in scope that it offers no detail on how such a tool would actually be realized.

Federal agency adoption of standards is planned for as well. "The plan," it says, "provides a series of practical steps for agencies to take as they decide about engaging in AI standards. It groups potential agency involvement into four categories ranked from least- to most-engaged: monitoring, participation, influencing, and leading."
Some of the plan's recommendations are more specific. One goal is to "Promote focused research to advance and accelerate broader exploration and understanding of how aspects of trustworthiness can be practically incorporated within standards and standards-related tools."
There's that trustworthiness word again. If you need research to get it, it doesn't seem that you have it right now. Maybe NIST is promoting the idea that a demonstration of trust (or a token of trust) should be mandatory rather than optional.
How the broad brush strokes of the plan will actually be implemented by individual agencies remains to be seen. The topic is important as the field moves forward, and the need for common efforts is great.
— Larry Loeb has written for many of the last century's major "dead tree" computer magazines, having been, among other things, a consulting editor for BYTE magazine and senior editor for the launch of WebWeek.