NIST AI Risk Management Framework Aims to Improve Trustworthiness

NIST today released its Artificial Intelligence Risk Management Framework (AI RMF 1.0), a guidance document for voluntary use by organizations designing, developing, deploying or using AI systems to help manage the risks of AI technologies. The Framework seeks to cultivate trust in AI technologies and promote AI innovation while mitigating risk. The AI RMF follows a direction from Congress for NIST to develop the framework and was produced in close collaboration with the private and public sectors over the past 18 months.

AI RMF 1.0 was released at a livestreamed event today with Deputy Secretary of Commerce Don Graves, Under Secretary for Technology and Standards and NIST Director Laurie Locascio, Principal Deputy Director for Science and Society in the White House Office of Science and Technology Policy Alondra Nelson, House Science, Space, and Technology Chairman Frank Lucas and Ranking Member Zoe Lofgren, and panelists representing businesses and civil society. A recording of the event is available here.

NIST also released today, for public comment, a companion voluntary AI RMF Playbook, which suggests ways to navigate and use the framework; a Roadmap for future work to enhance the Framework and its use; and the first two AI RMF 1.0 crosswalks, with key AI standards and with U.S. and EU documents.

NIST plans to work with the AI community to update the framework periodically and welcomes suggestions for additions and improvements to the Playbook at any time. Comments received through February 2023 will be included in an updated version of the Playbook to be released in spring 2023.

Sign up to receive email notifications about NIST’s AI activities, or contact us at: Also, see information about how to engage in NIST’s broader AI activities.

NIST Information Technology Laboratory