NIST Requests Information to Help Develop an AI Risk Management Framework


News


[Illustration for CAMEO AI: a silhouette of a human head containing icons for computer networks.]

As a key step in its effort to manage the risks posed by artificial intelligence (AI), the U.S. Department of Commerce's National Institute of Standards and Technology (NIST) is requesting input from the public that will inform the development of AI risk management guidance.

Responses to the Request for Information (RFI), which appears today in the Federal Register, will help NIST draft an Artificial Intelligence Risk Management Framework (AI RMF), a voluntary guidance document intended to help technology developers, users and evaluators improve the trustworthiness of AI systems. The draft AI RMF responds to a directive from Congress for NIST to develop the framework, and it also forms part of NIST's response to the Executive Order on Maintaining American Leadership in AI.

Read More

IN CASE YOU MISSED IT

NIST Proposes Method for Evaluating User Trust in Artificial Intelligence Systems

May 19, 2021
Every time you speak to a virtual assistant on your smartphone, you are talking to an artificial intelligence — an AI that can, for example, learn your taste in music and make song recommendations that improve based on your interactions. However, AI also assists us with far riskier activities, such as helping doctors diagnose cancer. These are two very different scenarios, but the same issue permeates both: How do we humans decide whether or not to trust a machine's recommendations?

Read More