As reported recently by Federal News Network (NIST framework looks to accelerate AI innovation while reducing risk), the National Institute of Standards and Technology (NIST) has offered guidance related to the “responsible use of artificial intelligence tools” for many U.S. industries.
NIST, whose mission is to promote American innovation and industrial competitiveness, released its long-awaited AI Risk Management Framework on Jan. 26. The article highlights that the framework offers public and private-sector organizations new criteria for maximizing the reliability and trustworthiness of AI tools, and is the latest in a series of recent federal policies meant to regulate an emerging technology that is rapidly evolving but fraught with challenges.
“AI technologies have significant potential to transform individual lives and even our society,” NIST Director Laurie Locascio said at the framework’s launch event. “They can bring positive changes to our commerce and our health, our transportation and our cybersecurity. The AI RMF will help numerous organizations that have developed and committed to AI principles to convert those principles into practice.”
The new guidance was met with broad bipartisan support, with Rep. Zoe Lofgren, D-Calif., and Rep. Frank Lucas, R-Okla., both sending laudatory messages for the launch event.
“The framework can maximize the benefits and reduce the likelihood of any degree of harm that these technologies may bring,” Lofgren said.
The guidance identifies several criteria for organizations to consider when determining the trustworthiness of AI algorithms, including reliability, safety, security, privacy protection, and the mitigation of harmful bias.
NIST also released a companion voluntary AI RMF Playbook, which suggests ways to navigate and use the framework. The agency is also looking for feedback from organizations that adopt the guidelines. Suggestions may be sent to AIFramework@nist.gov.