NIST Releases Artificial Intelligence Risk Management Framework (AI RMF)
On January 26, 2023, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) unveiled the Artificial Intelligence Risk Management Framework (AI RMF 1.0). This guidance, crafted for organizations involved in the design, development, deployment, or utilization of AI systems, aims to address the multifaceted risks associated with AI technologies.
Developed in collaboration with both the private and public sectors, the AI RMF is a response to a directive from Congress. It’s designed to evolve alongside the AI landscape, ensuring that as technologies progress, the framework remains relevant. Its overarching goal is to ensure that society can harness the advantages of AI technologies while being safeguarded from potential detriments.
Deputy Commerce Secretary Don Graves emphasized the framework’s voluntary nature, stating its potential to bolster AI trustworthiness in alignment with democratic values. He believes it can foster AI innovation and growth without compromising civil rights, liberties, and equity.
AI, unlike traditional software, presents unique challenges. Its data-driven nature means it can evolve unpredictably, and its outcomes can be influenced by societal dynamics, making it “socio-technical.” These complexities can manifest in various scenarios, from online interactions to critical decisions in job or loan applications.
NIST’s framework encourages a shift in organizational culture, prompting entities to reevaluate their approach to AI. It champions a holistic perspective on AI risks, emphasizing the importance of understanding, communicating, measuring, and monitoring both its potential benefits and drawbacks.
Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio highlighted the framework's broad applicability, stating that it can benefit organizations of all sizes and across all sectors. She envisions the AI RMF as a catalyst for the development of best practices and standards in AI.
The AI RMF is structured into two parts. The first part describes how organizations can frame AI-related risks and outlines the characteristics of trustworthy AI systems. The second part, the framework's core, introduces four functions: govern, map, measure, and manage. These functions are designed to be adaptable, catering to specific use cases and to any stage of the AI lifecycle.
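To make that structure concrete, here is a minimal sketch, in Python, of one way an organization might track its own work against the four core functions across a lifecycle stage. The record layout, field names, and example entries are illustrative assumptions, not part of NIST's guidance.

```python
from dataclasses import dataclass, field

# The AI RMF's four core functions (from NIST's framework).
AI_RMF_FUNCTIONS = ("govern", "map", "measure", "manage")

# Hypothetical internal record for tracking activities and open risks
# under each function; the fields below are assumptions for illustration.
@dataclass
class FunctionStatus:
    function: str                                    # one of AI_RMF_FUNCTIONS
    lifecycle_stage: str                             # e.g. "design", "development", "deployment"
    activities: list = field(default_factory=list)   # actions taken under this function
    open_risks: list = field(default_factory=list)   # risks identified but not yet addressed

def new_tracker(lifecycle_stage: str) -> dict:
    """Create an empty tracker with one entry per AI RMF core function."""
    return {name: FunctionStatus(name, lifecycle_stage) for name in AI_RMF_FUNCTIONS}

if __name__ == "__main__":
    tracker = new_tracker("deployment")
    tracker["map"].open_risks.append("unvalidated training data source")
    tracker["govern"].activities.append("assign accountable owner for model updates")
    for name, status in tracker.items():
        print(f"{name}: {len(status.activities)} activities, {len(status.open_risks)} open risks")
```

Because the functions are meant to be revisited throughout the AI lifecycle rather than completed once, the sketch keys each record to a lifecycle stage.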
After an 18-month development process, during which NIST worked closely with the private and public sectors and considered feedback from more than 240 organizations, the AI RMF was finalized. Alongside the framework, NIST released a voluntary AI RMF Playbook that offers guidance on navigating and implementing it.
NIST has expressed its intention to periodically update the framework, inviting feedback from the AI community.