AI RMF (AI Risk Management Framework)
The NIST AI Risk Management Framework (AI RMF 1.0) is a voluntary framework published by the U.S. National Institute of Standards and Technology in January 2023. It gives organizations a structured approach to identifying, assessing, and managing risks from artificial intelligence systems throughout their lifecycle.
The framework organizes AI risk management into four core functions: Govern (establish organizational policies and accountability), Map (identify AI system context and risks), Measure (assess and benchmark risks), and Manage (implement controls to mitigate risks). NIST extended the framework with a Generative AI Profile (NIST AI 600-1) in July 2024, which addresses 12 generative-AI-specific risk categories, including confabulation; data privacy; information security; dangerous, violent, or hateful content; and value chain and component integration.
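To make the structure concrete, the minimal sketch below models a risk-register entry tagged with one of the four core functions and a Generative AI Profile risk category. The class names, category identifiers, and example controls are illustrative assumptions for this entry, not a schema defined by NIST.

```python
from dataclasses import dataclass, field
from enum import Enum


class CoreFunction(Enum):
    """The four AI RMF core functions (NIST AI RMF 1.0)."""
    GOVERN = "govern"    # establish organizational policies and accountability
    MAP = "map"          # identify AI system context and risks
    MEASURE = "measure"  # assess and benchmark risks
    MANAGE = "manage"    # implement controls to mitigate risks


# A few of the 12 risk categories from the July 2024 Generative AI Profile
# (identifiers are illustrative shorthand, not NIST's official labels).
GENAI_RISKS = {
    "confabulation",
    "data_privacy",
    "information_security",
    "dangerous_violent_content",
    "value_chain_integration",
}


@dataclass
class RiskRegisterEntry:
    """One tracked risk for an AI system, tagged by AI RMF function."""
    system: str
    risk_category: str          # e.g. a Generative AI Profile category
    function: CoreFunction      # which core function the activity supports
    controls: list[str] = field(default_factory=list)

    def is_genai_profile_risk(self) -> bool:
        return self.risk_category in GENAI_RISKS


# Hypothetical example: a fraud-detection model's confabulation risk,
# handled under the Manage function.
entry = RiskRegisterEntry(
    system="fraud-detection-llm",
    risk_category="confabulation",
    function=CoreFunction.MANAGE,
    controls=["human review of flagged transactions", "output grounding checks"],
)
assert entry.is_genai_profile_risk()
```

In practice, mapping each tracked risk to a core function in this way makes it straightforward to report coverage across Govern, Map, Measure, and Manage when demonstrating alignment with the framework.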
AI RMF applies directly to financial institutions deploying AI in payments, credit decisions, fraud detection, and autonomous agent systems. Although voluntary in the U.S., the framework shapes regulatory expectations and industry standards globally: financial regulators increasingly expect institutions to demonstrate systematic AI risk management aligned with frameworks such as the NIST AI RMF and Singapore's FEAT Principles (Fairness, Ethics, Accountability and Transparency) when seeking approval for AI-driven financial services.