On July 26, 2024, the National Institute of Standards and Technology (“NIST”) released its Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (the “Profile”),[1] laying out more than 200 suggested actions to mitigate the risks of generative artificial intelligence (“Gen AI”). The Profile is a companion to NIST’s Artificial Intelligence Risk Management Framework (the “Framework”), which was released in January 2023.[2] The Framework is intended as a resource for entities dealing with all manner of AI systems, helping them manage risks and promoting the trustworthy and responsible development of AI. The Profile is intended as an implementation of the Framework, providing concrete steps to manage the risks specific to Gen AI.
The Profile is part of the ever-growing body of AI-related guidance created by NIST. On the same day the Profile was published, NIST also published final versions of a resource on secure software development practices for Gen AI and a plan for global engagement in crafting AI standards. These publications, like the Profile, are based on draft publications released earlier this year, which you can read more about in our post here. NIST also released initial draft guidance on managing the risks of AI foundation models that are used across a broad range of tasks.
Like NIST’s other publications and guidance concerning AI and Gen AI, the Profile is a voluntary set of suggested actions that entities can implement, not a required list of rules. Given the number of suggested actions, it is clearly intended to be adapted and tailored to an entity’s particular use cases for Gen AI, including whether the entity is developing or deploying Gen AI (or both). The Profile focuses on twelve risks that are unique to, or exacerbated by, Gen AI:
- Chemical, Biological, Radiological or Nuclear Information or Capabilities,
- Confabulation,
- Dangerous, Violent or Hateful Content,
- Data Privacy,
- Environmental Impacts,
- Harmful Bias or Homogenization,
- Human-AI Configuration,
- Information Integrity,
- Information Security,
- Intellectual Property,
- Obscene, Degrading and/or Abusive Content, and
- Value Chain and Component Integration.
These risks cover a broad spectrum of potential concerns, from the carbon footprint of training Gen AI systems to the risk of human over-reliance on Gen AI. Unlike other AI guidance currently circulating at the state or federal level, the risks addressed in the Profile are not limited to specific subjects or industries. The Profile provides a short analysis of each risk, but the bulk of the report is devoted to suggested actions for managing the risks of Gen AI.
The Framework outlines four key functions for managing AI risks: govern, map, measure and manage. The Profile includes more than 200 suggested actions related to these functions; this post does not attempt to catalog them all, but offers some general observations about them.
First, a significant number of the suggested actions relate to the internal governance and oversight of Gen AI, particularly through establishing internal policies and testing procedures for both the development and deployment of Gen AI systems. As more companies seek to establish internal policies related to the development or use of AI, the Profile’s outline of policies and oversight could serve as a useful guide during the drafting process.
Second, a number of the suggested actions in the Profile emphasize the need for risk evaluation and continued testing of Gen AI systems, whether developed internally or licensed from third parties. The risks associated with Gen AI are challenging to estimate and will evolve with the technology, and the Profile makes clear that early diligence, continued assessment and risk-based adjustments will be necessary for the responsible use of Gen AI.
Third, the Profile encourages entities to solicit internal and external feedback at multiple stages of the AI development and use process, and to establish methods for implementing this feedback in Gen AI systems. This emphasis on feedback relates to the risk of harmful bias or homogenization, as the Profile prioritizes seeking diverse viewpoints and considering the potential individual and societal impacts related to Gen AI.
The NIST Framework and Profile are voluntary guidelines, though some companies have already signed on to the voluntary AI safety commitments secured by the Biden Administration, and the Administration’s Executive Order on AI points to the NIST Framework as an example of a mechanism through which companies can develop, advance and adopt shared standards for AI safety. As NIST continues to draft and issue guidance on AI, it will be interesting to see how entities begin to implement that guidance and whether NIST becomes the “gold standard” for managing AI risk.
[1] The Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile is available here.
[2] The Artificial Intelligence Risk Management Framework is available here.