On July 26th, the National Institute of Standards and Technology (“NIST”) released its Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (the “Profile”),[1] laying out more than 200 suggested actions to mitigate the risks of generative artificial intelligence (“Gen AI”). The Profile is a companion to NIST’s Artificial Intelligence Risk Management Framework (the “Framework”), released in January 2023.[2] The Framework is intended as a resource for entities dealing with all manner of AI systems, helping them manage risks and promote trustworthy and responsible development of AI. The Profile is intended as an implementation of the Framework, providing concrete steps to manage Gen AI risks.
NIST Delivers Draft Standards on AI and Launches GenAI Evaluation Program in Furtherance of President Biden’s Executive Order on AI
Late last month, the Department of Commerce’s National Institute of Standards and Technology (“NIST”) released four draft publications regarding actions taken by the agency in response to President Biden’s executive order on AI (the “Order”; see our prior alert here)[1] and its call for action within six months of the Order. Adding to NIST’s mounting portfolio of AI-related guidance, these publications reflect months of research focused on identifying risks associated with the use of artificial intelligence (“AI”) systems and promoting the central goal of the Order: improving the safety, security and trustworthiness of AI. The four draft documents, further described below, are titled:
Court Dismisses Most Claims in Authors’ Lawsuit Against OpenAI
This week saw yet another California federal court dismiss copyright and related claims arising out of the training and output of a generative AI model in Tremblay v. OpenAI, Inc.,[1] a putative class action filed on behalf of a group of authors alleging that OpenAI infringed their copyrighted literary works by using them to train ChatGPT.[2] OpenAI moved to dismiss all claims against it, save the claim for direct copyright infringement, and the court largely sided with OpenAI.
Training AI models on Synthetic Data: No silver bullet for IP infringement risk in the context of training AI systems (Part 4 of 4)
The UK ICO launches consultation series on GenAI
On 15 January 2024, the UK Information Commissioner’s Office (“ICO”)[1] launched a series of public consultations on the applicability of data protection laws to the development and use of generative artificial intelligence (“GenAI”). The ICO is seeking comments from “all stakeholders with an interest in GenAI”, including developers, users, legal advisors and consultants.[2]
Training AI models on Synthetic Data: No silver bullet for IP infringement risk in the context of training AI systems (Part 3 of 4)
This third part of our four-part series on using synthetic data to train AI models explores the interplay between synthetic data training sets, the EU Copyright Directive and the forthcoming EU AI Act.
Training AI models on Synthetic Data: No silver bullet for IP infringement risk in the context of training AI systems (Part 2 of 4)
This second part of our four-part series on using synthetic data to train AI models explores how the use of synthetic data training sets may mitigate copyright infringement risks under EU law.
Training AI models on Synthetic Data: No silver bullet for IP infringement risk in the context of training AI systems (Part 1 of 4)
This is the first part of our four-part series on using synthetic data to train AI models. See here for Parts 2, 3, and 4.
The recent rapid advancements in Artificial Intelligence (“AI”) have revolutionized how content is created and how machines learn. Generative AI (“GenAI”) systems have unveiled unprecedented capabilities, pushing the boundaries of what we thought possible. Yet, beneath the surface of AI’s transformative potential lies a complex web of intellectual property (“IP”) risks, particularly concerning the use of “real-world” training data, which may give rise to alleged infringement of third-party IP rights if that data is not appropriately sourced.
White House Unveils Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
On October 30, 2023, the Biden Administration issued a landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “Order”), directing the establishment of new standards for artificial intelligence (“AI”) safety and security and laying the foundation for protecting Americans’ privacy and civil rights, supporting American workers, and promoting responsible innovation, competition and collaboration, all while advancing America’s role as a world leader in AI.
Copyright Office Considers New DMCA Carveout for AI Anti-Bias Research
On October 19, 2023, the U.S. Copyright Office announced in the Federal Register that it will consider a proposed exemption to the Digital Millennium Copyright Act’s (“DMCA”) anti-circumvention provisions, which prohibit the circumvention of technological measures used to prevent unauthorized access to copyrighted works. The exemption would allow those researching bias in artificial intelligence (“AI”) to bypass technological measures that limit the use of copyrighted generative AI models.