The following is part of our annual publication Selected Issues for Boards of Directors in 2026. Explore all topics or download the PDF.


Over the last nine years, during which I served as Cleary Gottlieb’s Managing Partner, there have been significant, often unexpected, changes in global politics, global markets and the legal industry.

This is the final part of our four-part series on the EUIPO study on GenAI and copyright. Read parts 1, 2, and 3.

The EUIPO study provides detailed insights into the evolving relationship between GenAI and copyright law, highlighting both the complex challenges and emerging solutions in this rapidly developing field. As discussed in the previous parts of this series, the study addresses crucial issues at both the training (input) and deployment (output) stages of GenAI systems.

This is the third part of our four-part series on the EUIPO study on GenAI and copyright. Read parts 1, 2, and 4.

This installment offers four key takeaways on GenAI output, highlighting critical issues around retrieval-augmented generation (RAG), transparency solutions, copyright retention concerns and emerging technical remedies.

This is the second part of our four-part series on the EUIPO study on GenAI and copyright. Read parts 1, 3, and 4.

In this second part of our four-part series exploring the EUIPO study on GenAI and copyright, we set out our key takeaways regarding GenAI inputs, including findings on the evolving interpretation of the legal text and data mining (TDM) rights reservation regime and existing opt-out measures.

Last week a Georgia state court granted summary judgment in favor of OpenAI, ending a closely watched defamation lawsuit over false information—sometimes called “hallucinations”—generated by its generative AI product, ChatGPT.  The plaintiff, Mark Walters, is a nationally syndicated radio host and prominent gun rights advocate who sued OpenAI after ChatGPT produced output incorrectly stating that he had been accused of embezzlement in a lawsuit filed by the Second Amendment Foundation (“SAF”).  Walters is not, and never was, a party to that case. 

The following is part of our annual publication Selected Issues for Boards of Directors in 2025. Explore all topics or download the PDF.


Deployment of generative AI expanded rapidly across many industries in 2024, leading to broadly increased productivity, return on investment and other benefits. At the same time, AI was also a focus for lawmakers, regulators and courts. There are currently 27 active generative AI litigation cases in the U.S., nearly all of which involve copyright claims. Numerous state legislatures have mulled AI regulation, and Colorado became the first and only state thus far to pass a law creating a broad set of obligations for certain developers and deployers of AI.

On July 26th, the National Institute of Standards and Technology (“NIST”) released its Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (the “Profile”),[1] laying out more than 200 suggested actions to mitigate the risks of generative artificial intelligence (“Gen AI”).  This Profile is a companion to NIST’s Artificial Intelligence Risk Management Framework (the “Framework”), which was released in January of 2023.[2]  The Framework aims to act as a resource for entities dealing with all manner of Gen AI systems to help them manage risks and promote trustworthy and responsible development of AI.  The Profile is intended to be an implementation of the Framework, providing concrete steps to manage AI risks.  

Late last month, the Department of Commerce’s National Institute of Standards and Technology (“NIST”) released four draft publications regarding actions taken by the agency following President Biden’s executive order on AI (the “Order”; see our prior alert here)[1] and its call for action within six months of the Order.  Adding to NIST’s mounting portfolio of AI-related guidance, these publications reflect months of research focused on identifying risks associated with the use of artificial intelligence (“AI”) systems and promoting the central goal of the Order: improving the safety, security and trustworthiness of AI.  The four draft documents, further described below, are titled: