The following is part of our annual publication Selected Issues for Boards of Directors in 2026. Explore all topics or download the PDF.


Over the last nine years, during which I served as Cleary Gottlieb’s Managing Partner, there have been significant, often unexpected, changes in global politics, global markets and the legal industry.

Overview of AI Copyright Litigation

In 2026, we can expect important developments in the legal landscape of generative AI and copyright. Dozens of copyright infringement lawsuits targeting the training and development of AI models—capable of generating text, images, video, music and more—are advancing toward dispositive rulings. The central issue remains whether training AI models using unlicensed copyrighted works is infringing or instead constitutes fair use under Section 107 of the U.S. Copyright Act. Courts consider four factors in determining whether a particular use is fair: (1) the purpose and character of the use, (2) the nature of the copyrighted work, (3) the amount and substantiality of the portion used and (4) the effect of the use upon the potential market for or value of the copyrighted work. The thrust of this inquiry is whether the use is transformative—serving a different purpose or function from the original work—or merely usurps the market for the original by reproducing its protected expression. As courts establish legal frameworks for AI training and protection of AI-generated outputs, companies and boards should closely monitor developments to fully understand the risks and opportunities of AI implementation.

AI adoption is now mainstream: 88% of businesses use AI in at least one function, with global spending expected to exceed $1.5 trillion in 2025 and approach $2 trillion in 2026. As organizations race to scale AI, many have relied upon traditional vendor risk management policies to vet third-party AI vendors and tools; however, implementation of third-party AI tools presents distinctive risks that require tailored due diligence, auditing, contracting and governance. Because businesses are accountable for outputs generated by third-party AI tools and for vendors’ processing of prompts and other business data, boards and management should ensure legal, IT and procurement teams apply a principled, risk-based approach to vendor management that addresses AI-specific considerations.

As states continue to grapple with establishing regulatory frameworks for the most powerful artificial intelligence (“AI”) systems, New York has joined California in targeting frontier AI models with the Responsible AI Safety and Education Act (the “RAISE Act” or the “Act”).[1] Signed into law on December 19, 2025 by Governor Hochul, the Act creates a comprehensive regulatory framework for developers of the most advanced AI systems, marking New York’s entry into the vanguard of state AI safety regulation.

This article was authored by Daniel Ilan, Rahul Mukhi, Prudence Buckland, and Melissa Faragasso from Cleary Gottlieb, and Brian Lichter and Elijah Seymour from Stroz Friedberg, a LevelBlue company.

Recent disclosures by Anthropic and OpenAI highlight a pivotal shift in the cyber threat landscape: AI is no longer merely a tool that aids attackers; in some cases, it has become the attacker itself. Together, these incidents illustrate immediate implications for corporate governance, contracting and security programs as companies integrate AI with their business systems. Below, we explain how these attacks were orchestrated and what steps businesses should consider given the rising cyber risks associated with the adoption of AI.

On November 4, 2025, the UK High Court handed down judgment in Getty Images v. Stability AI,[1] a case closely watched for its significance to content creators and the AI industry and for “the balance to be struck between the two warring factions”.[2] Despite significant public interest in the lawsuit, the issues that remained before the court in the “diminished”[3] case were limited (after Getty abandoned its primary infringement claims during trial). The judgment dismisses Getty’s remaining claims of secondary copyright infringement. While some claims of trademark infringement asserted by Getty were upheld, Justice Joanna Smith DBE acknowledged the findings were “extremely limited in scope”.[4]

On September 29, 2025, Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (TFAIA, SB 53 or the Act)[1], establishing a comprehensive framework for transparency, safety and accountability in the development and deployment of the most advanced artificial intelligence models. Building upon existing California laws targeting AI such as AB 2013[2], the Act, which takes effect January 1, 2026 and imposes penalties up to $1 million per violation, creates immediate compliance obligations for AI developers of the most powerful frontier models.

The EU AI Act’s phased implementation continues: the AI Act’s rules on general-purpose AI (GPAI) entered into force on 2 August 2025 (and become enforceable in respect of any new GPAI models in August 2026 and for existing GPAI models in August 2027). This milestone follows the recent publication of (non-binding) guidelines[1] developed by the European Commission to clarify the scope of the obligations on providers of GPAI models (the “GPAI Guidelines”), and the release of the final version of the General-Purpose AI Code of Practice.[2]

This is the final part of our four-part series on the EUIPO study on GenAI and copyright. Read parts 1, 2, and 3.

The EUIPO study provides detailed insights into the evolving relationship between GenAI and copyright law, highlighting both the complex challenges and emerging solutions in this rapidly developing field. As discussed in the previous parts of this series, the study addresses crucial issues at both the training (input) and deployment (output) stages of GenAI systems.

This is the third part of our four-part series on the EUIPO study on GenAI and copyright. Read parts 1, 2, and 4.

This third part of the four-part series offers four key takeaways on GenAI output, highlighting critical issues around retrieval-augmented generation (RAG), transparency solutions, copyright retention concerns and emerging technical remedies.