The following is part of our annual publication Selected Issues for Boards of Directors in 2026. Explore all topics or download the PDF.


Overview of AI Copyright Litigation

In 2026, we can expect important developments in the legal landscape of generative AI and copyright. Dozens of copyright infringement lawsuits targeting the training and development of AI models—capable of generating text, images, video, music and more—are advancing toward dispositive rulings. The central issue remains whether training AI models using unlicensed copyrighted works is infringing or instead constitutes fair use under Section 107 of the U.S. Copyright Act. Courts consider four factors in determining whether a particular use is fair: (1) the purpose and character of the use, (2) the nature of the copyrighted work, (3) the amount and substantiality of the portion used and (4) the effect of the use upon the potential market for or value of the copyrighted work. The thrust of this inquiry is whether the use is transformative—serving a different purpose or function from the original work—or merely usurps the market for the original by reproducing its protected expression. As courts establish legal frameworks for AI training and protection of AI-generated outputs, companies and boards should closely monitor developments to fully understand the risks and opportunities of AI implementation.

On November 4, 2025, the UK High Court handed down judgment in Getty Images v. Stability AI,[1] a case closely watched for its significance to content creators and the AI industry and for “the balance to be struck between the two warring factions”.[2] Despite significant public interest in the lawsuit, the issues remaining before the court in the “diminished”[3] case were limited after Getty abandoned its primary infringement claims during trial. The judgment dismisses Getty’s remaining claims of secondary copyright infringement. Although the court upheld some of Getty’s trademark infringement claims, Justice Joanna Smith DBE acknowledged that the findings were “extremely limited in scope”.[4]

On September 29, 2025, Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (TFAIA, SB 53 or the Act),[1] establishing a comprehensive framework for transparency, safety and accountability in the development and deployment of the most advanced artificial intelligence models. Building on existing California AI laws such as AB 2013,[2] the Act takes effect January 1, 2026, imposes penalties of up to $1 million per violation and creates immediate compliance obligations for developers of the most powerful frontier models.

Last week, a Georgia state court granted summary judgment in favor of OpenAI, ending a closely watched defamation lawsuit over false information—sometimes called “hallucinations”—generated by its generative AI product, ChatGPT. The plaintiff, Mark Walters, is a nationally syndicated radio host and prominent gun rights advocate who sued OpenAI after ChatGPT produced output incorrectly stating that he had been accused of embezzlement in a lawsuit filed by the Second Amendment Foundation (“SAF”). Walters is not, and never was, a party to that case.

The following is part of our annual publication Selected Issues for Boards of Directors in 2025. Explore all topics or download the PDF.


Deployment of generative AI expanded rapidly across many industries in 2024, yielding broad gains in productivity, return on investment and other benefits. At the same time, AI drew scrutiny from lawmakers, regulators and courts. There are currently 27 active generative AI litigation cases in the U.S., nearly all of which involve copyright claims. Numerous state legislatures have considered AI regulation, and Colorado became the first state thus far to pass a law creating a broad set of obligations for certain developers and deployers of AI.

On 5 September 2024, the EU, UK and US joined seven other states[1] in signing the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (the “Treaty”) – the first international treaty governing the safe use of artificial intelligence (“AI”).[2] The Treaty remains subject to ratification, acceptance or approval by each signatory and will enter into force on the first day of the month following a three-month period after the date on which five signatories, including at least three Council of Europe member states, have ratified it. Any state worldwide is eligible to join the Treaty, subject to the unanimous approval of the signatories, and to commit to complying with its provisions. The Treaty is expected to strengthen international cooperation in addressing AI-related risks.

As inventors, attorneys and patent examiners grapple with the impacts of AI on patents, the United States Patent and Trademark Office (the “USPTO”) has released guidance concerning the subject matter patent eligibility of inventions that relate to AI technology.[1] The impetus for this guidance was President Biden’s Executive Order on Safe, Secure and Trustworthy Development and Use of Artificial Intelligence, which directed the USPTO to issue guidance to patent examiners and applicants regarding patent subject matter eligibility to address innovation in AI and other critical and emerging technologies.

This week, a federal court in Tennessee transferred to California a lawsuit brought by several large music publishers against a California-based AI company, Anthropic PBC. Plaintiffs in Concord Music Group et al. v. Anthropic PBC[1] allege that Anthropic infringed the music publishers’ copyrights by improperly using copyrighted song lyrics to train Claude, its generative AI model. The music publishers asserted not only direct copyright infringement based on this training, but also contributory and vicarious infringement based on user-prompted outputs and violation of Section 1202(b) of the Digital Millennium Copyright Act for allegedly removing plaintiffs’ copyright management information from copies of the lyrics. On November 16, 2023, the music publishers also filed a motion for a preliminary injunction that would require Anthropic to implement effective “guardrails” in its Claude AI models to prevent outputs that infringe plaintiffs’ copyrighted lyrics and preclude Anthropic from creating or using unauthorized copies of those lyrics to train future AI models.

Last week, in Vidal v. Elster, the Supreme Court upheld the Lanham Act’s prohibition against registering a trademark that includes a living person’s name without their consent.[1] This case is the latest in a trilogy of challenges to the constitutionality of trademark registration bars in the Lanham Act. The Court previously struck down as unconstitutional the clauses in Section 2(a) prohibiting registration of marks constituting “disparagement” and “immoral or scandalous matter.”[2] In a departure from those decisions, the Court upheld the U.S. Patent and Trademark Office’s refusal to register a trademark for “Trump Too Small”—a piece of political commentary that the applicant sought to use on apparel to criticize a government official. The Court reasoned that, unlike the other provisions, the “names” prohibition is viewpoint-neutral, and thus does not violate any First Amendment right.