Last week the Fourth Circuit reversed a $1 billion copyright verdict against an internet service provider and ordered a new trial on damages allegedly arising from illegal music downloads by its subscribers.  In Sony Music Entertainment et al. v. Cox Communications Inc. et al.,[1] a group of music producers belonging to the Recording Industry Association of America brought suit against Cox for contributory and vicarious copyright infringement based on allegations that Cox induced and encouraged rampant infringement on its service.  In 2019, a jury found Cox liable on both theories for infringement of 10,017 copyrighted works and awarded $99,830.29 per work, for a total of $1 billion in statutory damages.  On appeal, the Fourth Circuit issued a mixed ruling – upholding the finding of contributory infringement but reversing the vicarious liability verdict and remanding for a new trial on damages. 

This week saw yet another California federal court dismiss copyright and related claims arising out of the training and output of a generative AI model in Tremblay v. OpenAI, Inc.,[1] a putative class action filed on behalf of a group of authors alleging that OpenAI infringed their copyrighted literary works by using them to train ChatGPT.[2]  OpenAI moved to dismiss all claims against it, save the claim for direct copyright infringement, and the court largely sided with OpenAI. 

In Punchbowl, Inc. v. AJ Press, Inc., the Ninth Circuit revived a trademark infringement case previously dismissed on grounds that the First Amendment shields “expressive” trademarks from Lanham Act liability unless the plaintiff can show the mark (1) has no artistic relevance to the underlying work, or (2) explicitly misleads as to its source.[1]  This is known as the Rogers test, and it effectively operates as a shield to trademark liability where it applies.  Last year, the Supreme Court limited application of the Rogers test in Jack Daniel’s Properties, Inc. v. VIP Products LLC,[2] holding that it does not apply where the challenged use of a trademark is to identify the source of the defendant’s goods or services.  In those instances, a traditional likelihood of confusion or dilution analysis is required. 

This third part of our four-part series on using synthetic data to train AI models explores the interplay between synthetic data training sets, the EU Copyright Directive and the forthcoming EU AI Act.

This second part of our four-part series on using synthetic data to train AI models explores how the use of synthetic data training sets may mitigate copyright infringement risks under EU law.

This is the first part of a series on using synthetic data to train AI models. See here for Parts 2, 3, and 4.

The recent rapid advancement of Artificial Intelligence (“AI”) has revolutionized how content is created and how machines learn. Generative AI (“GenAI”) systems have unveiled unprecedented capabilities, pushing the boundaries of what we thought possible. Yet, beneath the surface of AI’s transformative potential lies a complex web of intellectual property (“IP”) risks, particularly concerning the use of “real-world” training data: if that data is not appropriately sourced, its use may give rise to claims of infringement of third-party IP rights.

On October 30, 2023, the Biden Administration issued a landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “Order”).  The Order directs the establishment of new standards for artificial intelligence (“AI”) safety and security, and lays the foundation for protecting Americans’ privacy and civil rights, supporting American workers, and promoting responsible innovation, competition and collaboration, all while advancing America’s role as a world leader in AI.

By Angela Dunning and Lindsay Harris.[1]  Note, Cleary Gottlieb represents Midjourney in this matter.

On October 30, 2023, U.S. District Judge William Orrick of the Northern District of California issued an Order[2] largely dismissing without prejudice the claims brought by artists Sarah Andersen, Kelly McKernan and Karla Ortiz in a proposed class action lawsuit against artificial intelligence (“AI”) companies Stability AI, Inc., Stability AI Ltd. (together, “Stability AI”), DeviantArt, Inc. (“DeviantArt”) and Midjourney, Inc. (“Midjourney”).  Andersen is the first of many cases brought by high-profile artists, programmers and authors (including John Grisham, Sarah Silverman and Michael Chabon) seeking to challenge the legality of using copyrighted material for training AI models.