Last week the Fourth Circuit vacated a $1 billion copyright damages award against an internet service provider and ordered a new trial on damages allegedly arising from illegal music downloads by its subscribers.  In Sony Music Entertainment et al. v. Cox Communications Inc. et al.,[1] a group of record companies belonging to the Recording Industry Association of America brought suit against Cox for contributory and vicarious copyright infringement based on allegations that Cox induced and encouraged rampant infringement on its service.  In 2019, a jury found Cox liable on both theories for infringement of 10,017 copyrighted works and awarded $99,830.29 per work, for a total of $1 billion in statutory damages.  On appeal, the Fourth Circuit issued a mixed ruling: it upheld the finding of contributory infringement but reversed the vicarious liability verdict and remanded for a new trial on damages.
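For context, the per-work figure reported in the verdict is simply the jury's aggregate $1 billion award divided evenly across the infringed works (rounded to the nearest cent):

$$\frac{\$1{,}000{,}000{,}000}{10{,}017\ \text{works}} \approx \$99{,}830.29\ \text{per work}$$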

The rapid development of AI is introducing new opportunities and challenges to dispute resolution. AI is already impacting the document review and production process, legal research, and the drafting of court submissions, and its use is expected to expand into other areas, including predicting case outcomes and adjudicating disputes. However, the use of AI in litigation also carries risks, as highlighted by a recent First-tier Tribunal (Tax) decision in which an appellant had sought to rely on precedent authorities that were, in fact, fabricated by AI (a known risk of tools built on large language models, referred to as “hallucination”).[1] While no further consequences seemed to follow in this particular case (because the appellant, a litigant in person, “had been unaware that the AI cases were not genuine and that she did not know how to check their validity”[2]), the Tribunal did highlight that “providing authorities which are not genuine and asking a court or tribunal to rely on them is a serious and important issue”,[3] suggesting that litigants may incur certain risks by relying on authorities suggested by AI unless these are independently verified. On 12 December 2023, a group of senior judges, including the Master of the Rolls and the Lady Chief Justice, issued guidance on AI for judicial office holders, which, amongst other things, discourages the use of AI for legal research and analysis and highlights the risk of litigants relying on AI to provide legal advice and/or to produce evidence.[4]

Quantum technology is seen as having the potential to revolutionize many aspects of technology, the economy, and society, including the financial sector. At the same time, the technology poses a significant threat to cybersecurity, particularly because of its potential to render most widely used public-key encryption schemes obsolete.

This second part of our four-part series on using synthetic data to train AI models explores how the use of synthetic data training sets may mitigate copyright infringement risks under EU law.

This is the first part of a series on using synthetic data to train AI models. See here for Parts 2, 3, and 4.

The recent rapid advancement of Artificial Intelligence (“AI”) has revolutionized how content is created and how machines learn. Generative AI (“GenAI”) systems have demonstrated unprecedented capabilities, pushing the boundaries of what we thought possible. Yet beneath AI’s transformative potential lies a complex web of intellectual property (“IP”) risks, particularly concerning the use of “real-world” training data: if that data is not appropriately sourced, training may infringe third-party IP rights.

On October 19, 2023, the U.S. Copyright Office announced in the Federal Register that it will consider a proposed exemption to the Digital Millennium Copyright Act’s (“DMCA”) anti-circumvention provisions, which prohibit the circumvention of technological measures used to prevent unauthorized access to copyrighted works.  The exemption would allow those researching bias in artificial intelligence (“AI”) to bypass technological measures that limit the use of copyrighted generative AI models.

On September 6, 2023, California Governor Gavin Newsom signed Executive Order N-12-23 (the “Executive Order”) relating to the use of generative artificial intelligence (“GenAI”) by the State, as well as the preparation of certain reports assessing the equitable use of GenAI in the public sector.  The Executive Order instructs State agencies to examine the potential risks inherent in the use of GenAI and creates a blueprint for public sector implementation of GenAI tools in the near future.  The Executive Order indicates that California anticipates expanding the role that GenAI tools play in helping State agencies achieve their missions, while simultaneously ensuring that those agencies identify and study any negative effects that the implementation of GenAI tools might have on residents of the State.  The Executive Order covers a number of areas.

The U.S. District Court for the District of Columbia recently upheld a decision by the U.S. Copyright Office (“USCO”) denying an application to register a work authored entirely by an artificial intelligence program.  The case, Thaler v. Perlmutter, which challenges U.S. copyright law’s human authorship requirement, is the first of its kind in the United States, but it will certainly not be the last, as questions regarding the originality and protectability of works created with generative AI (“GenAI”) continue to arise.  The court in Thaler focused on the fact that the work at issue had no human authorship, setting a clear rule for one end of the spectrum.  As the court recognized, the more difficult questions still to be addressed include how much human input is required to qualify the user as the creator of a work such that it is eligible for copyright protection.