As inventors, attorneys and patent examiners grapple with the impacts of AI on patents, the United States Patent and Trademark Office (the “USPTO”) has released guidance concerning the patent subject matter eligibility of inventions that relate to AI technology.[1]  The impetus for this guidance was President Biden’s Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which directed the USPTO to issue guidance to patent examiners and applicants regarding patent subject matter eligibility in order to address innovation in AI and other critical and emerging technologies.

Last week, in Vidal v. Elster, the Supreme Court upheld the Lanham Act’s prohibition against registering a trademark that includes a living person’s name without that person’s consent.[1]  This case is the latest in a trilogy of challenges to the constitutionality of trademark registration bars in the Lanham Act.  The Court previously struck down as unconstitutional the clauses in Section 2(a) prohibiting registration of marks that “may disparage” persons and marks comprising “immoral or scandalous matter.”[2]  In a departure from those decisions, the Court upheld the U.S. Patent and Trademark Office’s refusal to register a trademark for “Trump Too Small,” a piece of political commentary that the applicant sought to use on apparel to criticize a government official.  The Court reasoned that, unlike the other provisions, the “names” prohibition is viewpoint-neutral, and thus does not violate any First Amendment right.

There has been a push at the state and federal level to regulate AI-generated deepfakes that use the voices and likenesses of real people without their approval.  This legislative momentum stems from a series of high-profile incidents involving deepfakes that garnered public attention and concern.  Last year, an AI-generated song entitled “Heart on My Sleeve” simulated the voices of recording artists Drake and The Weeknd.  The song briefly went viral before being pulled from streaming services following objections from the artists’ music label.  Another incident involved an advertisement for dental services that used an AI-generated Tom Hanks to make the sales pitch.  As AI becomes more sophisticated and accessible to the general public, concerns over the misappropriation of people’s personas have grown.  In recent months, several states have introduced legislation targeting the use of deepfakes to spread election-related misinformation.  At the federal level, both the House and Senate are considering a federal right of publicity that would give individuals a private right of action.  At the state level, Tennessee has become the first state to update its right of publicity laws to address the music industry, enacting the Ensuring Likeness Voice and Image Security Act (the “ELVIS Act”) on March 21, 2024; the law takes effect July 1, 2024.

The following post was originally included as part of our recently published memorandum “Selected Issues for Boards of Directors in 2024”.

Artificial Intelligence (AI), and in particular generative AI, will remain a central issue in the year to come, as new laws and regulations, agency guidance, continuing and additional AI litigation, and new AI-related partnerships prompt headlines and require companies to think continually about these issues.

This third part of our four-part series on using synthetic data to train AI models explores the interplay between synthetic data training sets, the EU Copyright Directive and the forthcoming EU AI Act.

By Angela Dunning and Lindsay Harris.[1]  Note: Cleary Gottlieb represents Midjourney in this matter.

On October 30, 2023, U.S. District Judge William Orrick of the Northern District of California issued an Order[2] largely dismissing without prejudice the claims brought by artists Sarah Andersen, Kelly McKernan and Karla Ortiz in a proposed class action lawsuit against artificial intelligence (“AI”) companies Stability AI, Inc., Stability AI Ltd. (together, “Stability AI”), DeviantArt, Inc. (“DeviantArt”) and Midjourney, Inc. (“Midjourney”).  Andersen is the first of many cases brought by high-profile artists, programmers and authors (including John Grisham, Sarah Silverman and Michael Chabon) seeking to challenge the legality of using copyrighted material for training AI models.

On October 30, 2023, the G7 Leaders published a Statement on the Hiroshima Artificial Intelligence (“AI”) Process (the “Statement”).[1] This follows the G7 Summit in May, where the leaders agreed on the need to address the risks arising from rapidly evolving AI technologies. The Statement was accompanied by the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems (the “Code of Conduct”)[2] and the Hiroshima Process International Guiding Principles for Advanced AI Systems (the “Guiding Principles”).[3]

The U.S. District Court for the District of Columbia recently affirmed a decision by the U.S. Copyright Office (“USCO”) in which the USCO denied an application to register a work authored entirely by an artificial intelligence program.  The case, Thaler v. Perlmutter, challenging U.S. copyright law’s human authorship requirement, is the first of its kind in the United States, but will certainly not be the last, as questions continue to arise regarding the originality and protectability of works created by generative AI (“GenAI”).  The court in Thaler focused on the fact that the work at issue had no human authorship, setting a clear rule for one end of the spectrum.  As the court recognized, the more difficult questions that will need to be addressed include how much human input is required to qualify the user as the creator of a work such that it is eligible for copyright protection.

On June 6, 2023, New York Senate Bill S5640 / Assembly Bill A5295 (“S5640”) won near-unanimous final passage in the New York Assembly with a 147-1 vote, after being passed unanimously by the Senate the previous week.  If signed into law by Governor Hochul, the legislation would, effective immediately, add to New York labor law a new section 203-f that renders unenforceable provisions in employee agreements that require employees to assign certain inventions developed using the employee’s own property and time. 

GitHub, acquired by Microsoft in 2018, is an online repository used by software developers for storing and sharing software projects.  In collaboration with OpenAI, GitHub released an artificial intelligence-based offering in 2021 called Copilot, which is powered by OpenAI’s generative AI model, Codex.  Together, these tools assist software developers by taking natural language prompts describing a desired functionality and suggesting blocks of code to achieve that functionality.  OpenAI states on its website that Codex was trained on “billions of lines of source code from publicly available sources, including code in public GitHub repositories.”