This week, a federal court in Tennessee transferred to California a lawsuit brought by several large music publishers against a California-based AI company, Anthropic PBC.  Plaintiffs in Concord Music Group et al. v. Anthropic PBC[1] allege that Anthropic infringed the music publishers’ copyrights by improperly using copyrighted song lyrics to train Claude, its generative AI model.  The music publishers asserted not only direct copyright infringement based on this training, but also contributory and vicarious infringement based on user-prompted outputs, as well as violation of Section 1202(b) of the Digital Millennium Copyright Act for allegedly removing plaintiffs’ copyright management information from copies of the lyrics.  On November 16, 2023, the music publishers also filed a motion for a preliminary injunction that would require Anthropic to implement effective “guardrails” in its Claude AI models to prevent outputs that infringe plaintiffs’ copyrighted lyrics, and would preclude Anthropic from creating or using unauthorized copies of those lyrics to train future AI models.

Last week, in Vidal v. Elster, the Supreme Court upheld the Lanham Act’s prohibition against registering a trademark that includes a living person’s name without that person’s consent.[1]  This case is the latest in a trilogy of challenges to the constitutionality of trademark registration bars in the Lanham Act.  The Court previously struck down as unconstitutional the clauses in Section 2(a) prohibiting registration of marks constituting “disparagement” and “immoral or scandalous matter.”[2]  In a departure from those decisions, the Court upheld the U.S. Patent and Trademark Office’s refusal to register a trademark for “Trump Too Small”—a piece of political commentary that the applicant sought to use on apparel to criticize a government official.  The Court reasoned that, unlike the other provisions, the “names” prohibition in Section 2(c) is viewpoint-neutral and thus does not violate the First Amendment.

In a recent en banc decision concerning the standard for assessing obviousness challenges to design patents, the United States Court of Appeals for the Federal Circuit discarded its long-standing Rosen-Durling test, regarded by many as overly rigid, and held that the obviousness standard for design patents should be the same as for utility patents.  The decision in LKQ Corporation v. GM Global Technology Operations LLC[1] will have significant implications for design patent applicants and owners going forward.

Late last month, the Department of Commerce’s National Institute of Standards and Technology (“NIST”) released four draft publications reflecting actions taken by the agency in response to President Biden’s executive order on artificial intelligence (“AI”) (the “Order”; see our prior alert here)[1] and its call for action within six months of the Order’s issuance.  Adding to NIST’s mounting portfolio of AI-related guidance, these publications reflect months of research focused on identifying risks associated with the use of AI systems and promoting the central goal of the Order: improving the safety, security and trustworthiness of AI.  The four draft documents, further described below, are titled:

The United States Patent and Trademark Office (“USPTO”) issued guidance on February 13, 2024 (the “Guidance”) regarding the patentability of inventions created or developed with the assistance of artificial intelligence (“AI”), a novel issue on which the USPTO has been seeking input from various public and private stakeholders over the past few years.  President Biden mandated the issuance of the Guidance in his executive order on AI issued in October 2023 (see our prior alert here).[1]  The Guidance aims to clarify how patent examiners will examine patent applications involving AI-assisted inventions, and reaffirms existing jurisprudence holding that only natural persons, not AI tools, can be listed as inventors.  However, the Guidance clarifies that AI-assisted inventions are not categorically ineligible for patent protection: they may be patented so long as one or more natural persons “significantly contributed” to the invention.  Overall, the Guidance underscores the need for a balanced approach to inventorship that acknowledges both technological advancements and human innovation.  The USPTO is seeking public feedback on the Guidance; comments are due by May 13, 2024.

In Punchbowl, Inc. v. AJ Press, Inc., the Ninth Circuit revived a trademark infringement case previously dismissed on the ground that the First Amendment shields “expressive” uses of trademarks from Lanham Act liability unless the plaintiff can show the use (1) has no artistic relevance to the underlying work, or (2) explicitly misleads as to the source or content of the work.[1]  This is known as the Rogers test, and it effectively operates as a shield to trademark liability where it applies.  Last year, the Supreme Court limited application of the Rogers test in Jack Daniel’s Properties, Inc. v. VIP Products LLC,[2] holding that it does not apply where the challenged use of a trademark is to identify the source of the defendant’s goods or services.  In those instances, a traditional likelihood-of-confusion or dilution analysis is required.

On November 28, 2023, U.S. District Judge Fred W. Slaughter of the Central District of California granted motions for summary judgment against a screenwriter’s claims that the creation of Ad Astra, the 2019 Brad Pitt film, infringed a script he had written.[1]  The court reasoned that the defendant companies could not possibly have copied the script in question, as they did not have access to it until after Ad Astra was written.  Additionally, the court found the two works so different that there could be no infringement even if access could be shown.

In an opinion issued December 4, 2023, the U.S. Court of Appeals for the Federal Circuit[1] reversed a lower court’s denial of Intel Corporation’s (“Intel’s”) motion for leave to amend its answer to assert a new license defense in a patent infringement suit brought by VLSI Technology LLC (“VLSI”).  The decision paves the way for Intel to argue that it received a license to VLSI’s patents when a company with which Intel had an existing license agreement became affiliated with VLSI after being acquired by an investment management firm.

On October 30, 2023, the Biden Administration issued a landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “Order”), directing the establishment of new standards for artificial intelligence (“AI”) safety and security and laying the foundation to protect Americans’ privacy and civil rights, support American workers, and promote responsible innovation, competition and collaboration, all while advancing America’s role as a world leader with respect to AI.

By Angela Dunning and Lindsay Harris.[1]  Note: Cleary Gottlieb represents Midjourney in this matter.

On October 30, 2023, U.S. District Judge William Orrick of the Northern District of California issued an Order[2] largely dismissing without prejudice the claims brought by artists Sarah Andersen, Kelly McKernan and Karla Ortiz in a proposed class action lawsuit against artificial intelligence (“AI”) companies Stability AI, Inc., Stability AI Ltd. (together, “Stability AI”), DeviantArt, Inc. (“DeviantArt”) and Midjourney, Inc. (“Midjourney”).  Andersen is the first of many cases brought by high-profile artists, programmers and authors (including John Grisham, Sarah Silverman and Michael Chabon) seeking to challenge the legality of using copyrighted material for training AI models.