On 5 September 2024, the EU, UK and US joined seven other states[1] in signing the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (the “Treaty”) – the first international treaty governing the safe use of artificial intelligence (“AI”).[2] The Treaty remains subject to ratification, acceptance or approval by each signatory and will enter into force on the first day of the month following a period of three months after the date on which five signatories, including at least three Council of Europe member states, have ratified it. Any state worldwide is eligible to join the Treaty, subject to the unanimous approval of the signatories, and to commit to complying with its provisions. The Treaty is expected to have a positive impact on international cooperation in addressing AI-related risks.

On July 26, 2024, the National Institute of Standards and Technology (“NIST”) released its Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (the “Profile”),[1] laying out more than 200 suggested actions to mitigate the risks of generative artificial intelligence (“Gen AI”).  The Profile is a companion to NIST’s Artificial Intelligence Risk Management Framework (the “Framework”), which was released in January 2023.[2]  The Framework aims to serve as a resource for entities dealing with all manner of AI systems, helping them manage risks and promoting the trustworthy and responsible development of AI.  The Profile is intended as an implementation of the Framework, providing concrete steps to manage the risks specific to Gen AI.

As inventors, attorneys and patent examiners grapple with the impacts of AI on patents, the United States Patent and Trademark Office (the “USPTO”) has released guidance concerning the patent subject matter eligibility of inventions that relate to AI technology.[1]  The impetus for this guidance was President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which directed the USPTO to issue guidance to patent examiners and applicants on patent subject matter eligibility in order to address innovation in AI and other critical and emerging technologies.

On 2 July, the French data protection supervisory authority – Commission Nationale de l’Informatique et des Libertés (CNIL) – launched a new public consultation on the development of AI systems. The public consultation is on (i) a new series of how-to sheets aimed at providing clarifications and recommendations with respect to seven issues related to the development of AI and data protection and (ii) a questionnaire on applying the GDPR to AI models trained with personal data. Below we set out a summary of the main takeaways.

1. Background: three years of legislative debate

On July 12, 2024, Regulation (EU) 2024/1689 laying down harmonized rules on Artificial Intelligence (the “Regulation” or “AI Act”) was finally published in the EU Official Journal; it will enter into force on August 1, 2024.  This milestone is the culmination of three years of legislative debate since the EU Commission’s first proposal for a comprehensive EU regulation on AI in April 2021.[1]

This week, a federal court in Tennessee transferred to California a lawsuit brought by several large music publishers against a California-based AI company, Anthropic PBC. Plaintiffs in Concord Music Group et al. v. Anthropic PBC[1] allege that Anthropic infringed the music publishers’ copyrights by improperly using copyrighted song lyrics to train Claude, its generative AI model.  The music publishers asserted not only direct copyright infringement based on this training, but also contributory and vicarious infringement based on user-prompted outputs and violation of Section 1202(b) of the Digital Millennium Copyright Act for allegedly removing plaintiffs’ copyright management information from copies of the lyrics.  On November 16, 2023, the music publishers also filed a motion for a preliminary injunction that would require Anthropic to implement effective “guardrails” in its Claude AI models to prevent outputs that infringe plaintiffs’ copyrighted lyrics and preclude Anthropic from creating or using unauthorized copies of those lyrics to train future AI models. 

Last week, in Vidal v. Elster, the Supreme Court upheld the Lanham Act’s prohibition against registering a trademark that includes a living person’s name without their consent.[1]  This case is the latest in a trilogy of challenges to the constitutionality of trademark registration bars in the Lanham Act.  The Court previously struck down as unconstitutional the Section 2(a) clauses prohibiting registration of marks comprising “disparag[ing]” matter and “immoral or scandalous matter.”[2]  In a departure from those decisions, the Court upheld the U.S. Patent and Trademark Office’s refusal to register a trademark for “Trump Too Small”—a piece of political commentary that the applicant sought to use on apparel to criticize a government official.  The Court reasoned that, unlike the other provisions, the “names” prohibition in Section 2(c) is viewpoint-neutral and thus does not violate any First Amendment right.

In a recent en banc decision concerning the standard for assessing obviousness challenges to design patents, the United States Court of Appeals for the Federal Circuit discarded its long-standing standard, known as the Rosen-Durling test and regarded by many as overly rigid, and held that the obviousness standard for design patents should be the same as for utility patents.  The decision in LKQ Corporation v. GM Global Technology Operations LLC[1] will have significant implications for design patent applicants and owners going forward.

Late last month, the Department of Commerce’s National Institute of Standards and Technology (“NIST”) released four draft publications reflecting actions taken by the agency in response to President Biden’s executive order on AI (the “Order”; see our prior alert here)[1], which called for action within six months of its issuance.  Adding to NIST’s mounting portfolio of AI-related guidance, these publications reflect months of research focused on identifying risks associated with the use of artificial intelligence (“AI”) systems and promoting the central goal of the Order: improving the safety, security and trustworthiness of AI.  The four draft documents, further described below, are titled:

Yesterday, the Supreme Court denied certiorari in Hearst Newspapers, LLC v. Martinelli, declining to decide whether, and under what circumstances, the “discovery rule” applies in Copyright Act infringement cases.  As a result, most circuits will continue to apply the rule to determine when an infringement claim accrues for purposes of applying the Copyright Act’s three-year statute of limitations.