For some time, the EU has been contemplating the introduction of a new Directive to make it easier for consumers to bring claims for a range of alleged harms caused by AI systems, commonly referred to as the AI Liability Directive (the “AILD”).
NIST’s New Generative AI Profile: 200+ Ways to Manage the Risks of Generative AI
On July 26, the National Institute of Standards and Technology (“NIST”) released its Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (the “Profile”),[1] laying out more than 200 suggested actions to mitigate the risks of generative artificial intelligence (“Gen AI”). The Profile is a companion to NIST’s Artificial Intelligence Risk Management Framework (the “Framework”), released in January 2023.[2] The Framework is intended as a resource to help entities dealing with all manner of AI systems manage risks and promote the trustworthy and responsible development of AI. The Profile implements the Framework for Gen AI specifically, providing concrete steps to manage its risks.
The AI Act has been published in the EU Official Journal
1. Background: three years of legislative debate
Today, on July 12, 2024, Regulation (EU) 2024/1689 laying down harmonized rules on Artificial Intelligence (the “Regulation” or “AI Act”) was finally published in the EU Official Journal and will enter into force on August 1, 2024. This milestone is the culmination of three years of legislative debate since the EU Commission’s first proposal for a comprehensive EU regulation on AI in April 2021.[1]
State and Federal Legislation Target AI Deepfakes
There has been a push at the state and federal level to regulate AI-generated deepfakes that use the voices and likenesses of real people without their approval. This legislative momentum stems from a series of high-profile incidents involving deepfakes that garnered public attention and concern. Last year, an AI-generated song entitled “Heart on My Sleeve” simulated the voices of recording artists Drake and The Weeknd. The song briefly went viral before being pulled from streaming services following objections from the artists’ music label. Another incident involved an advertisement for dental services that used an AI-generated Tom Hanks to make the sales pitch. As AI becomes more sophisticated and accessible to the general public, concerns over the misappropriation of people’s personas have grown. In recent months, several states have introduced legislation targeting the use of deepfakes to spread election-related misinformation. At the federal level, both the House and Senate are considering a federal right of publicity that would give individuals a private right of action. At the state level, Tennessee has become the first state to update its right of publicity laws to address the music industry, signing the Ensuring Likeness Voice and Image Security (“ELVIS”) Act into law on March 21, 2024; the law takes effect July 1, 2024.
USPTO Issues “Significant” Guidance on Patentability of AI-Assisted Inventions, but unlike USCO, Does Not Require Disclosure of AI Involvement
The United States Patent and Trademark Office (“USPTO”) issued guidance on February 13, 2024 (the “Guidance”) regarding the patentability of inventions created or developed with the assistance of artificial intelligence (“AI”), a novel issue on which the USPTO has been seeking input from various public and private stakeholders over the past few years. President Biden mandated the issuance of such Guidance in his executive order on AI (see our prior alert here)[1] in October 2023. The Guidance aims to clarify how patent examiners will examine patent applications involving AI-assisted inventions, and reaffirms the existing jurisprudence maintaining that only natural persons, not AI tools, can be listed as inventors. However, the Guidance clarifies that AI-assisted inventions are not automatically ineligible for patent protection so long as one or more natural persons “significantly contributed” to the invention. Overall, the Guidance underscores the need for a balanced approach to inventorship that acknowledges both technological advancements and human innovation. The USPTO is seeking public feedback on the Guidance; comments are due by May 13, 2024.
Nexus of AI, AI Regulation and Dispute Resolution
The rapid development of AI is introducing new opportunities and challenges to dispute resolution. AI is already impacting the document review and production process, legal research, and the drafting of court submissions. It is expected that the use of AI will expand into other areas, including predicting case outcomes and adjudicating disputes. However, the use of AI in litigation also carries risks, as highlighted by a recent First-tier Tribunal (Tax) decision, where an appellant had sought to rely on precedent authorities that, in fact, were fabricated by AI (a known risk of AI tools built on large language models, referred to as “hallucination”).[1] While, in this particular case, no further consequences seemed to follow (in light of the fact that the appellant, a litigant in person, “had been unaware that the AI cases were not genuine and that she did not know how to check their validity”[2]), the Tribunal did highlight that “providing authorities which are not genuine and asking a court or tribunal to rely on them is a serious and important issue”,[3] suggesting that litigants may incur certain risks by relying on authorities suggested by AI, unless these are independently verified. On 12 December 2023, a group of senior judges, including the Master of the Rolls and the Lady Chief Justice, issued guidance on AI for judicial office holders, which, amongst other things, discourages the use of AI for legal research and analysis and highlights the risk of AI being relied on by litigants to provide legal advice and/or to produce evidence.[4]
Training AI models on Synthetic Data: No silver bullet for IP infringement risk in the context of training AI systems (Part 4 of 4)
Training AI models on Synthetic Data: No silver bullet for IP infringement risk in the context of training AI systems (Part 2 of 4)
This second part of our four-part series on using synthetic data to train AI models explores how the use of synthetic data training sets may mitigate copyright infringement risks under EU law.
Agreement reached on the EU AI Act: the key points to know about the political deal
On 9 December 2023, trilogue negotiations on the EU’s Artificial Intelligence (“AI”) Act reached a key inflection point, with a provisional political agreement reached between the European Parliament and Council. As we wait for the consolidated legislative text to be finalised and formally approved, below we set out the key points businesses need to know about the political deal and what comes next.
Training AI models on Synthetic Data: No silver bullet for IP infringement risk in the context of training AI systems (Part 1 of 4)
This is the first part of a four-part series on using synthetic data to train AI models. See here for Parts 2, 3, and 4.
The recent rapid advancements in Artificial Intelligence (“AI”) have transformed how content is created and how systems learn. Generative AI (“GenAI”) systems have unveiled unprecedented capabilities, pushing the boundaries of what we thought possible. Yet, beneath the surface of AI’s transformative potential lies a complex legal web of intellectual property (“IP”) risks, particularly concerning the use of “real-world” training data, which may give rise to alleged infringement of third-party IP rights if that data is not appropriately sourced.