There has been a push at the state and federal levels to regulate AI-generated deepfakes that use the voices and likenesses of real people without their approval.  This legislative momentum stems from a series of high-profile incidents involving deepfakes that garnered public attention and concern.  Last year, an AI-generated song entitled “Heart on My Sleeve” simulated the voices of recording artists Drake and The Weeknd.  The song briefly went viral before being pulled from streaming services following objections from the artists’ music label.  Another incident involved an advertisement for dental services that used an AI-generated Tom Hanks to make the sales pitch.  As AI becomes more sophisticated and accessible to the general public, concerns over the misappropriation of people’s personas have grown.  In recent months, several states have introduced legislation targeting the use of deepfakes to spread election-related misinformation.  At the federal level, both the House and the Senate are considering a federal right of publicity that would give individuals a private right of action.  At the state level, Tennessee has become the first state to update its right of publicity laws to address the concerns of the music industry: the Ensuring Likeness Voice and Image Security Act (the “ELVIS Act”) was signed into law on March 21, 2024, and takes effect July 1, 2024.

The United States Patent and Trademark Office (“USPTO”) issued guidance on February 13, 2024 (the “Guidance”) regarding the patentability of inventions created or developed with the assistance of artificial intelligence (“AI”), a novel issue on which the USPTO has been seeking input from public and private stakeholders over the past few years.  President Biden mandated the issuance of such Guidance in his executive order on AI (see our prior alert here)[1] in October 2023.  The Guidance explains how patent examiners will evaluate applications involving AI-assisted inventions, and reaffirms existing jurisprudence that only natural persons, not AI tools, may be named as inventors.  The Guidance makes clear, however, that AI-assisted inventions are not automatically ineligible for patent protection, so long as one or more natural persons “significantly contributed” to the invention.  Overall, the Guidance underscores the need for a balanced approach to inventorship that acknowledges both technological advancements and human innovation.  The USPTO is seeking public feedback on the Guidance, which is due by May 13, 2024.

The rapid development of AI is introducing new opportunities and challenges to dispute resolution. AI is already impacting the document review and production process, legal research, and the drafting of court submissions. The use of AI is expected to expand into other areas, including predicting case outcomes and adjudicating disputes. However, the use of AI in litigation also carries risks, as highlighted by a recent First-tier Tribunal (Tax) decision in which an appellant sought to rely on precedent authorities that had, in fact, been fabricated by AI (a known risk with AI tools built on large language models, referred to as “hallucination”).[1] While no further consequences appeared to follow in this particular case (given that the appellant, a litigant in person, “had been unaware that the AI cases were not genuine and that she did not know how to check their validity”[2]), the Tribunal did highlight that “providing authorities which are not genuine and asking a court or tribunal to rely on them is a serious and important issue”,[3] suggesting that litigants may incur risks by relying on authorities suggested by AI unless these are independently verified. On 12 December 2023, a group of senior judges, including the Master of the Rolls and the Lady Chief Justice, issued guidance on AI for judicial office holders, which, amongst other things, discourages the use of AI for legal research and analysis and highlights the risk of AI being relied on by litigants to provide legal advice and/or to produce evidence.[4]

This second part of our four-part series on using synthetic data to train AI models explores how the use of synthetic data training sets may mitigate copyright infringement risks under EU law.

On 9 December 2023, trilogue negotiations on the EU’s Artificial Intelligence (“AI”) Act reached a key inflection point, with the European Parliament and the Council striking a provisional political agreement.  As we wait for the consolidated legislative text to be finalised and formally approved, we set out below the key points businesses need to know about the political deal and what comes next.

This is the first part of a series on using synthetic data to train AI models. See here for Parts 2, 3, and 4.

Recent rapid advancements in Artificial Intelligence (“AI”) have revolutionized how content is created and how machines learn. Generative AI (“GenAI”) systems have unveiled unprecedented capabilities, pushing the boundaries of what we thought possible. Yet, beneath the surface of AI’s transformative potential lies a complex legal web of intellectual property (“IP”) risks, particularly concerning the use of “real-world” training data, which may give rise to claims of infringement of third-party IP rights if training data is not appropriately sourced.

On October 30, 2023, the Biden Administration issued a landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “Order”), directing the establishment of new standards for artificial intelligence (“AI”) safety and security and laying the foundation for protecting Americans’ privacy and civil rights, supporting American workers, and promoting responsible innovation, competition, and collaboration, all while advancing America’s role as a world leader in AI.

On October 19, 2023, the U.S. Copyright Office announced in the Federal Register that it will consider a proposed exemption to the Digital Millennium Copyright Act’s (“DMCA”) anti-circumvention provisions, which prohibit the circumvention of technological measures used to prevent unauthorized access to copyrighted works.  The exemption would allow researchers studying bias in artificial intelligence (“AI”) to bypass technological measures that limit the use of copyrighted generative AI models.