
Marcela Robledo’s practice focuses on the intellectual property, data, and technology aspects involved in a wide range of corporate and transactional matters, including mergers and acquisitions, licensing, collaboration agreements, and joint ventures.

The following is part of our annual publication, Selected Issues for Boards of Directors in 2025.


Deployment of generative AI expanded rapidly across many industries in 2024, delivering broad gains in productivity, return on investment and other benefits. At the same time, AI remained a focus for lawmakers, regulators and courts. There are currently 27 active generative AI litigation cases in the U.S., nearly all of which involve copyright claims. Numerous state legislatures have considered AI regulation, and Colorado became the first, and thus far only, state to pass a law creating a broad set of obligations for certain developers and deployers of AI.

On July 26, 2024, the National Institute of Standards and Technology (“NIST”) released its Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (the “Profile”),[1] laying out more than 200 suggested actions to mitigate the risks of generative artificial intelligence (“Gen AI”). The Profile is a companion to NIST’s Artificial Intelligence Risk Management Framework (the “Framework”), released in January 2023.[2] The Framework aims to serve as a resource for entities dealing with all manner of AI systems, helping them manage risks and promoting trustworthy and responsible development of AI. The Profile is intended as an implementation of the Framework for Gen AI, providing concrete steps to manage AI risks.

As inventors, attorneys and patent examiners grapple with the impact of AI on patents, the United States Patent and Trademark Office (the “USPTO”) has released guidance on the subject matter eligibility of inventions relating to AI technology.[1] The impetus for this guidance was President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which directed the USPTO to issue guidance to patent examiners and applicants on patent subject matter eligibility to address innovation in AI and other critical and emerging technologies.

Yesterday, the Supreme Court denied certiorari in Hearst Newspapers, LLC v. Martinelli, declining to determine whether the “discovery rule” applies in Copyright Act infringement cases and under what circumstances.  As a result, most circuits will continue to apply the rule to determine when an infringement claim accrues for purposes of applying the Copyright Act’s three-year statute of limitations.

There has been a push at the state and federal level to regulate AI-generated deepfakes that use the voices and likenesses of real people without their approval. This legislative momentum stems from a series of high-profile incidents involving deepfakes that garnered public attention and concern. Last year, an AI-generated song entitled “Heart on My Sleeve” simulated the voices of recording artists Drake and The Weeknd; the song briefly went viral before being pulled from streaming services following objections from the artists’ music label. Another incident involved an advertisement for dental services that used an AI-generated Tom Hanks to make the sales pitch. As AI becomes more sophisticated and accessible to the general public, concerns over the misappropriation of people’s personas have grown. In recent months, several states have introduced legislation targeting the use of deepfakes to spread election-related misinformation. At the federal level, both the House and Senate are considering a federal right of publicity that would give individuals a private right of action. At the state level, Tennessee has become the first state to update its right of publicity laws to target the music industry, signing the Ensuring Likeness Voice and Image Security Act (the “ELVIS Act”) into law on March 21, 2024; the law takes effect July 1, 2024.

The following post was originally included as part of our recently published memorandum “Selected Issues for Boards of Directors in 2024”.

Artificial intelligence (AI), and generative AI in particular, will continue to be an issue in the year to come, as new laws and regulations, agency guidance, ongoing and new AI litigation, and new AI-related partnerships prompt headlines and require companies to keep these issues under continual review.

On November 28, 2023, U.S. District Judge Fred W. Slaughter of the Central District of California granted motions for summary judgment against a screenwriter’s claims that the creation of Ad Astra, the 2019 Brad Pitt film, infringed a script he had written.[1] The court reasoned that the defendant companies could not possibly have copied the script in question, as they did not have access to it until after Ad Astra was written. The court also concluded that the two works were sufficiently different that there was no infringement even if access could be shown.

In an opinion issued December 4, 2023, the U.S. Court of Appeals for the Federal Circuit[1] reversed a lower court’s denial of Intel Corporation’s (“Intel’s”) motion for leave to amend its answer to assert a new license defense in a patent infringement suit brought by VLSI Technology LLC (“VLSI”). The decision paves the way for Intel to argue that it received a license to VLSI’s patents when a company with which Intel had an existing license became affiliated with VLSI as a result of its acquisition by an investment management firm.

On October 30, 2023, the Biden Administration issued a landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “Order”), directing the establishment of new standards for artificial intelligence (“AI”) safety and security and laying the foundation for protecting Americans’ privacy and civil rights, supporting American workers, and promoting responsible innovation, competition and collaboration, while advancing America’s role as a world leader in AI.