The following is part of our annual publication Selected Issues for Boards of Directors in 2026.


Over the last nine years, during which I served as Cleary Gottlieb’s Managing Partner, there have been significant, often unexpected, changes in global politics, global markets and the legal industry.

As states continue to grapple with establishing regulatory frameworks for the most powerful artificial intelligence (“AI”) systems, New York has joined California in targeting frontier AI models with the Responsible AI Safety and Education Act (the “RAISE Act” or the “Act”).[1] Signed into law on December 19, 2025 by Governor Hochul, the Act creates a comprehensive regulatory framework for developers of the most advanced AI systems, marking New York’s entry into the vanguard of state AI safety regulation.

For more insights and analysis from Cleary lawyers on policy and regulatory developments from a legal perspective, visit What to Expect From a Second Trump Administration.

On December 11, 2025, President Donald Trump signed an executive order titled Establishing A National Policy Framework For Artificial Intelligence (the “Order”).[1] The Order’s policy objective is to “enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI”[2] and comes after Congress considered, but did not advance, federal legislation earlier this year that would have preempted state AI regulation. The Order justifies federal intervention on three grounds:

On September 29, 2025, Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (the “TFAIA,” “SB 53” or the “Act”)[1], establishing a comprehensive framework for transparency, safety and accountability in the development and deployment of the most advanced artificial intelligence models. Building upon existing California laws targeting AI, such as AB 2013[2], the Act takes effect January 1, 2026 and imposes penalties of up to $1 million per violation, creating immediate compliance obligations for developers of the most powerful frontier models.

The EU AI Act’s phased implementation continues: from 2 August 2025, the AI Act’s rules on general-purpose AI (GPAI) will enter into force (becoming enforceable for new GPAI models in August 2026 and for existing GPAI models in August 2027). This milestone follows the recent publication of (non-binding) guidelines[1] developed by the European Commission to clarify the scope of the obligations on providers of GPAI models (the “GPAI Guidelines”), and the release of the final version of the General-Purpose AI Code of Practice.[2]

This is the final part of our four-part series on the EUIPO study on GenAI and copyright. Read parts 1, 2, and 3.

The EUIPO study provides detailed insights into the evolving relationship between GenAI and copyright law, highlighting both the complex challenges and emerging solutions in this rapidly developing field. As discussed in the previous parts of this series, the study addresses crucial issues at both the training (input) and deployment (output) stages of GenAI systems.

As of July 8, 2025, the U.S. Department of Justice (“DOJ”) is scheduled to begin full enforcement of its Data Security Program (“DSP”) and the recently issued Bulk Data Rule after its 90-day limited enforcement policy expires, ushering in “full compliance” requirements for U.S. companies and individuals.[1]

This is the third part of our four-part series on the EUIPO study on GenAI and copyright. Read parts 1, 2, and 4.

This third part of the four-part series offers four key takeaways on GenAI output, highlighting critical issues around retrieval augmented generation (RAG), transparency solutions, copyright retention concerns and emerging technical remedies.

This is the second part of our four-part series on the EUIPO study on GenAI and copyright. Read parts 1, 3, and 4.

In this second part of our four-part series exploring the EUIPO study on GenAI and copyright, we set out our key takeaways regarding GenAI inputs, including findings on the evolving legal interpretation of the text and data mining (TDM) rights reservation regime and existing opt-out measures.