The following is part of our annual publication Selected Issues for Boards of Directors in 2026. Explore all topics or download the PDF.


The U.S. regulatory and enforcement landscape for digital assets and distributed ledger technology changed dramatically in 2025. Virtually overnight, U.S. regulators shifted away from an enforcement-heavy crypto-skepticism that had effectively barred traditional financial institutions from digital asset and tokenization markets and threatened the core business of many fintech companies (“Fintechs”). In its place came a determined focus on giving market participants the flexibility to engage with digital assets and distributed ledger technology. Most notably in 2025:

Over the last nine years, during which I served as Cleary Gottlieb’s Managing Partner, there have been significant, often unexpected, changes in global politics, global markets and the legal industry.

As states continue to grapple with establishing regulatory frameworks for the most powerful artificial intelligence (“AI”) systems, New York has joined California in targeting frontier AI models with the Responsible AI Safety and Education Act (the “RAISE Act” or the “Act”).[1] Signed into law on December 19, 2025 by Governor Hochul, the Act creates a comprehensive regulatory framework for developers of the most advanced AI systems, marking New York’s entry into the vanguard of state AI safety regulation.

On 10 October 2025, Law No. 132/2025 (the “Italian AI Law”) entered into force, making Italy the first EU Member State to introduce a dedicated and comprehensive national framework for artificial intelligence (“AI”). The law references the AI Act (Regulation (EU) 2024/1689) and grants the government broad powers to implement its principles and establish detailed operational rules. It also sets out the institutional structure responsible for overseeing AI in Italy, entrusting specific authorities with the promotion, coordination, and supervision of this strategically important sector.

On 9 December 2023, trilogue negotiations on the EU’s Artificial Intelligence (“AI”) Act reached a key inflection point, with the European Parliament and Council striking a provisional political agreement. As we wait for the consolidated legislative text to be finalised and formally approved, below we set out the key points businesses need to know about the political deal and what comes next.

This is the first part of a series on using synthetic data to train AI models. See here for Parts 2, 3, and 4.

The recent rapid advancements of Artificial Intelligence (“AI”) have revolutionized how content is created and how systems learn. Generative AI (“GenAI”) systems have unveiled unprecedented capabilities, pushing the boundaries of what we thought possible. Yet beneath AI’s transformative potential lies a complex legal web of intellectual property (“IP”) risks, particularly concerning the use of “real-world” training data: if that data is not appropriately sourced, AI training may give rise to alleged infringement of third-party IP rights.

On October 19, 2023, the U.S. Copyright Office announced in the Federal Register that it will consider a proposed exemption to the Digital Millennium Copyright Act’s (“DMCA”) anti-circumvention provisions, which prohibit the circumvention of any technological measures used to prevent unauthorized access to copyrighted works. The exemption would allow those researching bias in artificial intelligence (“AI”) to bypass any technological measures that limit the use of copyrighted generative AI models.