As states continue to grapple with establishing regulatory frameworks for the most powerful artificial intelligence (“AI”) systems, New York has joined California in targeting frontier AI models with the Responsible AI Safety and Education Act (the “RAISE Act” or the “Act”).[1] Signed into law on December 19, 2025 by Governor Hochul, the Act creates a comprehensive regulatory framework for developers of the most advanced AI systems, marking New York’s entry into the vanguard of state AI safety regulation.

This article was authored by Daniel Ilan, Rahul Mukhi, Prudence Buckland, and Melissa Faragasso from Cleary Gottlieb, and Brian Lichter and Elijah Seymour from Stroz Friedberg, a LevelBlue company.

Recent disclosures by Anthropic and OpenAI highlight a pivotal shift in the cyber threat landscape: AI is no longer merely a tool that aids attackers; in some cases, it has become the attacker itself. Together, these incidents have immediate implications for corporate governance, contracting and security programs as companies integrate AI into their business systems. Below, we explain how these attacks were orchestrated and what steps businesses should consider given the rising cyber risks associated with the adoption of AI.

On November 4, 2025, the UK High Court handed down judgment in Getty Images v. Stability AI,[1] a case closely watched for its significance to content creators and the AI industry and “the balance to be struck between the two warring factions”.[2] Despite significant public interest in the lawsuit, the issues remaining before the court in the “diminished”[3] case were limited, Getty having abandoned its primary infringement claims during trial. The judgment dismisses Getty’s remaining claims of secondary copyright infringement. While some claims of trademark infringement asserted by Getty were upheld, Justice Joanna Smith DBE acknowledged the findings were “extremely limited in scope”.[4]

On September 29, 2025, Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (TFAIA, SB 53 or the Act)[1], establishing a comprehensive framework for transparency, safety and accountability in the development and deployment of the most advanced artificial intelligence models. Building upon existing California laws targeting AI such as AB 2013[2], the Act, which takes effect January 1, 2026 and imposes penalties up to $1 million per violation, creates immediate compliance obligations for AI developers of the most powerful frontier models.

The EU AI Act’s phased implementation continues: from 2 August 2025, the AI Act’s rules on general-purpose AI (GPAI) will enter into force (and become enforceable in respect of any new GPAI models in August 2026 and for existing GPAI models in August 2027). This milestone follows the recent publication of (non-binding) guidelines[1] developed by the European Commission to clarify the scope of the obligations on providers of GPAI models (the “GPAI Guidelines”), and the release of the final version of the General-Purpose AI Code of Practice.[2]

This is the final part of our four-part series on the EUIPO study on GenAI and copyright. Read parts 1, 2, and 3.

The EUIPO study provides detailed insights into the evolving relationship between GenAI and copyright law, highlighting both the complex challenges and emerging solutions in this rapidly developing field. As discussed in the previous parts of this series, the study addresses crucial issues at both the training (input) and deployment (output) stages of GenAI systems.

This is the third part of our four-part series on the EUIPO study on GenAI and copyright. Read parts 1, 2, and 4.

This third part of the four-part series offers four key takeaways on GenAI output, highlighting critical issues around retrieval augmented generation (RAG), transparency solutions, copyright retention concerns and emerging technical remedies.

This is the second part of our four-part series on the EUIPO study on GenAI and copyright. Read parts 1, 3, and 4.

In this second part of our four-part series exploring the EUIPO study on GenAI and copyright, we set out our key takeaways regarding GenAI inputs, including findings on the evolving interpretation of the legal text and data mining (TDM) rights reservation regime and existing opt-out measures.