
This article was authored by Daniel Ilan, Rahul Mukhi, Prudence Buckland, and Melissa Faragasso from Cleary Gottlieb, and Brian Lichter and Elijah Seymour from Stroz Friedberg, a LevelBlue company.

Recent disclosures by Anthropic and OpenAI highlight a pivotal shift in the cyber threat landscape: AI is no longer merely a tool that aids attackers; in some cases, it has become the attacker itself. Together, these incidents have immediate implications for corporate governance, contracting and security programs as companies integrate AI into their business systems. Below, we explain how these attacks were orchestrated and what steps businesses should consider given the rising cyber risks associated with the adoption of AI.

On November 4, 2025, the UK High Court handed down judgment in Getty Images v. Stability AI,[1] a case closely watched for its significance to content creators and the AI industry and for “the balance to be struck between the two warring factions”.[2] Despite significant public interest in the lawsuit, the issues remaining before the court in the “diminished”[3] case were limited, Getty having abandoned its primary infringement claims during trial. The judgment dismisses Getty’s remaining claims of secondary copyright infringement. While some of Getty’s trademark infringement claims were upheld, Mrs Justice Joanna Smith DBE acknowledged that the findings were “extremely limited in scope”.[4]

The EU AI Act’s phased implementation continues: from 2 August 2025, the Act’s rules on general-purpose AI (GPAI) enter into force (becoming enforceable in respect of new GPAI models in August 2026 and in respect of GPAI models already on the market in August 2027). This milestone follows the recent publication of (non-binding) guidelines[1] developed by the European Commission to clarify the scope of the obligations on providers of GPAI models (the “GPAI Guidelines”), and the release of the final version of the General-Purpose AI Code of Practice.[2]

This is the fourth and final part of our series on using synthetic data to train AI models. See here for Parts 1, 2 and 3.

This third part of our four-part series on using synthetic data to train AI models explores the interplay between synthetic data training sets, the EU Copyright Directive and the forthcoming EU AI Act.

This second part of our four-part series on using synthetic data to train AI models explores how the use of synthetic data training sets may mitigate copyright infringement risks under EU law.

On 9 December 2023, trilogue negotiations on the EU’s Artificial Intelligence (“AI”) Act reached a key inflection point, with the European Parliament and Council striking a provisional political agreement. As we await the finalisation and formal approval of the consolidated legislative text, we set out below the key points businesses need to know about the political deal and what comes next.