On 10 October 2025, Law No. 132/2025 (the “Italian AI Law”) entered into force, making Italy the first EU Member State to adopt a dedicated, comprehensive national framework for artificial intelligence (“AI”). The law references the AI Act (Regulation (EU) 2024/1689) and grants the government broad powers to implement its principles and establish detailed operational rules. It also sets out the institutional structure responsible for overseeing AI in Italy, entrusting specific authorities with the promotion, coordination, and supervision of this strategically important sector.

On September 29, 2025, Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (TFAIA, SB 53, or the Act)[1], establishing a comprehensive framework for transparency, safety and accountability in the development and deployment of the most advanced artificial intelligence models. Building on existing California laws targeting AI, such as AB 2013[2], the Act takes effect January 1, 2026, imposes penalties of up to $1 million per violation, and creates immediate compliance obligations for developers of the most powerful frontier AI models.

The EU AI Act’s phased implementation continues: the AI Act’s rules on general-purpose AI (GPAI) enter into force on 2 August 2025 (becoming enforceable for new GPAI models in August 2026 and for existing GPAI models in August 2027). This milestone follows the recent publication of (non-binding) guidelines[1] developed by the European Commission to clarify the scope of the obligations on providers of GPAI models (the “GPAI Guidelines”), and the release of the final version of the General-Purpose AI Code of Practice.[2]

This is the final part of our four-part series on the EUIPO study on GenAI and copyright. Read parts 1, 2, and 3.

The EUIPO study provides detailed insights into the evolving relationship between GenAI and copyright law, highlighting both the complex challenges and emerging solutions in this rapidly developing field. As discussed in the previous parts of this series, the study addresses crucial issues at both the training (input) and deployment (output) stages of GenAI systems.

As of July 8, the U.S. Department of Justice (“DOJ”) is scheduled to begin full enforcement of its Data Security Program (“DSP”) and the recently issued Bulk Data Rule after its 90-day limited enforcement policy expires, ushering in “full compliance” requirements for U.S. companies and individuals.[1] 

This is the third part of our four-part series on the EUIPO study on GenAI and copyright. Read parts 1, 2, and 4.

This third part of the four-part series offers four key takeaways on GenAI output, highlighting critical issues around retrieval-augmented generation (RAG), transparency solutions, copyright retention concerns, and emerging technical remedies.

This is the second part of our four-part series on the EUIPO study on GenAI and copyright. Read parts 1, 3, and 4.

In this second part of our four-part series exploring the EUIPO study on GenAI and copyright, we set out our key takeaways regarding GenAI inputs, including findings on the evolving interpretation of the text and data mining (TDM) rights reservation regime and existing opt-out measures.

Last week a Georgia state court granted summary judgment in favor of OpenAI, ending a closely watched defamation lawsuit over false information—sometimes called “hallucinations”—generated by its generative AI product, ChatGPT.  The plaintiff, Mark Walters, is a nationally syndicated radio host and prominent gun rights advocate who sued OpenAI after ChatGPT produced output incorrectly stating that he had been accused of embezzlement in a lawsuit filed by the Second Amendment Foundation (“SAF”).  Walters is not, and never was, a party to that case.