Less than a year after the EU AI Act[1] entered into force on 1 August 2024, its provisions on AI literacy and prohibited AI practices began to apply on 2 February 2025.[2]

This third part of our four-part series on using synthetic data to train AI models explores the interplay between synthetic data training sets, the EU Copyright Directive and the forthcoming EU AI Act.
This second part of our four-part series on using synthetic data to train AI models explores how the use of synthetic data training sets may mitigate copyright infringement risks under EU law.
On 9 December 2023, trilogue negotiations on the EU’s Artificial Intelligence (“AI”) Act reached a key inflection point, with a provisional political agreement reached between the European Parliament and Council. As we wait for the consolidated legislative text to be finalised and formally approved, below we set out the key points businesses need to know about the political deal and what comes next.
This is the first part of our four-part series on using synthetic data to train AI models. See here for Parts 2, 3, and 4.
The recent rapid advancement of Artificial Intelligence (“AI”) has revolutionized creation and learning patterns. Generative AI (“GenAI”) systems have unveiled unprecedented capabilities, pushing the boundaries of what we thought possible. Yet, beneath the surface of AI’s transformative potential lies a complex legal web of intellectual property (“IP”) risks, particularly concerning the use of “real-world” training data, which, if not appropriately sourced, may give rise to alleged infringement of third-party IP rights.
EU legislators are moving forward with new legislation on regulating the access to and use of data generated through use…
On October 30, 2023, the G7 Leaders published a Statement on the Hiroshima Artificial Intelligence (“AI”) Process (the “Statement”).[1] This follows the G7 Summit in May, where the leaders agreed on the need to address the risks arising from rapidly evolving AI technologies. The Statement was accompanied by the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems (the “Code of Conduct”)[2] and the Hiroshima Process International Guiding Principles for Advanced AI Systems (the “Guiding Principles”).[3]
As we continue to see the rapid development of digital technologies, such as artificial intelligence (“AI”) tools, legislators around the world are contemplating how best to regulate these technologies. In the UK, the Government has adopted a “pro-innovation” agenda, with the aim of making the UK “an attractive destination for R&D projects, manufacturing and investment, and ensuring [the UK] can realise the economic and social benefits of new technologies as quickly as possible.”[1]
On 15 March 2023, the UK ICO published an update to its Guidance on AI and Data Protection (the “Guidance”), following requests from the UK industry to clarify requirements for fairness in artificial intelligence (“AI”). The Guidance contains advice on the interpretation of relevant data protection law as it applies to AI, and recommendations on good practice for organisational and technical measures to mitigate risks caused by AI.