On July 26, 2024, the National Institute of Standards and Technology (“NIST”) released its Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (the “Profile”),[1] laying out more than 200 suggested actions to mitigate the risks of generative artificial intelligence (“Gen AI”). The Profile is a companion to NIST’s Artificial Intelligence Risk Management Framework (the “Framework”), released in January 2023.[2] The Framework serves as a resource for entities dealing with all manner of AI systems, helping them manage risks and promoting the trustworthy and responsible development of AI. The Profile is intended as an implementation of the Framework, providing concrete steps to manage the risks specific to Gen AI.
The AI Act has been published in the EU Official Journal
1. Background: three years of legislative debate
Today, July 12, 2024, Regulation (EU) 2024/1689 laying down harmonized rules on artificial intelligence (“Regulation” or “AI Act”) was finally published in the EU Official Journal and will enter into force on August 1, 2024. This milestone is the culmination of three years of legislative debate since the EU Commission’s first proposal for a comprehensive EU regulation on AI in April 2021.[1]
Fourth Circuit Vacates $1 Billion Damages Award in Music Piracy Lawsuit
Last week, the Fourth Circuit vacated a $1 billion copyright damages award against an internet service provider and ordered a new trial on damages allegedly arising from illegal music downloads by the provider’s subscribers. In Sony Music Entertainment et al. v. Cox Communications Inc. et al.,[1] a group of music producers belonging to the Recording Industry Association of America sued Cox for contributory and vicarious copyright infringement, alleging that Cox induced and encouraged rampant infringement on its service. In 2019, a jury found Cox liable on both theories for infringement of 10,017 copyrighted works and awarded $99,830.29 per work, for a total of $1 billion in statutory damages. On appeal, the Fourth Circuit issued a mixed ruling: it upheld the finding of contributory infringement but reversed the vicarious liability verdict and remanded for a new trial on damages.
Nexus of AI, AI Regulation and Dispute Resolution
The rapid development of AI is introducing new opportunities and challenges to dispute resolution. AI is already affecting document review and production, legal research, and the drafting of court submissions, and its use is expected to expand into other areas, including predicting case outcomes and adjudicating disputes. However, the use of AI in litigation also carries risks, as highlighted by a recent First-tier Tribunal (Tax) decision in which an appellant had sought to rely on precedent authorities that were, in fact, fabricated by AI (a known risk with AI built on large language models, referred to as hallucination).[1] In that particular case, no further consequences appeared to follow, because the appellant, a litigant in person, “had been unaware that the AI cases were not genuine and that she did not know how to check their validity”.[2] The Tribunal nonetheless stressed that “providing authorities which are not genuine and asking a court or tribunal to rely on them is a serious and important issue”,[3] suggesting that litigants may incur certain risks by relying on authorities suggested by AI unless these are independently verified. On 12 December 2023, a group of senior judges, including the Master of the Rolls and the Lady Chief Justice, issued guidance on AI for judicial office holders which, amongst other things, discourages the use of AI for legal research and analysis and highlights the risk of litigants relying on AI to obtain legal advice and/or to produce evidence.[4]
Training AI models on Synthetic Data: No silver bullet for IP infringement risk in the context of training AI systems (Part 4 of 4)
Quantum Computing and the Financial Sector: World Economic Forum Lays Out Roadmap Towards Quantum Security
Quantum technology is seen as having the potential to revolutionize many aspects of technology, the economy, and society, including the financial sector. At the same time, it poses a significant threat to cybersecurity, in particular because a sufficiently powerful quantum computer could render most widely used public-key encryption schemes obsolete.
Training AI models on Synthetic Data: No silver bullet for IP infringement risk in the context of training AI systems (Part 2 of 4)
This second part of our four-part series on using synthetic data to train AI models explores how the use of synthetic data training sets may mitigate copyright infringement risks under EU law.
Training AI models on Synthetic Data: No silver bullet for IP infringement risk in the context of training AI systems (Part 1 of 4)
This is the first part of a four-part series on using synthetic data to train AI models. See here for Parts 2, 3, and 4.
Recent rapid advances in Artificial Intelligence (“AI”) have transformed how content is created and how machines learn. Generative AI (“GenAI”) systems have unveiled unprecedented capabilities, pushing the boundaries of what we thought possible. Yet beneath AI’s transformative potential lies a complex web of legal and intellectual property (“IP”) risks, particularly concerning the use of “real-world” training data, which may give rise to claims of third-party IP infringement if that data is not appropriately sourced.
EU’s Approach to Data and AI Continues to Evolve
EU legislators are moving forward with new legislation regulating access to and use of data generated through use…
Copyright Office Considers New DMCA Carveout for AI Anti-Bias Research
On October 19, 2023, the U.S. Copyright Office announced in the Federal Register that it will consider a proposed exemption to the Digital Millennium Copyright Act’s (“DMCA”) anti-circumvention provisions, which prohibit circumventing technological measures used to prevent unauthorized access to copyrighted works. The exemption would allow researchers studying bias in artificial intelligence (“AI”) to bypass technological measures that limit the use of copyrighted generative AI models.