Last week a Georgia state court granted summary judgment in favor of OpenAI, ending a closely watched defamation lawsuit over false information—sometimes called “hallucinations”—generated by its generative AI product, ChatGPT.  The plaintiff, Mark Walters, is a nationally syndicated radio host and prominent gun rights advocate who sued OpenAI after ChatGPT produced output incorrectly stating that he had been accused of embezzlement in a lawsuit filed by the Second Amendment Foundation (“SAF”).  Walters is not, and never was, a party to that case. 

On 3 February 2025, the European Commission (“EC”) published an updated version of its frequently asked questions (“FAQs”) on the EU Data Act.[1]  The Data Act, which is intended to make data more accessible to users of IoT devices in the EU, entered into force on 11 January 2024 and will become generally applicable as of 12 September 2025.

The following is part of our annual publication Selected Issues for Boards of Directors in 2025.


Deployment of generative AI expanded rapidly across many industries in 2024, leading to broadly increased productivity, return on investment and other benefits. At the same time, AI was also a focus for lawmakers, regulators and courts. There are currently 27 active generative AI litigation cases in the U.S., nearly all of which involve copyright claims. Numerous state legislatures have considered AI regulation, and Colorado became the first, and thus far the only, state to pass a law creating a broad set of obligations for certain developers and deployers of AI.

On 5 September 2024, the EU, UK and US joined seven other states[1] in signing the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (“Treaty”) – the first international treaty governing the safe use of artificial intelligence (“AI”).[2] The Treaty remains subject to ratification, acceptance or approval by each signatory and will enter into force on the first day of the month following a period of three months after the date on which five signatories, including at least three Council of Europe member states, have ratified it. Any state worldwide is eligible to join the Treaty, subject to the unanimous approval of the signatories, and to commit to complying with its provisions. The Treaty is expected to have a positive impact on international cooperation in addressing AI-related risks.

On July 26, 2024, the National Institute of Standards and Technology (“NIST”) released its Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (the “Profile”),[1] laying out more than 200 suggested actions to mitigate the risks of generative artificial intelligence (“Gen AI”). The Profile is a companion to NIST’s Artificial Intelligence Risk Management Framework (the “Framework”), which was released in January 2023.[2] The Framework aims to act as a resource for entities dealing with all manner of AI systems to help them manage risks and promote trustworthy and responsible development of AI. The Profile is intended as an implementation of the Framework, providing concrete steps to manage Gen AI risks.

As inventors, attorneys and patent examiners grapple with the impacts of AI on patents, the United States Patent and Trademark Office (the “USPTO”) has released guidance concerning the patent subject matter eligibility of inventions that relate to AI technology.[1] The impetus for this guidance was President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which directed the USPTO to issue guidance to patent examiners and applicants regarding patent subject matter eligibility to address innovation in AI and other critical and emerging technologies.

On 2 July, the French data protection supervisory authority – the Commission Nationale de l’Informatique et des Libertés (“CNIL”) – launched a new public consultation on the development of AI systems. The consultation covers (i) a new series of how-to sheets providing clarifications and recommendations on seven issues relating to the development of AI and data protection, and (ii) a questionnaire on applying the GDPR to AI models trained with personal data. Below we set out a summary of the main takeaways.