As states continue to grapple with establishing regulatory frameworks for the most powerful artificial intelligence (“AI”) systems, New York has joined California in targeting frontier AI models with the Responsible AI Safety and Education Act (the “RAISE Act” or the “Act”).[1] Signed into law by Governor Hochul on December 19, 2025, the Act creates a comprehensive regulatory framework for developers of the most advanced AI systems, marking New York’s entry into the vanguard of state AI safety regulation.

On December 11, 2025, President Donald Trump signed an executive order titled Establishing A National Policy Framework For Artificial Intelligence (the “Order”).[1] The Order’s stated policy objective is to “enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI”[2] and it comes after Congress considered, but did not advance, federal legislation earlier this year that would have preempted state AI regulation. The Order justifies federal intervention on three grounds:

On September 29, 2025, Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (TFAIA, SB 53 or the Act),[1] establishing a comprehensive framework for transparency, safety and accountability in the development and deployment of the most advanced artificial intelligence models. Building upon existing California laws targeting AI, such as AB 2013,[2] the Act, which takes effect on January 1, 2026 and imposes penalties of up to $1 million per violation, creates immediate compliance obligations for developers of the most powerful frontier AI models.

On July 8, the U.S. Department of Justice (“DOJ”) is scheduled to begin full enforcement of its Data Security Program (“DSP”) and the recently issued Bulk Data Rule after the DOJ’s 90-day limited enforcement policy expires, ushering in “full compliance” requirements for U.S. companies and individuals.[1]

Last week a Georgia state court granted summary judgment in favor of OpenAI, ending a closely watched defamation lawsuit over false information—sometimes called “hallucinations”—generated by its generative AI product, ChatGPT.  The plaintiff, Mark Walters, is a nationally syndicated radio host and prominent gun rights advocate who sued OpenAI after ChatGPT produced output incorrectly stating that he had been accused of embezzlement in a lawsuit filed by the Second Amendment Foundation (“SAF”).  Walters is not, and never was, a party to that case. 

On 5 September 2024, the EU, UK and US joined seven other states[1] in signing the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (“Treaty”) – the first international treaty governing the safe use of artificial intelligence (“AI”).[2] The Treaty remains subject to ratification, acceptance or approval by each signatory and will enter into force on the first day of the month following a period of three months after the date on which five signatories, including at least three Council of Europe member states, have ratified it. Any state worldwide is eligible to join the Treaty, subject to the unanimous approval of the signatories, and to commit to complying with its provisions. The Treaty is expected to have a positive impact on international cooperation in addressing AI-related risks.

On July 26, the National Institute of Standards and Technology (“NIST”) released its Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (the “Profile”),[1] laying out more than 200 suggested actions to mitigate the risks of generative artificial intelligence (“Gen AI”).  The Profile is a companion to NIST’s Artificial Intelligence Risk Management Framework (the “Framework”), which was released in January 2023.[2]  The Framework aims to act as a resource for entities dealing with all manner of AI systems, helping them manage risks and promote trustworthy and responsible development of AI.  The Profile is intended to be an implementation of the Framework, providing concrete steps to manage AI risks.

As inventors, attorneys and patent examiners grapple with the impacts of AI on patents, the United States Patent and Trademark Office (the “USPTO”) has released guidance concerning the patent subject matter eligibility of inventions that relate to AI technology.[1]  The impetus for this guidance was President Biden’s Executive Order on Safe, Secure and Trustworthy Development and Use of Artificial Intelligence, which directed the USPTO to issue guidance to patent examiners and applicants regarding patent subject matter eligibility in order to address innovation in AI and other critical and emerging technologies.

This week, a federal court in Tennessee transferred to California a lawsuit brought by several large music publishers against a California-based AI company, Anthropic PBC. Plaintiffs in Concord Music Group et al. v. Anthropic PBC[1] allege that Anthropic infringed the music publishers’ copyrights by improperly using copyrighted song lyrics to train Claude, its generative AI model.  The music publishers asserted not only direct copyright infringement based on this training, but also contributory and vicarious infringement based on user-prompted outputs, as well as a violation of Section 1202(b) of the Digital Millennium Copyright Act for allegedly removing plaintiffs’ copyright management information from copies of the lyrics.  On November 16, 2023, the music publishers also filed a motion for a preliminary injunction that would require Anthropic to implement effective “guardrails” in its Claude AI models to prevent outputs that infringe plaintiffs’ copyrighted lyrics and preclude Anthropic from creating or using unauthorized copies of those lyrics to train future AI models.