The following is part of our annual publication Selected Issues for Boards of Directors in 2026.


AI adoption is now mainstream: 88% of businesses use AI in at least one function, with global spending expected to exceed $1.5 trillion in 2025 and approach $2 trillion in 2026. As organizations race to scale AI, many have relied on traditional vendor risk management policies to vet third-party AI vendors and tools; however, implementing third-party AI tools presents distinctive risks that require tailored due diligence, auditing, contracting and governance. Because businesses are accountable for the outputs generated by third-party AI tools and for vendors’ processing of prompts and other business data, boards and management should ensure that legal, IT and procurement teams apply a principled, risk-based approach to vendor management that addresses AI-specific considerations.

As states continue to grapple with establishing regulatory frameworks for the most powerful artificial intelligence (“AI”) systems, New York has joined California in targeting frontier AI models with the Responsible AI Safety and Education Act (the “RAISE Act” or the “Act”).[1] Signed into law by Governor Hochul on December 19, 2025, the Act creates a comprehensive regulatory framework for developers of the most advanced AI systems, marking New York’s entry into the vanguard of state AI safety regulation.

This article was authored by Daniel Ilan, Rahul Mukhi, Prudence Buckland, and Melissa Faragasso from Cleary Gottlieb, and Brian Lichter and Elijah Seymour from Stroz Friedberg, a LevelBlue company.

Recent disclosures by Anthropic and OpenAI highlight a pivotal shift in the cyber threat landscape: AI is no longer merely a tool that aids attackers; in some cases, it has become the attacker itself. Together, these incidents illustrate immediate implications for corporate governance, contracting and security programs as companies integrate AI into their business systems. Below, we explain how these attacks were orchestrated and what steps businesses should consider given the rising cyber risks associated with the adoption of AI.

On July 8, when its 90-day limited enforcement policy expires, the U.S. Department of Justice (“DOJ”) is scheduled to begin full enforcement of its Data Security Program (“DSP”) and the recently issued Bulk Data Rule, ushering in “full compliance” requirements for U.S. companies and individuals.[1]

On 3 February 2025, the European Commission (“EC”) published an updated version of its frequently asked questions (“FAQs”) on the EU Data Act.[1] The Data Act, which is intended to make data more accessible to users of IoT devices in the EU, entered into force on 11 January 2024 and will become generally applicable as of 12 September 2025.

On 15 January 2024, the UK Information Commissioner’s Office (“ICO”)[1] launched a series of public consultations on the applicability of data protection laws to the development and use of generative artificial intelligence (“GenAI”). The ICO is seeking comments from “all stakeholders with an interest in GenAI”, including developers, users, legal advisors and consultants.[2]

On 15 March 2023, the UK ICO published an update to its Guidance on AI and Data Protection (the “Guidance”), following requests from UK industry to clarify requirements for fairness in artificial intelligence (“AI”). The Guidance contains advice on the interpretation of relevant data protection law as it applies to AI, and recommendations on good practice for organisational and technical measures to mitigate the risks posed by AI.