
The following is part of our annual publication Selected Issues for Boards of Directors in 2026.


AI adoption is now mainstream: 88% of businesses use AI in at least one function, with global spending expected to exceed $1.5 trillion in 2025 and approach $2 trillion in 2026. As organizations race to scale AI, many have relied upon traditional vendor risk management policies to vet third-party AI vendors and tools; however, implementation of third-party AI tools presents distinctive risks that require tailored due diligence, auditing, contracting and governance. Because businesses are accountable for outputs generated by third-party AI tools and for vendors’ processing of prompts and other business data, boards and management should ensure legal, IT and procurement teams apply a principled, risk-based approach to vendor management that addresses AI‑specific considerations.

This article was authored by Daniel Ilan, Rahul Mukhi, Prudence Buckland, and Melissa Faragasso from Cleary Gottlieb, and Brian Lichter and Elijah Seymour from Stroz Friedberg, a LevelBlue company.

Recent disclosures by Anthropic and OpenAI highlight a pivotal shift in the cyber threat landscape: AI is no longer merely a tool that aids attackers; in some cases, it has become the attacker itself. Together, these incidents have immediate implications for corporate governance, contracting and security programs as companies integrate AI with their business systems. Below, we explain how these attacks were orchestrated and what steps businesses should consider given the rising cyber risks associated with the adoption of AI.

On September 29, 2025, Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (TFAIA, SB 53 or the Act)[1], establishing a comprehensive framework for transparency, safety and accountability in the development and deployment of the most advanced artificial intelligence models. Building upon existing California laws targeting AI, such as AB 2013[2], the Act, which takes effect January 1, 2026 and imposes penalties of up to $1 million per violation, creates immediate compliance obligations for developers of the most powerful frontier AI models.

The following is part of our annual publication Selected Issues for Boards of Directors in 2025.


Deployment of generative AI expanded rapidly across many industries in 2024, leading to broadly increased productivity, return on investment and other benefits. At the same time, AI was also a focus for lawmakers, regulators and courts. There are currently 27 active generative AI litigation cases in the U.S., nearly all of which involve copyright claims. Numerous state legislatures have considered AI regulation, and Colorado became the first state to pass a law creating a broad set of obligations for certain developers and deployers of AI.

On July 26, 2024, the National Institute of Standards and Technology (“NIST”) released its Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (the “Profile”),[1] laying out more than 200 suggested actions to mitigate the risks of generative artificial intelligence (“Gen AI”). The Profile is a companion to NIST’s Artificial Intelligence Risk Management Framework (the “Framework”), released in January 2023.[2] The Framework aims to serve as a resource for entities dealing with all manner of Gen AI systems, helping them manage risks and promote trustworthy and responsible development of AI. The Profile is intended as an implementation of the Framework, providing concrete steps to manage AI risks.

As inventors, attorneys and patent examiners grapple with the impacts of AI on patents, the United States Patent and Trademark Office (the “USPTO”) has released guidance concerning the subject matter eligibility of inventions that relate to AI technology.[1] The impetus for this guidance was President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which directed the USPTO to issue guidance to patent examiners and applicants regarding patent subject matter eligibility in order to address innovation in AI and other critical and emerging technologies.

Yesterday, the Supreme Court denied certiorari in Hearst Newspapers, LLC v. Martinelli, declining to determine whether the “discovery rule” applies in Copyright Act infringement cases and under what circumstances.  As a result, most circuits will continue to apply the rule to determine when an infringement claim accrues for purposes of applying the Copyright Act’s three-year statute of limitations.

The following post was originally included as part of our recently published memorandum “Selected Issues for Boards of Directors in 2024”.

Artificial intelligence (AI), and generative AI in particular, will remain a central issue in the year to come: new laws and regulations, agency guidance, continuing and additional AI litigation, and new AI-related partnerships will prompt headlines and require companies to continually revisit how they manage this technology.


Artificial intelligence (AI) was the biggest technology news of 2023. AI continues to revolutionize business in big and small ways, ranging from disrupting entire business models to making basic support functions more efficient. Observers have rightly focused on the plentiful value-creation opportunities this new technology affords. Less attention has been given to the risks AI creates for boards and management teams, which call for sophisticated governance, operational and risk perspectives. This article identifies key areas of risk and offers suggestions for mitigation on the road to realizing the enormous benefits AI promises.

In an opinion issued December 4, 2023, the U.S. Court of Appeals for the Federal Circuit[1] reversed a lower court’s denial of Intel Corporation’s (“Intel’s”) motion for leave to amend its answer to assert a new license defense in a patent infringement suit brought by VLSI Technology LLC (“VLSI”). The decision paves the way for Intel to argue that it received a license to VLSI’s patents when a company with which Intel had an existing license became affiliated with VLSI through its acquisition by an investment management firm.