In Punchbowl, Inc. v. AJ Press, Inc., the Ninth Circuit revived a trademark infringement case previously dismissed on the grounds that the First Amendment shields “expressive” trademarks from Lanham Act liability unless the plaintiff can show the mark (1) has no artistic relevance to the underlying work, or (2) explicitly misleads as to its source.[1]  This is known as the Rogers test, and it effectively operates as a shield against trademark liability where it applies.  Last year, the Supreme Court limited the application of the Rogers test in Jack Daniel’s Properties, Inc. v. VIP Products LLC,[2] holding that it does not apply where the challenged use of a trademark is to identify the source of the defendant’s goods or services.  In those instances, a traditional likelihood-of-confusion or dilution analysis is required.

The following post was originally included as part of our recently published memorandum “Selected Issues for Boards of Directors in 2024”.

Artificial Intelligence (AI), and in particular generative AI, will continue to be a central issue in the year to come, as new laws and regulations, agency guidance, ongoing and additional AI-related litigation, and new AI partnerships prompt headlines and require companies to think continually about these issues.

The following post was originally included as part of our recently published memorandum “Selected Issues for Boards of Directors in 2024”.

Artificial intelligence (AI) was the biggest technology news of 2023. AI continues to revolutionize business in big and small ways, ranging from disrupting entire business models to making basic support functions more efficient. Observers have rightly focused on the plentiful value-creation opportunities this new technology affords. Less attention has been given to the risks AI creates for boards and management teams, which call for sophisticated governance, operational and risk perspectives. This article identifies key areas of risk and offers suggestions for mitigation on the road to realizing the enormous benefits AI promises.

On 15 January 2024, the UK Information Commissioner’s Office (“ICO”)[1] launched a series of public consultations on the applicability of data protection laws to the development and use of generative artificial intelligence (“GenAI”). The ICO is seeking comments from “all stakeholders with an interest in GenAI”, including developers, users, legal advisors and consultants.[2]

This third part of our four-part series on using synthetic data to train AI models explores the interplay between synthetic data training sets, the EU Copyright Directive and the forthcoming EU AI Act.

On November 28, 2023, U.S. District Judge Fred W. Slaughter of the Central District of California granted motions for summary judgment against a screenwriter’s claims that the creation of Ad Astra, the 2019 Brad Pitt film, infringed a script he had written.[1]  The Court reasoned that the defendant companies could not have copied the script in question, as they did not have access to it until after Ad Astra was written.  Additionally, the court stated, the two works were sufficiently different to conclude there was no infringement, even if access could be shown.

In an opinion issued December 4, 2023, the U.S. Court of Appeals for the Federal Circuit[1] reversed a lower court’s denial of Intel Corporation’s (“Intel’s”) motion for leave to amend its answer to assert a new license defense in a patent infringement suit brought by VLSI Technology LLC (“VLSI”).  The decision paves the way for Intel to argue that it received a license to VLSI’s patents when a company with which Intel had an existing license became affiliated with VLSI through its acquisition by an investment management firm.

This second part of our four-part series on using synthetic data to train AI models explores how the use of synthetic data training sets may mitigate copyright infringement risks under EU law.

On 9 December 2023, trilogue negotiations on the EU’s Artificial Intelligence (“AI”) Act reached a key inflection point, with the European Parliament and Council striking a provisional political agreement.  As we wait for the consolidated legislative text to be finalised and formally approved, below we set out the key points businesses need to know about the political deal and what comes next.

This is the first part of a series on using synthetic data to train AI models. See here for Parts 2, 3, and 4.

The recent rapid advancements in Artificial Intelligence (“AI”) have revolutionized patterns of creation and learning. Generative AI (“GenAI”) systems have unveiled unprecedented capabilities, pushing the boundaries of what we thought possible. Yet, beneath the surface of AI’s transformative potential lies a complex legal web of intellectual property (“IP”) risks, particularly concerning the use of “real-world” training data, which may lead to alleged infringement of third-party IP rights if AI training data is not appropriately sourced.