This week saw yet another California federal court dismiss copyright and related claims arising out of the training and output of a generative AI model in Tremblay v. OpenAI, Inc.,[1] a putative class action filed on behalf of a group of authors alleging that OpenAI infringed their copyrighted literary works by using them to train ChatGPT.[2] OpenAI moved to dismiss all claims against it, save the claim for direct copyright infringement, and the court largely sided with OpenAI.

The rapid development of AI is introducing new opportunities and challenges to dispute resolution. AI is already impacting the document review and production process, legal research and the drafting of court submissions, and its use is expected to expand into other areas, including predicting case outcomes and adjudicating disputes. However, the use of AI in litigation also carries risks, as highlighted by a recent First-tier Tribunal (Tax) decision in which an appellant had sought to rely on precedent authorities that were, in fact, fabricated by AI (a known risk of AI built on large language models, referred to as hallucination).[1] While no further consequences appeared to follow in this particular case (in light of the fact that the appellant, a litigant in person, “had been unaware that the AI cases were not genuine and that she did not know how to check their validity”[2]), the Tribunal did highlight that “providing authorities which are not genuine and asking a court or tribunal to rely on them is a serious and important issue”,[3] suggesting that litigants may incur certain risks by relying on authorities suggested by AI unless these are independently verified. On 12 December 2023, a group of senior judges, including the Master of the Rolls and the Lady Chief Justice, issued guidance on AI for judicial office holders, which, amongst other things, discourages the use of AI for legal research and analysis and highlights the risk of AI being relied on by litigants to provide legal advice and/or to produce evidence.[4]

Quantum technology is seen as having the potential to revolutionize many aspects of technology, the economy and society, including the financial sector. At the same time, it poses a significant threat to cybersecurity, in particular because of its potential to render most current encryption schemes obsolete.

In Punchbowl, Inc. v. AJ Press, Inc., the Ninth Circuit revived a trademark infringement case previously dismissed on the ground that the First Amendment shields “expressive” trademarks from Lanham Act liability unless the plaintiff can show that the mark (1) has no artistic relevance to the underlying work, or (2) explicitly misleads as to its source.[1] This is known as the Rogers test, which effectively operates as a shield against trademark liability where it applies. Last year, the Supreme Court limited the application of the Rogers test in Jack Daniel’s Properties, Inc. v. VIP Products LLC,[2] holding that it does not apply where the challenged use of a trademark is to identify the source of the defendant’s goods or services. In those instances, a traditional likelihood-of-confusion or dilution analysis is required.

The following post was originally included as part of our recently published memorandum “Selected Issues for Boards of Directors in 2024”.

Artificial intelligence (AI), and in particular generative AI, will continue to be a key issue in the year to come, as new laws and regulations, agency guidance, continuing and additional AI litigation, and new AI-related partnerships prompt headlines and require companies to keep these developments under continual review.

The following post was originally included as part of our recently published memorandum “Selected Issues for Boards of Directors in 2024”.

Artificial intelligence (AI) was the biggest technology news of 2023. AI continues to revolutionize business in big and small ways, ranging from disrupting entire business models to making basic support functions more efficient. Observers have rightly focused on the plentiful value-creation opportunities this new technology affords. Less attention has been given to the risks AI creates for boards and management teams, which call for sophisticated governance, operational and risk perspectives. This article identifies key areas of risk and offers suggestions for mitigation on the road to realizing the enormous benefits AI promises.

On 15 January 2024, the UK Information Commissioner’s Office (“ICO”)[1] launched a series of public consultations on the applicability of data protection laws to the development and use of generative artificial intelligence (“GenAI”). The ICO is seeking comments from “all stakeholders with an interest in GenAI”, including developers, users, legal advisors and consultants.[2]

This third part of our four-part series on using synthetic data to train AI models explores the interplay between synthetic data training sets, the EU Copyright Directive and the forthcoming EU AI Act.

On November 28, 2023, U.S. District Judge Fred W. Slaughter of the Central District of California granted motions for summary judgment against a screenwriter’s claims that the creation of Ad Astra, the 2019 Brad Pitt film, had infringed a script he had written.[1] The court reasoned that the defendant companies could not possibly have copied the script in question, as they did not have access to it until after Ad Astra was written. Additionally, the court found the two works sufficiently different to preclude a finding of infringement, even if access could be shown.