Last week, a divided Supreme Court held in Warner Chappell Music, Inc. et al. v. Nealy et al. that a copyright plaintiff who timely files an infringement lawsuit based on the “discovery rule” may recover damages for infringements that occurred outside the Copyright Act’s three-year statute of limitations period.[1]  A claim generally accrues when an infringing act occurs, but many circuits apply a “discovery rule,” under which a claim accrues when a plaintiff has discovered (or with reasonable diligence should have discovered) the infringement, which could be many years later.  Courts applying this rule have recently disagreed on how far back damages are available, with the Second Circuit holding that a copyright claimant may recover only three years of damages, even if the suit was otherwise timely under the discovery rule.  The Supreme Court rejected that conclusion, holding that “no such limit on damages exists” in the Copyright Act, which “entitles a copyright owner to recover damages for any timely claim” no matter when the infringement occurred.

Last week the Fourth Circuit reversed a $1 billion copyright verdict against an internet service provider and ordered a new trial on damages allegedly arising from illegal music downloads by its subscribers.  In Sony Music Entertainment et al. v. Cox Communications Inc. et al.,[1] a group of music producers belonging to the Recording Industry Association of America brought suit against Cox for contributory and vicarious copyright infringement based on allegations that Cox induced and encouraged rampant infringement on its service.  In 2019, a jury found Cox liable on both theories for infringement of 10,017 copyrighted works and awarded $99,830.29 per work, for a total of $1 billion in statutory damages.  On appeal, the Fourth Circuit issued a mixed ruling – upholding the finding of contributory infringement but reversing the vicarious liability verdict and remanding for a new trial on damages. 

The United States Patent and Trademark Office (“USPTO”) issued guidance on February 13, 2024 (the “Guidance”) regarding the patentability of inventions created or developed with the assistance of artificial intelligence (“AI”), a novel issue on which the USPTO has been seeking input from various public and private stakeholders over the past few years.  President Biden mandated the issuance of such Guidance in his October 2023 executive order on AI (see our prior alert here).[1]  The Guidance aims to clarify how patent examiners will evaluate patent applications involving AI-assisted inventions, and it reaffirms the existing jurisprudence that only natural persons, not AI tools, can be listed as inventors.  However, the Guidance clarifies that AI-assisted inventions are not automatically ineligible for patent protection so long as one or more natural persons “significantly contributed” to the invention.  Overall, the Guidance underscores the need for a balanced approach to inventorship that acknowledges both technological advancements and human innovation.  The USPTO is seeking public feedback on the Guidance; comments are due by May 13, 2024.

This week saw yet another California federal court dismiss copyright and related claims arising out of the training and output of a generative AI model in Tremblay v. OpenAI, Inc.,[1] a putative class action filed on behalf of a group of authors alleging that OpenAI infringed their copyrighted literary works by using them to train ChatGPT.[2]  OpenAI moved to dismiss all claims against it, save the claim for direct copyright infringement, and the court largely sided with OpenAI.

The rapid development of AI is introducing new opportunities and challenges to dispute resolution. AI is already impacting the document review and production process, legal research, and the drafting of court submissions. The use of AI is expected to expand into other areas, including predicting case outcomes and adjudicating disputes. However, the use of AI in litigation also carries risks, as highlighted by a recent First-tier Tribunal (Tax) decision, in which an appellant had sought to rely on precedent authorities that, in fact, were fabricated by AI (a known risk of AI built on large language models, referred to as hallucination).[1] While, in this particular case, no further consequences seemed to follow (in light of the fact that the appellant, a litigant in person, “had been unaware that the AI cases were not genuine and that she did not know how to check their validity”[2]), the Tribunal did highlight that “providing authorities which are not genuine and asking a court or tribunal to rely on them is a serious and important issue”,[3] suggesting that litigants may incur certain risks by relying on authorities suggested by AI, unless these are independently verified. On 12 December 2023, a group of senior judges, including the Master of the Rolls and the Lady Chief Justice, issued guidance on AI for judicial office holders, which, amongst other things, discourages the use of AI for legal research and analysis and highlights the risk of AI being relied on by litigants to provide legal advice and/or to produce evidence.[4]

Quantum technology is seen as having the potential to revolutionize many areas of technology, the economy and society, including the financial sector. At the same time, it poses a significant threat to cybersecurity, particularly because it has the potential to render most current encryption schemes obsolete.

In Punchbowl, Inc. v. AJ Press, Inc., the Ninth Circuit revived a trademark infringement case previously dismissed on grounds that the First Amendment shields “expressive” uses of trademarks from Lanham Act liability unless the plaintiff can show the use (1) has no artistic relevance to the underlying work, or (2) explicitly misleads as to its source.[1]  This is known as the Rogers test, and it effectively operates as a shield to trademark liability where it applies.  Last year, the Supreme Court limited application of the Rogers test in Jack Daniel’s Properties, Inc. v. VIP Products LLC,[2] holding that it does not apply where the challenged use of a trademark is to identify the source of the defendant’s goods or services.  In those instances, a traditional likelihood of confusion or dilution analysis is required.

The following post was originally included as part of our recently published memorandum “Selected Issues for Boards of Directors in 2024”.

Artificial intelligence (AI), and generative AI in particular, will continue to be an issue in the year to come, as new laws and regulations, agency guidance, continuing and additional litigation over AI, and new AI-related partnerships prompt headlines and require companies to continually think about these issues.

The following post was originally included as part of our recently published memorandum “Selected Issues for Boards of Directors in 2024”.

Artificial intelligence (AI) was the biggest technology news of 2023. AI continues to revolutionize business in big and small ways, ranging from disrupting entire business models to making basic support functions more efficient. Observers have rightly focused on the plentiful value-creation opportunities this new technology affords. Less attention has been given to the risks AI creates for boards and management teams, which call for sophisticated governance, operational and risk perspectives. This article identifies key areas of risk and offers suggestions for mitigation on the road to realizing the enormous benefits AI promises.