As inventors, attorneys and patent examiners grapple with the impacts of AI on patents, the United States Patent and Trademark Office (the “USPTO”) has released guidance concerning the subject matter patent eligibility of inventions that relate to AI technology.[1] The impetus for this guidance was President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which directed the USPTO to issue guidance to patent examiners and applicants on patent subject matter eligibility to address innovation in AI and other critical and emerging technologies.
USPTO Issues “Significant” Guidance on Patentability of AI-Assisted Inventions, but unlike USCO, Does Not Require Disclosure of AI Involvement
The United States Patent and Trademark Office (“USPTO”) issued guidance on February 13, 2024 (the “Guidance”) regarding the patentability of inventions created or developed with the assistance of artificial intelligence (“AI”), a novel issue on which the USPTO has been seeking input from various public and private stakeholders over the past few years. President Biden mandated the issuance of such Guidance in his executive order on AI (see our prior alert here)[1] in October 2023. The Guidance aims to clarify how patent applications involving AI-assisted inventions will be examined by patent examiners, and reaffirms the existing jurisprudence maintaining that only natural persons, not AI tools, can be listed as inventors. However, the Guidance clarifies that AI-assisted inventions are not automatically ineligible for patent protection so long as one or more natural persons “significantly contributed” to the invention. Overall, the Guidance underscores the need for a balanced approach to inventorship that acknowledges both technological advancements and human innovation. The USPTO is seeking public feedback on the Guidance, which is due by May 13, 2024.
Training AI models on Synthetic Data: No silver bullet for IP infringement risk in the context of training AI systems (Part 4 of 4)
Citing Jack Daniel’s, the Ninth Circuit Reverses Itself and Clarifies the Test for Expressive Trademarks
In Punchbowl, Inc. v. AJ Press, Inc., the Ninth Circuit revived a trademark infringement case previously dismissed on the grounds that the First Amendment shields “expressive” trademarks from Lanham Act liability unless the plaintiff can show the mark (1) has no artistic relevance to the underlying work, or (2) explicitly misleads as to its source.[1] This is known as the Rogers test, and it effectively operates as a shield to trademark liability where it applies. Last year, the Supreme Court limited application of the Rogers test in Jack Daniel’s Properties, Inc. v. VIP Products LLC,[2] holding that it does not apply where the challenged use of a trademark is to identify the source of the defendant’s goods or services. In those instances, a traditional likelihood-of-confusion or dilution analysis is required.
California District Court Rejects Infringement Claim Brought Over 2019 Film Ad Astra
On November 28, 2023, U.S. District Judge Fred W. Slaughter of the Central District of California granted motions for summary judgment against a screenwriter’s claims that the creation of Ad Astra, the 2019 Brad Pitt film, had infringed a script he had written.[1] The Court reasoned that the defendant companies could not possibly have copied the script in question, as they did not have access to it until after Ad Astra was written. Additionally, the Court stated, the two works were sufficiently different to conclude there was no infringement, even if access could have been shown.
Federal Circuit Ruling Underscores the “Future Affiliate” Trap in Licensing Agreements
In an opinion issued December 4, 2023, the U.S. Court of Appeals for the Federal Circuit[1] reversed a lower court’s denial of Intel Corporation’s (“Intel’s”) motion for leave to amend its answer to assert a new license defense in a patent infringement suit brought by VLSI Technology LLC (“VLSI”). The decision paves the way for Intel to make the case that it received a license to VLSI’s patents when a company that Intel had an existing license with became affiliated with VLSI due to its acquisition by an investment management firm.
Training AI models on Synthetic Data: No silver bullet for IP infringement risk in the context of training AI systems (Part 2 of 4)
This second part of our four-part series on using synthetic data to train AI models explores how the use of synthetic data training sets may mitigate copyright infringement risks under EU law.
Training AI models on Synthetic Data: No silver bullet for IP infringement risk in the context of training AI systems (Part 1 of 4)
This is the first part of a series on using synthetic data to train AI models. See here for Parts 2, 3, and 4.
The recent rapid advancements of Artificial Intelligence (“AI”) have revolutionized creation and learning patterns. Generative AI (“GenAI”) systems have unveiled unprecedented capabilities, pushing the boundaries of what we thought possible. Yet, beneath the surface of the transformative potential of AI lies a complex legal web of intellectual property (“IP”) risks, particularly concerning the use of “real-world” training data, which may lead to alleged infringement of third-party IP rights if AI training data is not appropriately sourced.
Copyright Office Considers New DMCA Carveout for AI Anti-Bias Research
On October 19, 2023, the U.S. Copyright Office announced in the Federal Register that it will consider a proposed exemption to the Digital Millennium Copyright Act’s (“DMCA”) anti-circumvention provisions, which prohibit the circumvention of any technological measures used to prevent unauthorized access to copyrighted works. The exemption would allow those researching bias in artificial intelligence (“AI”) to bypass any technological measures that limit the use of copyrighted generative AI models.
G7 Leaders Publish AI Code of Conduct: A Common Thread in the Patchwork of Emerging AI Regulations Globally?
On October 30, 2023, the G7 Leaders published a Statement on the Hiroshima Artificial Intelligence (“AI”) Process (the “Statement”).[1] This follows the G7 Summit in May, where the leaders agreed on the need to address the risks arising from rapidly evolving AI technologies. The Statement was accompanied by the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems (the “Code of Conduct”)[2] and the Hiroshima Process International Guiding Principles for Advanced AI Systems (the “Guiding Principles”).[3]