On 2 July, the French data protection supervisory authority – Commission Nationale de l’Informatique et des Libertés (CNIL) – launched a new public consultation on the development of AI systems. The consultation covers (i) a new series of how-to sheets offering clarifications and recommendations on seven issues at the intersection of AI development and data protection and (ii) a questionnaire on applying the GDPR to AI models trained with personal data. Below we set out a summary of the main takeaways.

The United States Patent and Trademark Office (“USPTO”) issued guidance on February 13, 2024 (the “Guidance”) regarding the patentability of inventions created or developed with the assistance of artificial intelligence (“AI”), a novel issue on which the USPTO has been seeking input from various public and private stakeholders over the past few years.  President Biden mandated the issuance of such Guidance in his executive order on AI (see our prior alert here)[1] in October 2023.  The Guidance aims to clarify how patent applications involving AI-assisted inventions will be examined by patent examiners, and reaffirms the existing jurisprudence maintaining that only natural persons, not AI tools, can be listed as inventors.  However, the Guidance clarifies that AI-assisted inventions are not automatically ineligible for patent protection so long as one or more natural persons “significantly contributed” to the invention.  Overall, the Guidance underscores the need for a balanced approach to inventorship that acknowledges both technological advancements and human innovation.  The USPTO is seeking public feedback on the Guidance, which is due by May 13, 2024.

Quantum technology is seen as having the potential to revolutionize many aspects of technology, the economy and society, including the financial sector. At the same time, this technology represents a significant threat to cybersecurity, especially due to its potential to render most current encryption schemes obsolete.

The following post was originally included as part of our recently published memorandum “Selected Issues for Boards of Directors in 2024”.

Artificial intelligence (AI), and in particular generative AI, will remain a key issue in the year to come: new laws and regulations, agency guidance, ongoing and additional AI-related litigation, and new AI-related partnerships will continue to prompt headlines and require companies to keep these issues under continual review.

On October 30, 2023, the Biden Administration issued a landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “Order”), directing the establishment of new standards for artificial intelligence (“AI”) safety and security and laying the foundation to protect Americans’ privacy and civil rights, support American workers, and promote responsible innovation, competition and collaboration, while advancing America’s role as a world leader with respect to AI.

On October 19, 2023, the U.S. Copyright Office announced in the Federal Register that it will consider a proposed exemption to the Digital Millennium Copyright Act’s (“DMCA”) anti-circumvention provisions, which prohibit the circumvention of any technological measures used to prevent unauthorized access to copyrighted works.  The exemption would allow those researching bias in artificial intelligence (“AI”) to bypass any technological measures that limit the use of copyrighted generative AI models.

On September 6, 2023, California Governor Gavin Newsom signed Executive Order N-12-23 (the “Executive Order”) relating to the use of generative artificial intelligence (“GenAI”) by the State, as well as the preparation of certain reports assessing the equitable use of GenAI in the public sector.  The Executive Order instructs State agencies to examine the potential risks inherent in the use of GenAI and creates a blueprint for public sector implementation of GenAI tools in the near future.  The Executive Order indicates that California anticipates expanding the role that GenAI tools play in aiding State agencies to achieve their missions, while simultaneously ensuring that these State agencies identify and study any negative effects that the implementation of GenAI tools might have on residents of the State.  The Executive Order covers a number of areas, including:

The U.S. District Court for the District of Columbia recently affirmed a decision by the U.S. Copyright Office (“USCO”) in which the USCO denied an application to register a work authored entirely by an artificial intelligence program.  The case, Thaler v. Perlmutter, challenging U.S. copyright law’s human authorship requirement, is the first of its kind in the United States, but will certainly not be the last, as questions regarding the originality and protectability of works created by generative AI (“GenAI”) continue to arise.  The court in Thaler focused on the fact that the work at issue had no human authorship, setting a clear rule for one end of the spectrum.  As the court recognized, the more difficult questions yet to be addressed include how much human input is required for a user to qualify as the creator of a work eligible for copyright protection.

As we continue to see the rapid development of digital technologies, such as artificial intelligence (“AI”) tools, legislators around the world are contemplating how best to regulate these technologies.  In the UK, the Government has adopted a “pro-innovation” agenda, with the aim of making the UK “an attractive destination for R&D projects, manufacturing and investment, and ensuring [the UK] can realise the economic and social benefits of new technologies as quickly as possible.”[1]