On October 19, 2023, the U.S. Copyright Office announced in the Federal Register that it will consider a proposed exemption to the Digital Millennium Copyright Act’s (“DMCA”) anti-circumvention provisions, which prohibit the circumvention of any technological measures used to prevent unauthorized access to copyrighted works.  The exemption would allow those researching bias in artificial intelligence (“AI”) to bypass any technological measures that limit the use of copyrighted generative AI models.

A key aspect of the DMCA, which was passed in 1998 to address the intersection of burgeoning advances in technology and copyright law, is its anti-circumvention provision, which prohibits users from “circumvent[ing] a technological measure that effectively controls access” to a copyrighted work[1] (e.g., books, movies, video games, computer software).  The DMCA defines “circumvention” as a user attempting to “descramble a scrambled work, to decrypt an encrypted work, or otherwise to avoid, bypass, remove, deactivate, or impair a technological measure, without the authority of the copyright owner” when there is a technological measure in place that “effectively controls access to a work.”[2]  Thus, any attempt by a user to avoid or bypass technical barriers on the use of digital copyrighted material, such as printing a non-downloadable document to PDF, is illegal under the DMCA.

The DMCA recognizes, however, that there may be specific and narrow temporary exceptions in which these anti-circumvention provisions should not apply.  These allow, for example, educators to gain access to certain digital copies of copyrighted works solely in order to determine whether to acquire a copy of that work (and only when an identical copy of that work is not reasonably available in another form).  As part of its triennial public rulemaking process, the Librarian of Congress is required to consider whether there are any classes of copyrighted works for which users are, or are likely to be in the succeeding three-year period, adversely affected by the anti-circumvention prohibition.[3]  In the ninth such rulemaking since the passage of the DMCA, the U.S. Copyright Office is currently considering whether to renew many of the existing exemptions, as well as several new proposed exemptions, including the one for AI research.

The proposed AI exemption would allow researchers to bypass technological prevention measures that control access to generative AI models, solely to examine biases in such models.  The exemption was proposed by Jonathan Weiss of Chinnu, Inc., an information technology security and consulting company, in an effort to promote fairness, accountability and openness in the AI industry and for its consumers.  As Weiss wrote in his petition, “[i]n an era where AI-driven decisions increasingly impact our daily lives, ensuring these decisions are fair and unbiased is not merely a technical necessity but a societal imperative. By granting this exemption, we can promote responsible AI research, ensuring a more equitable and secure future for all.”[4]  

The exemption would allow for sharing the research, techniques and methodologies that “expose and address” biases in AI training, with the goal of ensuring transparency within AI models and their future development.  Weiss’ petition contains three “guardrails” to prevent misuse of the exemption:

  • the exemption applies only where the “primary intention is to identify and address biases, and not to exploit them;”
  • any research “prioritize[s] data privacy, ensuring that no personal or sensitive data is compromised;” and
  • researchers should “actively engage with AI developers and stakeholders to address discovered biases.”[5]

As the Copyright Office notes, Weiss’ petition does not specify who qualifies as a “researcher” for the purposes of the exemption, nor does he outline what types of measures currently prohibit such researchers from accessing the software within generative AI models for the study of bias.  Additionally, assuming generative AI-related bias primarily originates in the training data and parameters derived from it, it is not clear that circumventing copyright protection measures (as opposed to an obligation on AI developers to disclose training materials not accessible to users, regardless of protection measures) would help achieve the stated goal.

The Copyright Office is therefore seeking comments on whether the proposed rule should be adopted.  To make its determination, the Copyright Office is seeking specific information regarding the relevant technological protection measures that may be circumvented under this exemption, and whether those measures are currently adversely affecting non-infringing uses.  The Copyright Office also seeks information regarding whether eligible users may access AI models and their software through channels that do not require circumvention.

Initial comments of support (or neutral comments providing evidentiary support) for the proposal are due December 22, 2023, with comments opposing adoption due February 20, 2024.  Reply comments will then be due March 19, 2024.  Though the proposed exemption would, if adopted, allow researchers access to copyrighted materials despite technological protection measures in place, its intention aligns with a cornerstone principle of the DMCA—promoting fairness and protecting creators while adapting to technological change in an increasingly digital society.

[1] 17 U.S.C. § 1201(a)(1)(A).

[2] 17 U.S.C. § 1201(a)(3).

[3] 17 U.S.C. § 1201(a)(1)(C).

[4] 88 Fed. Reg. 72013 (Oct. 19, 2023); the text of Weiss’ petition can be accessed here.

[5] Id.