Fair Use Week 2024: Day Five With Guest Expert Katherine Klosek

We are wrapping up the 11th Annual Fair Use Week with a write-up from Katherine Klosek, Director of Information Policy and Federal Relations at the Association of Research Libraries (ARL). Katherine reviews developing legislative proposals that attempt to mitigate “AI harm,” pointing out that the legislation might actually harm fair use and other copyright exceptions in “No Frauds, No Fakes…No Fair Use?” – Kyle K. Courtney

No Frauds, No Fakes…No Fair Use?

by Katherine Klosek

Artificial intelligence has the potential to serve as a powerful tool for libraries and other memory institutions. During a recent hearing of the Senate Rules Committee, Librarian of Congress Dr. Carla Hayden described how the Library of Congress is experimenting with machine learning models to synthesize book descriptions for blind, visually impaired, and reading-disabled communities. Meroe Park, Deputy Secretary and Chief Operating Officer at the Smithsonian Institution, talked about the Smithsonian Data Science Lab’s work to develop a new AI model that can discover and correct instances in the Institution’s collections in which women’s contributions were mistakenly attributed to men. Academic libraries also support non-generative research methodologies like text and data mining to better analyze creative works by women, gender minorities, and artists of color to explore important questions about culture and society. This technology offers untold opportunities to advance research and preserve cultural heritage.

While the promise of AI is apparent, Congress is focused on its potential harms, introducing legislation to prevent abuses like the fraudulent use of a person’s name, image, or likeness to create deepfakes. Like the prevention of mis- and disinformation broadly, preventing deceptive uses of a person’s digital likeness is a laudable goal. Unfortunately, some current legislative proposals would actually limit free speech in the process. This piece discusses why any legislation targeting deepfakes must include provisions to allow for free expression. The exceptions contained in the Copyright Act provide a useful model for such provisions.

Fair use is a First Amendment safeguard

Fair use is a First Amendment accommodation that is baked into copyright law, allowing the public to use expressive works without a copyright holder’s permission. Without fair use, the exclusive rights to reproduce, perform, transmit, distribute, display, and create derivative works would entrench a copyright monopoly in which no one but the copyright holder could use an expressive work without permission. As Public Knowledge wrote after a recent hearing on AI and right of publicity:

“Intellectual property regimes are themselves a form of speech regulation. By granting a monopoly on a specific kind of speech–in this case, speech that uses the audiovisual likeness of an individual–a publicity right bars all other individuals from engaging in that particular subset of speech. This means, for good or ill, that any IP regime must contain significant exceptions and limitations to accommodate the First Amendment. Copyright is only constitutionally sound because of a robust and flexible fair use doctrine; any federal publicity right would have to similarly allow for a broad range of speech.”

Recent proposals to regulate the use of an AI-generated likeness would create a broad new intellectual property regime, but without necessary accommodations for free speech like the robust, explicit, and flexible exceptions and limitations in the US Copyright Act.

Robust exceptions are needed in right of publicity legislation

The No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act, or the “No AI FRAUD Act” (H.R. 6943), would create a federal intellectual property right that would outlive the individual whose digital likeness and voice are protected. Anyone who publishes, performs, distributes, transmits, or otherwise makes available to the public a digital voice replica, a digital depiction, or a personalized cloning service, with the knowledge that it was not authorized by the rightsholder, would be in violation of the law. The bill effectively allows rightsholders to control a broad swath of digital content without any safety valves allowing for the exercise of free expression, such as an exception for research or scholarly use.

The No AI FRAUD Act tries to get around its free expression problem with a clause holding that the First Amendment serves as a defense. But the bill immediately limits the First Amendment defense by requiring courts to balance “the public interest in access to the use” against “the intellectual property interest in the voice or likeness.” As Jennifer Rothman points out, this test is likely unconstitutionally vague.

The Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2023, or the “NO FAKES Act of 2023,” in the Senate similarly creates liability for users of digital replicas. NO FAKES explicitly exempts digital replicas used in documentaries or docudramas, or for purposes of comment, criticism, scholarship, satire, or parody. This prescriptive approach offers certainty about the uses listed, but without the flexibility that a fair use analysis provides.

The flexibility and specificity of fair use support the progress of knowledge

The exceptions in the Copyright Act provide users with both certainty and flexibility. On the one hand, the exceptions in sections 108 through 121A set forth a variety of specific circumstances under which a work may be used without the copyright owner’s authorization. At the same time, fair use in section 107 allows uses in accordance with a dynamic evaluation of factors, including whether a new use adds something to the purpose or character of a work. Decades of fair use litigation have clarified additional uses; for instance, while parody is not named in the Copyright Act, the Supreme Court established in Campbell v. Acuff-Rose Music, Inc. that parody can qualify as a fair use. The flexible, open-ended nature of fair use means it can adapt to new technologies; in Authors Guild v. HathiTrust, the US Court of Appeals for the Second Circuit found that making digital copies of works to create a full-text search database and to provide access for the print disabled is a fair use.

To avoid crafting a federal right of publicity in a way that stifles free expression and violates the First Amendment, any legislation should include clear exceptions as well as an open-ended framework to evaluate uses. Otherwise, uncertainty about when a person may lawfully use a digital depiction could chill even nonprofit educational uses.

Congress should engage stakeholders like librarians, researchers, scholars, and authors to understand the full implications of any proposed legislation

A federal publicity right might be a good idea to protect people from non-consensual uses of their likeness. But as discussed above, without the full and robust protections of fair use and the First Amendment, an intellectual property right is likely to stifle speech and censor expression. Congress should invite a broad range of stakeholders to hearings to determine how to develop targeted legislation to address deepfakes, rather than taking broad aim at the flow of information online.

Katherine Klosek is Director of Information Policy and Federal Relations at the Association of Research Libraries (ARL). Klosek leads public policy strategy and outreach for the Association’s information policy priorities, including balanced copyright, online speech, open internet, user data privacy, and accessibility policy. Serving as the staff lead for ARL’s Advocacy and Public Policy Committee, Katherine mobilizes ARL’s membership to influence, respond, and adapt to major legal and policy developments in legislative, administrative, and judicial forums. Prior to joining ARL in December 2020, Katherine served as director of the Center for Applied Public Research at Johns Hopkins University (JHU), where she was instrumental in the center’s launch.