There has been a push at the state and federal level to regulate AI-generated deepfakes that use the voices and likenesses of real people without their approval.  This legislative momentum stems from a series of high-profile incidents involving deepfakes that garnered public attention and concern.  Last year, an AI-generated song entitled “Heart on My Sleeve” simulated the voices of recording artists Drake and The Weeknd.  The song briefly went viral before being pulled from streaming services following objections from the artists’ music label.  Another incident involved an advertisement for dental services that used an AI-generated Tom Hanks to make the sales pitch.  As AI becomes more sophisticated and accessible to the general public, concerns over the misappropriation of people’s personas have grown.  In recent months, several states have introduced legislation targeting the use of deepfakes to spread election-related misinformation.  At the federal level, both the House and Senate are considering a federal right of publicity that would give individuals a private right of action.  At the state level, Tennessee has become the first state to update its right of publicity laws with protections targeted at the music industry, signing the Ensuring Likeness, Voice, and Image Security Act (the “ELVIS Act”) into law on March 21, 2024, to take effect July 1, 2024.


The ELVIS Act (HB2091) replaces Tennessee’s “Personal Rights Protection Act of 1984.”  It provides individuals with a property right in the use of their name, photograph, and likeness, and creates new protections against unauthorized use of an individual’s voice.  It further expands protections to cover not only commercial but also non-commercial uses.  These changes have generated support from Nashville’s music industry.

The ELVIS Act covers a voice that is readily identifiable and attributable to a particular individual, regardless of whether the sound contains the actual voice or a simulation of the voice of the individual, thereby providing protection against the use of unauthorized AI-generated voices.  The rights extend post-mortem for ten years, and may be terminated after an additional two years of non-use.

Violations of the ELVIS Act carry potential civil and criminal liability.  Any person who knowingly uses or infringes upon “an individual’s name, photograph, voice or likeness” for commercial purposes without consent is liable to a civil action.  More generally, a person is also liable to a civil action if the person knowingly and without authorization “makes available to the public an individual’s voice or likeness,” extending protections to non-commercial uses as well.  Distributing, transmitting, or otherwise making available an algorithm or other technology, with knowledge that the person did not have permission to do so, brings additional civil liability.  In addition, violations of the ELVIS Act are also a Class A misdemeanor in Tennessee.  The ELVIS Act also expands right of publicity enforcement rights to include when “a person has entered into a contract for an individual’s exclusive personal services as a recording artist or an exclusive license to distribute sound recordings that capture an individual’s audio performances.”

However, the ELVIS Act creates exemptions by expanding fair use defenses.  Existing fair use defenses covering the use of a “name, photograph, or likeness in connection with any news, public affairs, or sports broadcast or account” have been expanded to include voices.  Further, the ELVIS Act clarifies that it provides fair use protections to the extent protected by the First Amendment, which includes use of a name, photograph, voice, or likeness in connection with: (1) news, public affairs, or sports broadcasts; (2) comment, criticism, scholarship, satire, or parody; (3) representation of the individual’s self in an audiovisual work (with certain exemptions); (4) fleeting or incidental use; or (5) an advertisement or commercial announcement.


At the Federal level, bipartisan members of the House and Senate have proposed bills targeting the unauthorized use of deepfakes.  Both proposals treat the right in one’s image, voice, and likeness as a property right that is licensable and survives post-mortem.  Additionally, both bills contain express statements of non-preemption, meaning that state laws providing greater protection would still apply. 

House No AI FRAUD Act (HR 6943)

In the House, Representative Maria Elvira Salazar, joined by several bipartisan co-sponsors, introduced the No AI FRAUD Act (HR 6943) in January.  The bill cites several examples of misappropriation of individuals’ voices and likenesses, ranging from the manipulation of the voices of famous singers to nonconsensual intimate images of high school students.  The bill provides that individuals have an intellectual property right in their own likeness and voice that is freely transferable and descendible, either in whole or in part.  The right survives for at least 10 years post mortem.[1]  An agreement that authorizes a digital depiction of an individual’s likeness or a digital voice replica for use in an advertisement or expressive work is valid only if (1) the individual was represented by counsel and was at least 18 years of age when the agreement was made (or, if the individual was a minor, the agreement was approved by a court) or (2) the terms of the agreement are governed by a collective bargaining agreement.  This parallels the Senate’s proposal, as discussed further below.

The bill targets unauthorized simulation of voices and likenesses, imposing liability for (1) distributing, transmitting, or making available a “personalized cloning service[2]” that is unauthorized or (2) publishing, performing, distributing, transmitting, or making available an unauthorized digital voice replica or digital depiction, with knowledge that it was not authorized.  Anyone who materially contributes to, directs, or otherwise facilitates such activity with knowledge that the rightsholder did not consent may also be liable.  Penalties for violation of the Act are the greater of statutory damages of $50,000 (for making available a personalized cloning service) or $5,000 (for making available a digital voice replica or digital depiction) per violation, or actual damages suffered by the injured party, plus profits from the unauthorized use that are not accounted for in the actual damages.  As in the Senate proposal, punitive damages and reasonable attorneys’ fees may also be awarded.  Also as in the Senate proposal, civil suits may be brought by the affected individual or by assignees or exclusive licensees of the individual’s rights, as well as by record labels on behalf of musicians with whom they have exclusive agreements.  There is a four-year statute of limitations.

The use of a disclaimer that the digital depiction, digital voice replica, or personalized cloning service was unauthorized is not a defense to liability under the bill.  However, like its Senate counterpart, the bill provides for a First Amendment defense, which requires balancing the intellectual property interest against the public interest in access to the use.  Factors to be considered appear to draw from existing copyright and trademark law, and include (1) whether the use is commercial; (2) whether the use is necessary for and relevant to the primary expressive purpose of the work in which the use appears; and (3) whether the use competes with or otherwise adversely affects the value of the work of the owner or licensee of the voice or likeness rights at issue.

Unlike the Senate counterpart, the House bill includes a balancing of the equities to assess harm caused by violations.  If the harm caused by unauthorized use of one’s voice or likeness is negligible, then there is no liability.  Digital depictions or digital voice replicas involving intimate imagery constitute per se harm.  Otherwise, harm is weighed against the following considerations: (1) whether the use is necessary for and relevant to the primary expressive purpose of the work in which the use appears; (2) whether the use is transformative; and (3) whether the use constitutes constitutionally protected commentary on a matter of public concern.

Senate NO FAKES Act

In the Senate, Senators Coons, Blackburn, Klobuchar, and Tillis released a draft bill, called the NO FAKES Act, for discussion with stakeholders in October 2023.  The bill has not yet been formally introduced.  As currently proposed, the bill applies to “digital replicas,” which are newly created, computer-generated, electronic representations of the image, voice, or visual likeness of an individual that (1) are nearly indistinguishable from the actual image, voice, or visual likeness of the individual and (2) are fixed in a sound recording or audiovisual work in which the individual did not actually perform or appear.  Under the proposal, an individual (or the executor, heir, assign, or devisee of a deceased individual) has “the right to authorize the use of the image, voice, or visual likeness…in a digital replica.”  The right is a property right which is licensable in whole or in part, but for the license to be valid, (1) the licensor must be represented by counsel and the agreement must be in writing or (2) the licensing of the right must be governed by a collective bargaining agreement.  The right may also be inherited by a descendant.  Like copyright, the right survives for 70 years after an individual’s death.

Producers of unauthorized digital replicas, as well as those who knowingly publish, distribute, or transmit an unauthorized digital replica may be liable for damages of $5,000 per violation or damages suffered by the injured party, whichever is greater.  Punitive damages are available for willful violations and courts may also award attorneys’ fees under the proposal. Civil suits may be brought by the individual whose likeness was misappropriated, or by anyone who owns or controls the individual’s image, voice, or likeness.  Record labels could also bring suit for digital replicas involving sound recording artists with whom the label has an exclusive contract.  Suits must be brought within three years of discovery of the violation (or when the violation should have been discovered with due diligence).

Like its House counterpart, the NO FAKES Act draft contains exclusions from liability that aim to protect First Amendment rights.  As currently drafted, the bill would exempt use “as part of a news, public affairs, or sports broadcast or report,” as well as use for “comment, criticism, scholarship, satire, or parody.”  The draft bill also exempts use of digital replicas as “part of a documentary, docudrama, or historical or biographical work” when an individual is represented as themself.  De minimis and incidental use would also be exempted.  Notably, the use of a disclaimer that the digital replica was unauthorized or an allegation that the defendant did not participate in the creation, development, distribution, or dissemination of the applicable digital replica are not defenses.

On April 30, 2024, the Senate Judiciary Committee Subcommittee on Intellectual Property held a hearing to examine the proposed bill, featuring six witnesses including top entertainment industry executives, a law school professor, and an artist.  Key areas of discussion included how to maintain First Amendment protections, durational questions such as whether a 70-year postmortem right is appropriate, and whether the bill should include a notice and takedown system.

[1] The bill indicates that the right is exclusive to an individual’s executors, heirs, transferees, or devisees, for a period of 10 years following the individual’s death.  The right shall be terminated by “proof of the non-use of the likeness or voice of any individual for commercial purposes by an executor, transferee, heir, or devisee to such use for a period of two years subsequent to the initial ten-year period following the individual’s death.”  It is not clear whether a subsequent 10-year period begins if the right is not terminated, or whether the two-year period may begin before the expiration of the initial 10-year period.  The right also terminates upon “the death of all executors, transferees, heirs, or devisees.”

[2] “Personalized cloning service” is defined as an algorithm, software, tool, or other technology, service, or device the primary purpose or function of which is to produce one or more digital voice replicas or digital depictions of particular, identified individuals.