Insight

The Pros and Cons of the NO FAKES Act

Executive Summary

  • In response to growing concerns about the unauthorized use of an individual’s voice and visual likeness in digital replicas, a bipartisan group of senators introduced the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act of 2024, which penalizes the creation of unsanctioned deepfakes and requires social media sites to take down flagged content.
  • The bill would grant individuals federal intellectual property rights to their voice and visual likeness, allow individuals to take action against creators of and platforms hosting unauthorized digital copies, and protect media platforms from liability, provided the reported content is taken down as soon as technically and practically feasible.
  • By establishing a new intellectual property right, the NO FAKES Act would create a carve-out from Section 230 protections – which insulate platforms from legal liability for content posted by their users – to hold platforms liable for hosting unauthorized replicas; yet the creation of such a standard has raised concerns that platforms may acquiesce to any takedown request, regardless of merit.

Introduction

In September, a bipartisan group of senators introduced the final text of the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act of 2024. The bill aims to protect Americans’ voice and visual likeness from use in unauthorized digital replicas. A digital replica is widely understood to be a “newly-created, computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual,” more colloquially known as a deepfake.

The bill would grant individuals federal intellectual property rights to their voice and visual likeness. Not only would this right allow for penalizing individuals who create and share unauthorized replicas, but it could also expose social media platforms to liability for hosting such content. Section 230 of the Communications Decency Act of 1996, which largely protects platforms from such liability, does not extend to violations of intellectual property rights. The bill’s sponsors argue that the legislation is needed to provide individuals legal protections against bad actors who misuse publicly available material to create derogatory and damaging replicas, while also protecting platforms from liability for third-party content, provided the unauthorized replicas are removed as soon as technically and practically feasible.

While the bill does contain some protections to help platforms avoid liability, its vague wording concerning penalties for deceptive reporting, as well as which content qualifies for exemptions, could cause platforms to grant any removal request, regardless of its merit, out of concern for possible legal repercussions. For example, critics question what falls under the bill’s bona fide news exemption, which allows materially relevant digital replicas to be used in news media, arguing that the ambiguous wording does not specify who decides whether a replica is materially relevant and could significantly chill protected speech. Lawmakers should clarify the bill’s language to ensure that rights holders are protected and digital platforms can moderate as they see fit.

This insight explains the NO FAKES Act, discusses its effect on the protections of Section 230, and walks through concerns with the bill’s provisions that must be addressed to ensure that fair-use practices such as parody and satire are not suppressed.

NO FAKES Act

The NO FAKES Act aims to preserve individuals’ rights in the digital age by recognizing a federal intellectual property right to their own voice and visual likeness, a protection that extends to their relatives after their death. The legislation would permit individuals to take action against bad actors who intentionally generate, distribute, or profit from unlawful digital copies, thereby protecting personal identity in an increasingly digitally dominated world. The bill would also carve out exceptions to liability for creation and distribution, allowing relevant digital replicas to appear in documentaries, bona fide news, or broadcasts.

To incur liability, a person must have known, or willfully avoided knowing, that the material was a digital replica that the rights holder did not authorize. Exceptions to liability would exist for bona fide news, public affairs, and sports broadcasts, so long as the digital replica is relevant to the subject of the broadcast, as well as for documentaries and biographical works, provided the replica is not presented as authentic.

Notably, because the NO FAKES Act would establish an intellectual property right, Section 230 would not prevent individuals from suing platforms that provide access to third-party content, whether by referring, linking, or hosting. Instead, the bill would create a safe harbor for platforms that remove or disable access to the replica as soon as technically and practically feasible. To qualify, a platform would have to designate an agent to receive takedown notifications and make the agent’s name, address, telephone number, and email address available. As such, the legislation would shield media platforms from liability when they promptly remove infringing content after being notified of it.
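As a purely hypothetical illustration of these safe-harbor mechanics, the short Python sketch below models the designated agent and notice-handling flow. Every name in it is an assumption for exposition; the bill specifies the agent’s contact details and prompt removal, not any particular data model or implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical model of the notice-and-takedown flow described above; the
# bill mandates a designated agent and prompt removal, nothing more specific.

@dataclass
class DesignatedAgent:
    # The safe harbor requires these contact details to be made available.
    name: str
    address: str
    telephone: str
    email: str

@dataclass
class TakedownNotice:
    content_url: str
    claimed_rights_holder: str
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def handle_notice(notice: TakedownNotice, remove_content) -> None:
    """Remove or disable access as soon as technically and practically
    feasible after the designated agent receives a notification."""
    remove_content(notice.content_url)  # prompt removal preserves the safe harbor

agent = DesignatedAgent("Jane Doe", "123 Main St", "555-0100", "agent@example.com")
handle_notice(TakedownNotice("https://example.com/clip/42", "A. Rights-Holder"),
              remove_content=lambda url: print(f"disabled access to {url}"))
```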

If platforms do not take down reported content within a feasible timeline, they open themselves up to civil action and could be liable for either actual damages plus the profits made from the unauthorized use or a statutory amount – $5,000 per work for an individual, $5,000 per violation for an online service, or $25,000 per work for an entity that is not an online service – whichever is higher. An online service with an objectively reasonable belief that the unauthorized material does not qualify as a digital replica is not liable for statutory or actual damages exceeding $1 million.
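The following minimal Python sketch shows how these amounts might interact under a simplified reading of the provision: liability is the greater of actual damages plus profits and the applicable statutory amount, with the $1 million cap applied only when an online service held an objectively reasonable belief. All names are hypothetical, and the bill’s actual damages rules are more detailed than this.

```python
# Hypothetical sketch of the damages provisions described above; function
# and parameter names are illustrative assumptions, not bill text.

def statutory_amount(defendant_type: str, works: int, violations: int) -> int:
    """Statutory damages by defendant type, per the figures cited above."""
    if defendant_type == "individual":
        return 5_000 * works          # $5,000 per work for an individual
    if defendant_type == "online_service":
        return 5_000 * violations     # $5,000 per violation for an online service
    return 25_000 * works             # $25,000 per work for any other entity

def damages_exposure(defendant_type: str, works: int, violations: int,
                     actual_damages: float, profits: float,
                     reasonable_belief: bool = False) -> float:
    """Greater of (actual damages + profits) and the statutory amount,
    capped at $1 million for an online service that reasonably believed
    the material was not a digital replica."""
    exposure = max(actual_damages + profits,
                   statutory_amount(defendant_type, works, violations))
    if defendant_type == "online_service" and reasonable_belief:
        exposure = min(exposure, 1_000_000)
    return exposure

# Example: an online service hosting 3 works across 200 violations.
print(damages_exposure("online_service", works=3, violations=200,
                       actual_damages=50_000, profits=10_000))  # prints 1000000
```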

Potential Concerns

The NO FAKES Act raises concerns primarily related to its implementation and enforcement, as well as its potential impact on free speech.

First, while the act includes explicit First Amendment exclusions, there are concerns about how these will be applied in practice. The vague language around terms such as “bona fide news” or “some degree of fictionalization” could create uncertainty about when these protections apply, potentially leading to legal disputes and confusion.

Second, by allowing individuals to hold platforms liable for hosting digital replicas, the bill could incentivize false claims and the over-removal of content. Suppose, for example, that an individual creates a parody of a public figure that would qualify as fair use under existing copyright law and would not, in practice, violate the NO FAKES Act; a claim from the figure depicted could still lead a platform to err on the side of caution and take the content down. Without clear guidelines, platforms may become overly cautious, stifling user-generated content and free speech to avoid litigation.

Conclusion

The NO FAKES Act would implement strong protections for individuals against deepfakes that depict their voice or visual likeness. Because potential liability for platforms could lead to the over-removal of content, however, the bill raises significant concerns regarding free speech and content moderation. These liability concerns, paired with the bill’s vague wording, could significantly limit speech online. Congress should carefully consider the bill’s potential erosion of speech protections and the general legal confusion it could create as it works its way through the legislative process.