
Deepfakes & Policy Implications of the First Amendment & Right of Publicity

In 2022, a fake, heavily manipulated video of Ukrainian President Volodymyr Zelenskiy yielding to Russian demands circulated on various social media platforms.[1] Even though there were obvious tells that screamed “fake”—Zelenskiy’s accent was off, the skin tone on his neck did not match his face, and there was pixelation around his head[2]—the video gained serious traction on social media and within the Ukrainian state.[3] Experts warned that this was only the “tip of the iceberg” in wars of disinformation and that deepfakes would only become more realistic.[4]

In 2024, Taylor Swift and various other female celebrities and streamers fell victim to hyper-realistic deepfake pornography that circulated on X and Telegram, with some of the videos amassing over 47 million views before finally being removed.[5] In Hong Kong, a finance worker at a multinational firm was tricked into paying over $25 million to fraudsters using deepfake technology—he attended a video conference with what he believed were the firm’s CFO and other members of staff, only to learn that everyone he thought he saw on the call was a deepfake recreation.[6]

Just as the experts warned, the age of deepfakes is here, and unlike in 2022, it is becoming immensely difficult to determine whether these hyper-realistic videos are artificially created. Even if they are, is there any recourse for the people whose images have been used by deepfake creators for nefarious purposes?

What are deepfakes?

Deepfakes are a type of media content generated by machine-learning algorithms combined with facial-mapping software that are fed thousands of images and trained to identify and reconstruct different types of patterns—usually faces.[7] This is known as deep learning. These algorithms are used to replace and synthesize faces, manipulate facial expressions, and synthesize speech;[8] deepfakes are typically used to depict someone appearing to say or do something that they never said or did.[9]
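To make the deep-learning mechanics described above more concrete, the following is a minimal, illustrative sketch (in Python, using PyTorch) of the shared-encoder, per-identity-decoder architecture commonly used for face-swap deepfakes. The layer sizes, image resolution, and names here are assumptions chosen for illustration; they are not drawn from the sources cited in this post and do not reflect any particular deepfake tool.

```python
# Illustrative sketch of a face-swap deepfake architecture:
# one shared encoder learns a compressed representation of faces,
# and a separate decoder per identity learns to reconstruct that
# identity's face. A "swap" encodes person A's frame and decodes
# it with person B's decoder. All sizes are assumptions.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

# One shared encoder, one decoder per identity.
encoder = Encoder()
decoder_a = Decoder()  # would be trained on thousands of images of person A
decoder_b = Decoder()  # would be trained on thousands of images of person B

# After training, swapping a frame of person A into person B's likeness:
frame_of_a = torch.rand(1, 3, 64, 64)        # stand-in for a video frame
swapped = decoder_b(encoder(frame_of_a))      # B's face in A's pose and expression
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```

The key point for the legal discussion that follows is that the swap is wholly synthetic: every output frame is generated by the model rather than captured from reality, which is why deepfakes can depict people saying or doing things they never said or did.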

The application of deep learning to create deepfakes makes it difficult for people to discern whether images or videos are artificial.[10] Though deepfakes can have benign applications, they are typically used for more harmful purposes: much of the deepfake content online is pornographic, which disproportionately victimizes women,[11] and the technology has also been used for exploitation and fraud.[12] Deepfakes can also harm society at large. Citron and Chesney detail eight significant risks to society that can result from the use of deepfakes: 1) distorting democratic discourse, 2) manipulating elections, 3) eroding trust in institutions, 4) exacerbating social divisions, 5) undermining public safety, 6) undermining diplomacy, 7) jeopardizing national security, and 8) undermining journalism.[13]

Policy Implications of Deepfakes

The First Amendment

The First Amendment protects people from government infringement on the freedom of speech, and artificial intelligence, like other technologies such as the printing press, is used to create and distribute speech.[14] Under traditional First Amendment doctrine, people who use AI to create deepfakes are therefore protected.[15]

In 2018, a bill was introduced in the Senate seeking to criminalize the “malicious creation and distribution of deepfakes”.[16] Though the bill expired at the end of the year, it is expected to be reintroduced in the coming years,[17] and other bills are being considered as well.[18] Similarly, several states, including Texas, California, and Florida, are pursuing legislation to regulate deepfakes.[19] Aside from potential federal and state regulation, there are categories of speech that fall outside the protection of the First Amendment; the most relevant for the purposes of deepfakes is defamation.[20]

To establish liability for the tort of defamation, there must be: 1) a false statement purporting to be fact, 2) the publication or communication of that statement to a third person, 3) fault amounting to at least negligence, and 4) damages, or some harm caused to the reputation of the party or entity who is the subject of the statement.[21] However, in a 1964 Supreme Court case, the traditional third element was replaced with a higher standard for public officials (and, later, public figures), requiring proof that the speech’s publisher acted with actual malice—that is, that the statement was made with knowledge that it was false or with reckless disregard of whether it was false or not.[22]

Though this heightened burden makes it more difficult to prove that a publisher knew the information it published was false or acted recklessly, the analysis is more straightforward when it comes to deepfakes. Deepfakes are inherently intended to be both fake and realistic—thus, there is a stronger presumption of a culpable mental state. The creators of deepfakes know that the content they are creating is false, and they intentionally alter every frame of the video to distort reality and make others believe that it is true.[23] Therefore, it is easier to satisfy the four prongs and obtain relief through defamation.

 

The Right of Publicity

Another route for deepfake regulation could center on the right of publicity. Publicity rights are an evolving area of state law concerning a person’s exclusive right to license their identity for commercial purposes, preventing others from making unauthorized commercial use of that identity.[24] Specifically, the right protects against the use of one’s persona, name, nickname, or any other distinguishing aspect of an individual’s identity for commercial benefit without the individual’s consent.[25]

In March 2024, Tennessee enacted the Ensuring Likeness, Voice and Image Security (ELVIS) Act, becoming one of the first states to enact legislation that directly regulates deepfake creations that misappropriate a person’s identity.[26] California, Illinois, Kentucky and Louisiana have all proposed legislation to regulate deepfakes through the right of publicity,[27] and notably, the Congressional Research Service has suggested that “uses of artificial intelligence (AI) to create realistic images, videos, replicas, or voice simulations of real people” may require a Federal Right of Publicity Law.[28]

The right of publicity is flexible in ways other areas of law may not be and thus might provide a better avenue for addressing these harms. A federal right of publicity would make the varying state laws uniform and would ensure the protection of individuals’ personal characteristics from the imminent and rising threat of deepfake technology.

But still, there is one last question to ask: are injunctions and damages enough to repair the harm done by deepfakes that will forever circulate on the internet?


Caitlin Stillo

Caitlin Stillo is a second-year J.D. candidate at Fordham University School of Law and a staff member of the Intellectual Property, Media & Entertainment Law Journal. She holds a B.A. in Political Science & Classical History from Lafayette College.