Schools Struggle to Address Deepfakes
School administrators are struggling to address student use of artificial intelligence (AI).[1] Beyond using AI to generate classwork, an issue that has forced schools to find new ways to spot and address cheating, administrators are now seeing an increase in students using the technology to create deepfake nude images of their classmates.[2] Deepfakes are images or videos created using AI technology.[3] AI platforms allow users either to alter original images or videos or to create entirely new fake media.[4] Across several states, including Washington, New Jersey, and Florida, male students have used generative AI platforms to create fake nude images of their female classmates.[5] There are reports of boys sharing the images “in the school lunchroom, on the school bus or through group chats” on social media platforms.[6]
The creation of this sexually explicit and illegal content by students is a growing problem nationwide. In March, the FBI released a warning about the use of AI to create child sexual abuse material, noting that there have been incidents of teenagers using the technology to create sexually explicit images of minors “by altering ordinary clothed pictures of their classmates to make them appear nude.”[7]
The response from school administrators has been mixed.[8] When students at a California middle school generated explicit images of their female classmates, administrators alerted the police and expelled the boys who created and shared the images.[9] The school superintendent said he wanted to “set a national precedent” that the creation of deepfakes is a form of “extreme bullying” that would not be tolerated.[10] In New Jersey, a high school suspended a male student for several days after he created fake explicit images of a classmate, prompting parents to demand more information on the incident and protection for the victims.[11] Some administrators have failed not only to create school policies to address deepfakes, but also to recognize their legal obligations when they receive reports that explicit images have been created.[12] Police reports show a high school in Seattle failed to inform local police when a student created explicit fake images of fourteen- and fifteen-year-old students.[13]
While schools struggle to meet this new challenge, state law is playing a role in some cases. In Florida, two boys, aged thirteen and fourteen, were arrested and charged with third-degree felonies for creating fake nude images of their classmates using AI.[14] The boys were charged under a Florida law criminalizing the spread of deepfake nude images without the consent of the person depicted.[15]
At least ten states have passed laws addressing explicit deepfakes. Texas, Minnesota, New York, Georgia, Hawaii, and Virginia have criminalized nonconsensual deepfake porn, while California and Illinois allow victims to sue the creators of the deepfake material.[16] However, there is currently no federal law addressing the issue. A bipartisan group of lawmakers is trying to change that. Senators introduced the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024, or the DEFIANCE Act, in January.[17] The legislation would create a federal civil right of action for victims of deepfakes.[18] This right would allow victims to sue those who “produce, distribute or receive” deepfake material if they “knew or recklessly disregarded” that the victim never consented to the creation and distribution of the content.[19] Polling shows eighty-five percent of likely voters support the legislation.[20]
Rep. Alexandria Ocasio-Cortez introduced a partner bill to the Senate proposal in the House.[21] In April, she said deepfakes parallel “the same exact intention of physical rape and sexual assault, [which] is about power, domination, and humiliation,” adding that the images and videos are “a way of digitizing violent humiliation against other people.”[22] In 2023 alone, more than 143,000 deepfake videos were posted online.[23] According to a report by DeepTrace Labs, 96 percent of deepfake videos are nonconsensual porn, all featuring women.[24]
While lawmakers work to pass the DEFIANCE Act, President Biden has also addressed the issue. In his executive order on AI issued last fall, the President called for a report on how to prevent generative AI from producing child sexual abuse material or any “non-consensual intimate imagery of real individuals.”[25] That report is due to the Director of OMB and the Assistant to the President for National Security Affairs this summer.[26]
Footnotes