Beyond the Screens: Deepfakes and Sexual Violence

Written By Giuliana Luz Grabina 

Edited By Kaileigh-Anne Grnak

Content warning: This article contains discussions of digital sexual abuse.

This past week, users’ feeds on X (formerly known as Twitter) were flooded with disturbing, non-consensual sexual deepfakes of Taylor Swift. In response, X asserted that it was “actively removing” the images and taking “appropriate actions” against the account involved in spreading them, and the images were removed within roughly 17 hours. Yet numerous deepfake victims are not as fortunate, lacking the means or influence to get deepfakes made in their likeness taken down. 

Deepfakes, a form of synthetic media infamous for its ability to merge, replace, and superimpose images and videos, have become a focal point of controversy owing to their highly realistic yet entirely fabricated nature. The term covers images, audio, and video created or altered with deep learning, a family of artificial intelligence (AI) techniques that teach computers to process data in a way loosely modeled on how the human brain processes information. 
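
For readers curious about the underlying mechanics, the sketch below illustrates the core idea behind the generative adversarial networks (GANs) that power much deepfake tooling: two neural networks trained in opposition, one producing fakes and one learning to detect them, until the fakes become convincing. This is a minimal toy sketch on random vectors, assuming PyTorch; the dimensions, layer sizes, and training settings are illustrative assumptions, not those of any real deepfake system.

```python
# Toy illustration of the adversarial training loop behind GANs.
# All shapes and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

LATENT, DATA = 16, 32  # assumed toy dimensions, not real image sizes

# Generator maps random noise to synthetic samples; discriminator scores realism.
generator = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, DATA))
discriminator = nn.Sequential(nn.Linear(DATA, 64), nn.ReLU(), nn.Linear(64, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):
    real = torch.randn(8, DATA)               # stand-in for real training data
    fake = generator(torch.randn(8, LATENT))  # synthetic samples

    # Discriminator step: learn to label real samples 1 and fakes 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(8, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(8, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: learn to produce fakes the discriminator labels 1.
    g_loss = loss_fn(discriminator(fake), torch.ones(8, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

In a real face-swap or image-synthesis pipeline, the same adversarial pressure is applied to images rather than random vectors, which is what makes the output photorealistic.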

The rise of deepfake technology is especially worrying for women and girls. As Danielle Citron, a law professor at Boston University, explains, “Deepfake technology is being weaponized against women.” Malicious actors increasingly exploit these tools to create non-consensual sexual deepfakes not only of celebrities but also of ordinary women and girls they know in real life. 

Navigating the Threat Deepfakes Pose to Women 

The creation of AI deepfakes, and their growing availability and distribution, has upended the familiar principles of internet safety and harassment: anyone can become a victim, irrespective of their fame or prominence. Non-consensual deepfake pornography is just a click away on popular search engines. It offers a new means of perpetrating sexual violence, all from the comfort of the perpetrator’s home and without any explicit images or footage of the intended target. This raises a troubling possibility: practically anyone who has taken a selfie or posted a picture of themselves online risks having a deepfake created in their image. 

Although deepfakes can be made in anyone’s image, the vast majority depict women. According to Sensity AI’s 2019 study, The State of Deepfakes: Landscape, Threats, and Impact, non-consensual sexual deepfakes make up 96% of deepfakes, and of those, 99% are of women. While celebrities, politicians, and other high-profile individuals often dominate the discourse on deepfakes, ordinary women and girls are increasingly targeted. 

Beyond Celebrities: Ordinary Targets 

In 2019, Vice exposed a disturbing app called DeepNude that used AI to “undress” women. The app, built on generative adversarial networks (GANs), the class of algorithms behind many deepfakes, let users upload a photo of a clothed woman and receive a realistic image of her seemingly naked in return. The more revealing the input picture, the better the results, and the app did not work on images of men. Within 24 hours, the Vice article had sparked so much backlash that the app was swiftly taken down.

However, a subsequent Sensity AI investigation found that very similar technology was being used by a publicly available bot on the messaging app Telegram. As of July 2020, the bot had already been used to target and “undress” at least 100,000 women, the majority of whom were likely unaware. The report also found that most users, around 63%, used the bot to “undress” women and girls they know in real life. Giorgio Patrini, CEO and chief scientist of Sensity, notes, “Usually it’s young girls. Unfortunately, sometimes it’s also quite obvious that some of these people are underage.” These findings underscore the pressing need for robust countermeasures and parental awareness campaigns to protect children and teens from a threat that disproportionately targets young girls. 

Conclusion 

These disturbing findings raise concerns about the growing prevalence of non-consensual deepfake pornography, prompting questions about the measures technology companies, legislators, policymakers, and government officials are taking to curtail its spread and ensure accountability for perpetrators. While there have been localized attempts to address the issue, a notable gap remains at the federal level in both the US and Canada, where no specific laws currently target the creation or dissemination of deepfake images. This legal vacuum allows individuals to exploit AI technology to generate and circulate non-consensual sexual deepfakes without facing legal consequences. Despite their fabricated nature, one thing is clear: the harm deepfakes cause is undeniably real, and urgent action is needed on both the legislative and technological fronts to safeguard women and girls and hold perpetrators accountable. 

Bibliography

CBC. “Taylor Swift Deepfakes Taken Offline. It’s Not so Easy for Regular People.” CBC News, 2024. 

Dunn, Suzie. “Women, Not Politicians, Are Targeted Most Often by Deepfake Videos.” Centre for International Governance Innovation, March 3, 2021. 

Hao, Karen. “A Deepfake Bot Is Being Used to ‘Undress’ Underage Girls.” MIT Technology Review, October 20, 2020.

Koster, Alexandra. “The Deepfakes of Taylor Swift Prove yet Again How Laws Fail Women.” Refinery29, January 26, 2024. 

Rahman-Jones, Imran. “Taylor Swift Deepfakes Spark Calls in Congress for New Legislation.” BBC News, January 26, 2024. 

Sample, Ian. “What Are Deepfakes – and How Can You Spot Them?” The Guardian, January 13, 2020. 

Tenbarge, Kat. “Google, Bing Put Deepfake Porn at the Top of Some Search Results.” NBC News, January 11, 2024. 

Women In International Security. “Deepfakes as a Security Issue: Why Gender Matters.” November 4, 2020.
