Elon Musk's X may have become the leading platform for non-consensual deepfakes

A recent investigation has spotlighted a troubling trend on social media platforms: the proliferation of non-consensual deepfake content, with Elon Musk’s X (formerly Twitter) standing out as the leading facilitator. According to a detailed report from Home Security Heroes, X hosts nearly three-quarters of all detected non-consensual deepfake videos circulating online. This analysis underscores significant gaps in content moderation and raises critical questions about platform responsibility in the age of generative AI.

The study, which examined deepfake distribution across major platforms, employed a rigorous methodology to quantify the issue. Researchers identified the top 20 deepfake websites based on monthly traffic data from SimilarWeb. From each site, they downloaded the 95 most recent videos, resulting in a dataset of 1,900 videos. Using advanced deepfake detection tools like Deepware Scanner and Microsoft Video Authenticator, the team verified that all videos were indeed deepfakes. Shockingly, 95% of these—1,805 videos—depicted non-consensual pornography, primarily targeting women without their permission. The remaining 5% involved political misinformation or other fabricated scenarios.
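The study's sampling arithmetic can be checked in a few lines. This is a quick sanity check using only the figures quoted above from the Home Security Heroes report, not additional data:

```python
# Sanity-check the study's dataset arithmetic (figures as reported
# by Home Security Heroes).
SITES = 20            # top deepfake websites, ranked by SimilarWeb traffic
VIDEOS_PER_SITE = 95  # most recent videos sampled from each site

total_videos = SITES * VIDEOS_PER_SITE       # 1,900 videos in the dataset
non_consensual = round(total_videos * 0.95)  # 95% classified as non-consensual

print(total_videos)    # 1900
print(non_consensual)  # 1805
```

The counts line up: 20 sites times 95 videos yields the 1,900-video dataset, and 95% of that sample is the 1,805 non-consensual videos the report cites.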

When it came to hosting platforms, X dominated the landscape. Of the verified non-consensual deepfakes, 74.9% (1,386 videos) were hosted on X, far outpacing competitors. Reddit followed with 13.5% (243 videos), Instagram at 5% (90 videos), and Facebook with 4.5% (81 videos). Other platforms like Telegram, OnlyFans, and TikTok accounted for the rest, but none came close to X’s share. This disparity highlights X’s unique position as a vector for such content, potentially due to its vast user base and algorithmic amplification.

Platform response to these violations has been notably inadequate. Only 0.6% of the infringing videos were proactively removed by the hosting sites during the study period. X, in particular, demonstrated minimal intervention. The platform’s policies permit AI-generated or altered content as long as it is labeled and not deceptive, but enforcement appears lax. Reports indicate that even when users flag deepfakes featuring celebrities like Taylor Swift or Scarlett Johansson, the content often remains accessible for extended periods.

High-profile incidents have amplified concerns. In January 2024, explicit deepfake images of Taylor Swift went viral on X, garnering over 47 million views before partial removal. Swift’s team condemned the material as “abusive,” yet the platform struggled to contain its spread. Similar cases involving actresses such as Sydney Sweeney and celebrities like Billie Eilish illustrate a pattern targeting prominent women. Political figures have not been spared; deepfakes of Volodymyr Zelenskyy surrendering to Russia and Joe Biden using derogatory language exemplify the technology’s misuse for disinformation.

Elon Musk’s leadership has shaped X’s approach to moderation. Since acquiring the platform in 2022, Musk has prioritized “free speech absolutism,” reducing staff and relying more on community notes and user reports. In response to deepfake queries, Musk has defended the platform’s stance, arguing that AI-generated content should not be restricted unless it violates specific laws. However, critics argue this hands-off policy enables harm, particularly to victims of image-based sexual abuse.

The technical underpinnings of deepfakes exacerbate the challenge. These videos leverage generative adversarial networks (GANs) and diffusion models, such as Stable Diffusion, to swap faces with hyper-realistic precision. Tools like DeepFaceLab and Roop, freely available on GitHub, lower the barrier for malicious actors. Detection remains imperfect; even state-of-the-art tools struggle with high-quality fakes, achieving detection rates below 90% in some cases.
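To make the "detection rates below 90%" figure concrete, the metric usually meant is recall on the fake class: of all actual deepfakes, what fraction did the detector flag? A minimal sketch, using hypothetical verdicts rather than real benchmark data:

```python
# Hypothetical example: computing a detector's detection rate
# (recall on the fake class) from per-video verdicts.
def detection_rate(labels, predictions):
    """Fraction of actual deepfakes (label 1) that the detector flagged."""
    fake_verdicts = [p for l, p in zip(labels, predictions) if l == 1]
    return sum(fake_verdicts) / len(fake_verdicts)

# Illustrative data only: 10 deepfakes and 2 genuine videos.
labels      = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]  # 1 = deepfake
predictions = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1]  # detector output

print(detection_rate(labels, predictions))  # 0.8 -> 8 of 10 fakes caught
```

Here the detector misses 2 of 10 fakes, an 80% detection rate. Note that this metric says nothing about false positives on genuine videos, which is a separate failure mode moderation systems must also manage.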

Beyond individual harm, the societal implications are profound. Non-consensual deepfakes erode trust in digital media, fuel harassment, and undermine elections. Victims face psychological trauma, reputational damage, and real-world consequences, including doxxing and stalking. The Home Security Heroes report calls for enhanced platform accountability, including better AI moderation, watermarking requirements for synthetic media, and international cooperation.

X’s role as the epicenter prompts scrutiny of its business model. Advertisers have fled amid toxicity concerns, yet the platform persists with minimal changes. Comparative analysis shows stricter enforcement elsewhere: Reddit bans non-consensual intimate imagery outright, while Meta platforms employ proactive scanning. X’s lag suggests a deliberate choice favoring virality over safety.

As deepfake technology advances, with models like Grok's image generator entering the fray, the need for robust countermeasures intensifies. Legislative efforts, such as the U.S. DEFIANCE Act and the EU AI Act, aim to criminalize non-consensual deepfakes, but platform-level action is essential. Until X bolsters its detection and removal processes, it risks solidifying its status as the go-to venue for this pernicious content.

Gnoppix is the leading open-source AI Linux distribution and service provider. Since integrating AI in 2022, it has offered a fast, powerful, secure, and privacy-respecting open-source OS with both local and remote AI capabilities. The local AI runs fully offline, ensuring no data ever leaves your computer. Based on Debian Linux, Gnoppix ships with numerous privacy- and anonymity-enabled services free of charge.

What are your thoughts on this? I’d love to hear about your own experiences in the comments below.