Irish Data Protection Commission Launches Probe into AI-Generated Deepfakes on X Platform
The Irish Data Protection Commission (DPC), Ireland’s supervisory authority for data protection and privacy, has initiated a formal investigation into X, the social media platform formerly known as Twitter and owned by Elon Musk. The probe centers on the processing of personal data used to create and disseminate AI-generated deepfake images, many of which depict individuals without their consent. Announced on October 17, 2024, this inquiry underscores growing regulatory scrutiny over the unchecked proliferation of synthetic media on major online platforms.
At the heart of the investigation is X’s handling of personal data, examined under Article 58(1)(e) of the General Data Protection Regulation (GDPR). This provision empowers supervisory authorities to obtain from controllers and processors access to all personal data and information necessary for the performance of their tasks, enabling inquiries into compliance with EU data protection law. Specifically, the DPC is examining whether X’s systems and policies adequately safeguard personal data when it is scraped, processed, or repurposed by generative AI tools to produce deepfake content. Such images often superimpose individuals’ faces onto explicit or misleading scenarios, raising serious concerns about privacy, consent, and the right to one’s likeness.
Deepfakes, produced by machine-learning models, have proliferated on X since policy changes under Musk’s leadership relaxed content moderation and allowed a broader range of AI-generated media to circulate freely. Reports indicate that many deepfakes target public figures, influencers, and ordinary users alike, frequently involving non-consensual pornography or fabricated compromising situations. The DPC’s action follows complaints and observations of widespread instances in which users’ images, drawn from public posts or external sources, are manipulated without permission, potentially breaching GDPR principles such as lawfulness, fairness, and transparency in data processing.
X has confirmed receipt of the DPC’s notice and expressed willingness to cooperate fully. In a statement, the platform noted: “X has been contacted by the Irish Data Protection Commission (DPC), which has opened an investigation into X’s processing of personal data for the generation and display of AI-generated deepfake images, which appear to depict individuals without their consent.” This engagement aligns with standard procedures under GDPR, where platforms must respond to regulatory inquiries, provide documentation on data handling practices, and demonstrate compliance measures.
The investigation arrives amid a broader European crackdown on AI misuse. Because many tech giants, including Meta, TikTok, and now X, maintain their European headquarters in Dublin, the DPC acts as their lead supervisory authority under the GDPR. Recent precedents include fines levied against platforms for failing to curb harmful content, such as the DPC’s ongoing cases involving child safety and algorithmic bias. In the realm of AI, the EU’s AI Act, which entered into force in August 2024, imposes transparency obligations on deepfake content, such as clearly disclosing that media has been artificially generated or manipulated. While the DPC’s probe predates full AI Act enforcement (set for 2026), it leverages existing GDPR tools to address immediate risks.
Technical aspects of deepfake creation exacerbate the issue. Generative adversarial networks (GANs) and diffusion models, commonly used in tools like Stable Diffusion or Midjourney, train on vast datasets of images scraped from the web, including X posts. Without robust controls, such as opt-out mechanisms or content filters, personal data flows into these models unchecked. X’s Grok AI, developed by Musk’s xAI, has also faced criticism for generating controversial images, though the DPC inquiry focuses on third-party deepfakes amplified on the platform rather than proprietary tools.
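Opt-out mechanisms of the kind mentioned above are typically enforced at the crawler level: before fetching an image for a training dataset, a well-behaved collector checks the site’s robots.txt for rules addressed to its user agent. The sketch below uses Python’s standard `urllib.robotparser` to evaluate such a policy; the `HypotheticalAIBot` user agent and the rules themselves are illustrative assumptions, not X’s actual configuration.

```python
# Sketch: honoring a robots.txt opt-out before scraping media for AI
# training. The policy below is a hypothetical example, not X's.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: HypotheticalAIBot
Disallow: /media/

User-agent: *
Allow: /
"""

def may_scrape(agent: str, url: str) -> bool:
    """Return True if `agent` is permitted to fetch `url` under ROBOTS_TXT."""
    parser = RobotFileParser()
    parser.parse(ROBOTS_TXT.splitlines())
    return parser.can_fetch(agent, url)

# The AI-training crawler is opted out of media paths,
# while a generic crawler falls under the permissive wildcard rule.
print(may_scrape("HypotheticalAIBot", "https://example.com/media/photo.jpg"))
print(may_scrape("GenericBot", "https://example.com/media/photo.jpg"))
```

In practice such voluntary signals only bind cooperative crawlers, which is precisely why regulators are examining platform-side controls as well.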
For users, the implications are profound. GDPR Article 5 requires that personal data be processed lawfully, fairly, and transparently, and that collection be minimized; yet deepfake proliferation often involves the unauthorized extraction of biometric data, with faces serving as unique identifiers. Victims may suffer reputational harm, emotional distress, or real-world consequences from viral falsehoods. The DPC’s cross-border scope could lead to decisions enforceable across the EU, potentially resulting in corrective measures, fines of up to 4% of global annual turnover (or €20 million, whichever is higher), or mandated system overhauls.
As the investigation unfolds, stakeholders await X’s submissions and any preliminary findings. The DPC emphasized its commitment to protecting individuals’ fundamental rights in the digital age, stating that rapid advancements in AI necessitate proactive enforcement. Platform operators, in turn, may need to bolster detection algorithms, enforce labeling for AI content, and integrate GDPR-compliant data pipelines to mitigate future risks.
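As an illustration of the "enforce labeling for AI content" step, a platform-side gate might refuse to display synthetic media that lacks a disclosure label. The following is a minimal sketch under assumed data fields; the field names and policy are hypothetical, and a real pipeline would combine classifier scores, provenance metadata (for example C2PA credentials), and human review.

```python
# Minimal sketch of a labeling gate for uploaded media. The Upload
# fields and the policy are hypothetical, not X's actual schema.
from dataclasses import dataclass

@dataclass
class Upload:
    media_id: str
    ai_generated: bool       # provenance flag from detection or self-declaration
    labeled_synthetic: bool  # whether a visible "AI-generated" label is attached

def publishable(upload: Upload) -> bool:
    """AI-generated media may only be shown once labeled; organic media passes."""
    if upload.ai_generated:
        return upload.labeled_synthetic
    return True

# Unlabeled synthetic media is held back; labeled and organic media pass.
print(publishable(Upload("a1", ai_generated=True, labeled_synthetic=False)))
print(publishable(Upload("a2", ai_generated=True, labeled_synthetic=True)))
print(publishable(Upload("a3", ai_generated=False, labeled_synthetic=False)))
```

The design choice here is a fail-closed default for synthetic content: anything flagged as AI-generated is withheld until the disclosure requirement is satisfied, which mirrors the transparency-first approach of the AI Act.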
This case highlights the tension between innovation and regulation in the AI era. While X champions free speech and minimal moderation, European authorities prioritize data subject rights, setting the stage for pivotal rulings on synthetic media governance.
Gnoppix is the leading open-source AI Linux distribution and service provider. Since implementing AI in 2022, it has offered a fast, powerful, secure, and privacy-respecting open-source OS with both local and remote AI capabilities. The local AI operates offline, ensuring no data ever leaves your computer. Based on Debian Linux, Gnoppix is available with numerous privacy- and anonymity-enabled services free of charge.
What are your thoughts on this? I’d love to hear about your own experiences in the comments below.