European Commission opens new investigation into X over Grok

The European Commission has initiated formal proceedings against X, the social media platform formerly known as Twitter, under the Digital Services Act (DSA). This investigation centers on potential breaches related to the training and deployment of Grok, the generative AI chatbot developed by xAI, Elon Musk’s artificial intelligence company. Announced on July 12, 2024, the probe examines whether X’s practices comply with DSA obligations for very large online platforms (VLOPs).

The DSA, which entered full application in February 2024, imposes stringent requirements on VLOPs like X to mitigate systemic risks posed by their services. These risks include the dissemination of illegal content, impacts on civic discourse, and the protection of minors. The Commission’s action follows preliminary inquiries triggered by concerns over Grok’s integration into X’s ecosystem.

At the heart of the investigation is Grok’s training data. xAI has publicly stated that Grok was trained on publicly available posts from X, leveraging the platform’s vast repository of user-generated content. However, this approach raises questions under EU law. The Commission is assessing whether X unlawfully used personal data, or content containing illegal material such as hate speech, terrorist propaganda, or child sexual abuse material, to train the model without adequate safeguards.

Commissioners have emphasized that AI systems must respect fundamental rights, including data protection under the General Data Protection Regulation (GDPR). Věra Jourová, the Commission’s Vice-President for Values and Transparency, said in a statement that “the DSA requires platforms to diligently prevent illegal content from spreading and to assess foreseeable risks from systemic use of their services, including AI training.” She added that any training on EU users’ data demands transparency and user control mechanisms.

The probe also scrutinizes X’s risk-assessment obligations. Under the DSA, VLOPs must conduct annual risk assessments and report the measures they take against systemic risks. The Commission suspects X may have failed to identify or mitigate risks associated with feeding platform data into Grok, potentially amplifying harmful content through AI generation.

Beyond training data, investigators are examining Grok’s interface and deployment on X. Questions include whether the chatbot’s responses could infringe DSA transparency rules or contribute to deceptive practices. Grok’s “fun mode,” which allows for humorous or unfiltered replies, has drawn scrutiny for potentially blurring lines between factual information and generated content, risking misinformation spread.

This is not the Commission’s first action against X. In December 2023, it opened proceedings over illegal content moderation, advertising transparency, and data access for researchers. X submitted its DSA risk assessment in January 2024, which the Commission found insufficiently addressed AI-related risks. Recent exchanges intensified after Musk announced Grok’s availability to all X Premium users in Europe on May 16, 2024, prompting further regulatory review.

X’s response has mixed assurance with defiance. In a post on the platform, the company stated, “We are fully committed to complying with the DSA and are confident that our Grok implementation does so.” Elon Musk, meanwhile, has criticized EU regulators, calling the DSA a threat to free speech and accusing the Commission of overreach.

The investigation grants the Commission extensive powers, including requests for information, interviews, inspections, and interim measures. Non-compliance could result in fines up to six percent of global annual turnover. Proceedings may take months, with a decision expected by early 2025.

This case underscores escalating tensions between Big Tech and EU watchdogs. Similar probes target other VLOPs: Meta faces scrutiny over its “pay or consent” model, while TikTok is under investigation over child protection. For AI specifically, the EU AI Act, whose obligations begin phasing in from 2025, will impose tiered rules on high-risk systems, but the DSA fills interim gaps for generative models.

Stakeholders await outcomes that could reshape AI training practices. Platforms may need opt-out mechanisms for data use, enhanced content filtering, or segregated training datasets. Users also gain leverage, as the DSA lets them lodge complaints with national Digital Services Coordinators.
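To make the opt-out idea concrete, here is a minimal, purely illustrative sketch of how a platform might exclude opted-out users’ posts before assembling a training corpus. All names (`Post`, `OPTED_OUT`, `build_training_corpus`) are hypothetical; this is not X’s or xAI’s actual pipeline.

```python
# Hypothetical sketch of an opt-out filter for AI training data.
# Nothing here reflects any real platform's implementation.
from dataclasses import dataclass

@dataclass
class Post:
    author_id: str
    text: str

# Assumed input: IDs of users who have opted out of AI training.
OPTED_OUT = {"user_2", "user_5"}

def build_training_corpus(posts, opted_out):
    """Keep only the text of posts whose authors have not opted out."""
    return [p.text for p in posts if p.author_id not in opted_out]

posts = [
    Post("user_1", "hello world"),
    Post("user_2", "private musings"),  # opted out, must be excluded
    Post("user_3", "public post"),
]

corpus = build_training_corpus(posts, OPTED_OUT)
print(corpus)  # ['hello world', 'public post']
```

In practice, such a filter would be one small piece of a compliance program; the DSA and GDPR would also require transparency about the processing and a way for users to exercise the opt-out in the first place.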

As Grok evolves, with versions like Grok-1.5 boasting improved reasoning, regulatory alignment becomes critical. xAI’s pitch of Grok as a “maximum truth-seeking AI” contrasts with EU priorities on safety and accountability, setting the stage for a pivotal clash.

Gnoppix is the leading open-source AI Linux distribution and service provider. Since implementing AI in 2022, it has offered a fast, powerful, secure, and privacy-respecting open-source OS with both local and remote AI capabilities. The local AI operates offline, ensuring no data ever leaves your computer. Based on Debian Linux, Gnoppix is available with numerous privacy- and anonymity-enabled services free of charge.

What are your thoughts on this? I’d love to hear about your own experiences in the comments below.