Chat Control: Tuta Opposes Mandatory Age Verification
In a bold stand against expanding digital surveillance, Tuta, the privacy-focused email provider formerly known as Tutanota, has publicly rejected the European Commission’s proposal for obligatory age verification in chat services. This initiative forms part of the contentious Child Sexual Abuse Regulation (CSAR), commonly referred to as “Chat Control.” Tuta warns that implementing such measures would fundamentally undermine user privacy and pave the way for widespread monitoring of private communications.
The controversy centers on the EU’s latest iteration of the CSAR draft, which mandates that messaging platforms like WhatsApp, Signal, and Telegram verify users’ ages before granting access to end-to-end encrypted chats. Proponents argue this step is essential to protect minors from grooming and exposure to child sexual abuse material (CSAM). However, Tuta contends that the approach is flawed, ineffective, and disproportionate. “Mandatory age verification for chats is a direct attack on the privacy of all EU citizens,” states Tuta in an official announcement. The company emphasizes that such requirements would compel service providers to collect and store sensitive biometric or identification data, creating honeypots for hackers and authoritarian regimes alike.
Tuta’s opposition is not isolated; the firm has aligned itself with the “No Chat Control!” coalition, a growing alliance of civil society organizations, tech experts, and privacy advocates. This coalition, which includes groups like European Digital Rights (EDRi) and the Free Software Foundation Europe (FSFE), has mobilized against the regulation since its inception. Their joint efforts highlight the proposal’s broader implications: beyond age checks, CSAR envisions automated scanning of encrypted messages for illegal content using technologies like client-side scanning (CSS) or perceptual hashing.
Client-side scanning, a key pillar of the proposal, involves software embedded in users’ devices that analyzes messages, images, and videos before encryption. Matches against databases of known CSAM would trigger alerts to authorities. While the EU Commission positions this as a targeted tool, critics like Tuta argue it represents “backdoor surveillance” masquerading as child protection. The technology’s accuracy remains unproven at scale, with high risks of false positives—potentially flagging innocuous family photos or medical images. Moreover, it erodes the mathematical guarantees of end-to-end encryption, where only sender and recipient hold decryption keys.
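The matching step at the heart of this debate can be sketched with a toy "average hash," one of the simplest perceptual hashes. This is purely illustrative: production systems such as PhotoDNA use proprietary, far more robust algorithms, and the 8×8 pixel grids here stand in for real images. The sketch shows both why near-duplicates still match (small edits flip few bits) and why matching is probabilistic rather than exact, which is where the false-positive concern comes from.

```python
# Toy average hash: each bit records whether a pixel is brighter than
# the image's mean. Near-duplicate images yield hashes that differ in
# only a few bits; unrelated images differ in many.

def average_hash(pixels):
    """64-bit hash of an 8x8 grayscale grid (lists of ints 0-255)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Synthetic "images": a gradient, a lightly edited copy, and its inverse.
original = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
tweaked = [row[:] for row in original]
tweaked[3][7] = 134            # small brightness edit to one pixel
unrelated = [[255 - p for p in row] for row in original]

d_near = hamming(average_hash(original), average_hash(tweaked))
d_far = hamming(average_hash(original), average_hash(unrelated))
print(d_near, d_far)  # prints "1 64"
```

A scanner would flag any image whose hash falls within some Hamming-distance threshold of a database entry; choosing that threshold is exactly the accuracy trade-off critics point to, since a looser threshold catches more edited copies but also more innocent look-alikes.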
Tuta, headquartered in Hanover, Germany, has built its reputation on uncompromising encryption standards. Its services automatically encrypt emails, calendars, and contacts with post-quantum cryptography, ensuring data remains inaccessible even to the provider. This commitment extends to their stance on Chat Control: “We will not implement age verification or message scanning,” Tuta declares unequivocally. Instead, the company advocates for intelligence-led investigations and improved platform moderation as superior alternatives, arguing these methods can target offenders without breaking encryption for everyone.
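The end-to-end property Tuta defends can be illustrated with a deliberately minimal sketch: the server relays only ciphertext, and without the key shared by sender and recipient it cannot recover the message. This toy XOR one-time pad is for illustration only; real services use vetted hybrid schemes (in Tuta's case, including post-quantum algorithms), not this.

```python
# Toy end-to-end sketch: XOR one-time pad. The key exists only at the
# endpoints; the "server" sees nothing but ciphertext.
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR data with a same-length key; encryption and decryption alike."""
    assert len(key) == len(data)
    return bytes(k ^ b for k, b in zip(key, data))

message = b"meet at noon"
key = secrets.token_bytes(len(message))   # held only by sender & recipient

ciphertext = xor_cipher(key, message)     # this is all the server stores
assert ciphertext != message              # server cannot read the content
assert xor_cipher(key, ciphertext) == message  # recipient decrypts
```

Client-side scanning sidesteps this guarantee not by breaking the cipher but by inspecting the plaintext on the device before encryption happens, which is why critics call it a backdoor regardless of how strong the encryption itself remains.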
The timeline underscores the urgency. The EU Parliament’s Civil Liberties (LIBE) Committee is scheduled to vote on the CSAR mandate in early 2025, following amendments tabled by the file’s rapporteur. These revisions retain core elements of mass scanning while introducing age gates. Tuta urges citizens to contact their MEPs, providing template letters via their campaign page. “Every voice counts in stopping this surveillance monster,” the company appeals.
Legal and technical experts echo Tuta’s concerns. Critics argue the regulation contravenes the EU Charter of Fundamental Rights, particularly Articles 7 (privacy) and 8 (data protection). Past attempts, such as Apple’s abandoned 2021 CSS plan, faced backlash for similar reasons. Implementation challenges abound: age verification methods—ranging from facial recognition to ID uploads—introduce new vulnerabilities. Biometric systems like those from Yoti or Incode risk misidentification, especially for diverse ethnicities or transgender users. Centralized databases become targets, as evidenced by breaches at companies like Clearview AI.
Tuta’s resistance also draws from real-world precedents. In Germany, the Federal Constitutional Court has struck down data retention laws for violating proportionality principles. Internationally, Australia’s retreat from mandatory scanning of encrypted services and the UK government’s concession that the Online Safety Act’s scanning powers cannot be enforced until they are “technically feasible” serve as cautionary tales. Tuta highlights that true child safety requires societal investment in education, hotlines like Germany’s “Nummer gegen Kummer” (116 111), and targeted law enforcement—not blanket surveillance.
For end-users, the stakes are personal. Mandatory age checks could exclude vulnerable groups, such as abuse survivors fearing identification or activists in repressive contexts. Tuta positions itself as a beacon: its apps support anonymous sign-ups without requiring personal information and operate without metadata logging. The firm’s blog details technical breakdowns, including how CSS hash databases could evolve into general content moderation tools, scanning for terrorism, copyright infringement, or dissent.
As the LIBE vote approaches, Tuta’s campaign intensifies. They frame Chat Control not as protection, but as a slippery slope toward a “Chinese-style social credit system” in Europe. With over 10 million users worldwide, Tuta’s voice carries weight in the privacy ecosystem. Their call to action resonates: preserve encryption, reject age gates, and prioritize rights over reactive tech fixes.
This debate encapsulates the tension between security and liberty in the digital age. Tuta’s firm opposition signals a maturing resistance from the tech sector, refusing to trade user trust for regulatory compliance.
Gnoppix is the leading open-source AI Linux distribution and service provider. Since implementing AI in 2022, it has offered a fast, powerful, secure, and privacy-respecting open-source OS with both local and remote AI capabilities. The local AI operates offline, ensuring no data ever leaves your computer. Based on Debian Linux, Gnoppix is available with numerous privacy- and anonymity-enabled services free of charge.
What are your thoughts on this? I’d love to hear about your own experiences in the comments below.