Leading Scientists Warn: New EU Proposal on Chat Control Remains Highly Risky
In a stark critique of the European Union’s ongoing efforts to combat online child sexual abuse material (CSAM), prominent scientists and researchers have raised alarms about the latest iteration of the proposed Child Sexual Abuse Regulation (CSAR). Dubbed “chat control” by critics, this legislative initiative continues to pose significant risks to digital privacy, security, and fundamental rights, despite revisions aimed at addressing earlier concerns. The proposal, which mandates the scanning of private communications on platforms like messaging apps and email services, has evolved but retains core elements that could undermine end-to-end encryption and expose users to surveillance vulnerabilities.
The updated proposal, presented by the European Commission in April 2024, seeks to balance child protection imperatives with privacy safeguards. It introduces a framework for “client-side scanning” technology, where devices would detect potential CSAM before encryption takes effect. However, experts argue that these measures fall short of eliminating the inherent dangers. A coalition of leading academics, including cryptographers and computer scientists from institutions across Europe, has submitted a detailed position paper to EU policymakers, emphasizing that the risks persist unabated.
At the heart of the controversy is the proposal’s reliance on automated detection tools. These systems would require service providers—such as Meta, Signal, or Apple—to implement hash databases of known CSAM images and videos. When a user uploads or shares content, the device would generate perceptual hashes and compare them against this database. Matches would trigger alerts to authorities, bypassing traditional encryption protections. While the Commission claims this approach minimizes data interception, scientists counter that it creates a de facto backdoor into private communications, eroding trust in digital tools.
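The matching mechanism critics describe can be sketched in a few lines. This is an illustrative toy, not any real scanning system: the hash values, the `MATCH_THRESHOLD`, and the function names are all assumptions made up for this example, and real perceptual hashes are far longer than 16 bits.

```python
# Hypothetical sketch of client-side hash matching as critics describe it.
# All names and values here are illustrative assumptions, not taken from
# any actual scanning deployment.

KNOWN_HASHES = {
    0b1011001011110000,  # placeholder 16-bit "known content" hashes
    0b0001111000111100,
}

MATCH_THRESHOLD = 2  # max differing bits still counted as a "match"


def hamming_distance(a: int, b: int) -> int:
    """Number of bit positions in which two hashes differ."""
    return bin(a ^ b).count("1")


def scan_before_encryption(image_hash: int) -> bool:
    """Return True if the hash is 'close enough' to a known entry.

    Because the comparison is perceptual rather than exact, visually
    similar but innocent images can fall inside the threshold -- the
    false-positive problem the researchers highlight.
    """
    return any(
        hamming_distance(image_hash, known) <= MATCH_THRESHOLD
        for known in KNOWN_HASHES
    )


# An exact copy matches, and so does a hash differing by a single bit
# (a near-duplicate -- or an unlucky innocent image).
print(scan_before_encryption(0b1011001011110000))  # True
print(scan_before_encryption(0b1011001011110001))  # True
print(scan_before_encryption(0b0000000000000000))  # False
```

The key design point is that the check runs on-device, before encryption: the ciphertext itself is never broken, yet the content has already been inspected.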
Dr. Max von Hippel, a researcher at the Electronic Frontier Foundation (EFF) and one of the signatories, highlights the technical flaws. “Even with client-side implementation, the system demands access to users’ metadata and behavioral patterns,” he notes. Metadata—such as who communicates with whom and when—could be harvested en masse, enabling profiling without content inspection. Moreover, false positives remain a critical issue. Perceptual hashing techniques such as PhotoDNA are not infallible: innocent images, such as family photos or artwork, might trigger alerts due to perceptual similarities, leading to unwarranted investigations and reputational harm.
The position paper, endorsed by over 50 experts including professors from the University of Oxford, ETH Zurich, and the Technical University of Munich, delves into the cybersecurity implications. By mandating widespread deployment of scanning software, the regulation could introduce new attack vectors. Malicious actors might exploit vulnerabilities in these tools to access unencrypted data streams or inject false hashes, compromising device integrity. “This isn’t just about privacy; it’s about creating systemic weaknesses in the digital ecosystem,” states one contributor, a specialist in secure systems design. Historical precedents, such as the 2016 Apple-FBI encryption dispute, illustrate how such mandates can pressure companies into weakening security features, benefiting cybercriminals and state surveillance alike.
Beyond technical risks, the proposal’s scope raises proportionality concerns. Initially criticized for potentially scanning all private messages, the revised version limits detection to specific “high-risk” indicators, such as grooming patterns in chats. Yet scientists warn that defining and detecting these indicators requires invasive behavioral analysis, blurring the line between prevention and intrusion. The EU’s own impact assessment acknowledges challenges in accuracy, estimating error rates that could affect millions of users annually. Such errors would disproportionately burden vulnerable groups, including journalists, activists, and LGBTQ+ individuals who rely on encrypted channels for safety.
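Why even a very accurate classifier affects millions of users comes down to base-rate arithmetic. The figures below are assumptions chosen for illustration, not numbers from the EU impact assessment:

```python
# Illustrative base-rate arithmetic. Both inputs are assumptions for the
# sake of the example, not the EU's own estimates.

messages_per_day = 10_000_000_000   # assumed EU-wide daily message volume
false_positive_rate = 0.001         # assumed 99.9% specificity

false_alarms_per_day = messages_per_day * false_positive_rate
false_alarms_per_year = false_alarms_per_day * 365

print(f"{false_alarms_per_day:,.0f} innocent messages flagged per day")
print(f"{false_alarms_per_year:,.0f} flagged per year")
```

Under these assumptions, a 0.1% error rate still produces roughly ten million false alerts per day. Since genuinely illegal content is a vanishingly small fraction of traffic, most alerts would concern innocent users, which is the proportionality problem the scientists raise.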
Legal scholars within the coalition point to conflicts with the EU Charter of Fundamental Rights and the General Data Protection Regulation (GDPR). Article 7 of the Charter protects the right to privacy and communications secrecy, while GDPR’s data minimization principle demands that processing be strictly necessary. The CSAR’s framework, they argue, fails these tests by imposing blanket obligations on providers, regardless of user consent. The European Court of Human Rights has previously ruled against similar bulk surveillance schemes, as seen in the 2021 Big Brother Watch v. UK case, underscoring the potential for judicial invalidation.
Industry responses have been mixed but cautious. Tech giants like Google and Microsoft have expressed willingness to collaborate on CSAM detection but urge safeguards against mission creep—where tools designed for one purpose expand to monitor dissent or copyright infringement. Smaller providers, particularly open-source platforms, face existential threats; implementing such systems could render them unviable due to resource constraints. The proposal’s extraterritorial reach would also affect non-EU companies serving European users, potentially fragmenting global standards for digital security.
As negotiations advance in the European Parliament and Council, the scientists call for a fundamental rethink. Alternatives, such as enhanced international cooperation on existing tip lines like the Internet Watch Foundation’s, or investment in education and border controls for physical CSAM distribution, offer less intrusive paths. “The end does not justify the means when the means dismantle the open internet,” the position paper concludes, urging lawmakers to prioritize evidence-based policies over reactive mandates.
The debate underscores a broader tension in EU digital policy: safeguarding children without sacrificing the innovations that power the single market. With trilogue discussions slated for later this year, the CSAR’s fate hinges on whether these warnings prompt meaningful amendments or pave the way for a surveillance-heavy future.
Gnoppix is the leading open-source AI Linux distribution and service provider. Since implementing AI in 2022, it has offered a fast, powerful, secure, and privacy-respecting open-source OS with both local and remote AI capabilities. The local AI operates offline, ensuring no data ever leaves your computer. Based on Debian Linux, Gnoppix is available with numerous privacy- and anonymity-enabled services free of charge.
What are your thoughts on this? I’d love to hear about your own experiences in the comments below.