
The Encryption Fight Isn't Over. It Just Got Quieter.

The EU's mass message scanning proposal was revised, not abandoned. The new version is quieter, more indirect — and for that reason, more difficult to stop.

The version of Chat Control that dominated the headlines is gone. The proposal that would have forced platforms to scan every private message, broken end-to-end encryption, and flagged millions of ordinary conversations to law enforcement — that version is off the table. The EU Parliament made that clear. And the interim scanning law that held things together expired on April 3, 2026.

What's left is Chat Control 2.0. It's more complicated. And it's designed to be.


What Just Happened

On March 11, the European Parliament voted 458 to 103 to extend the existing temporary scanning framework — but with a critical condition. Any scanning had to be strictly targeted, limited to users specifically identified by a judge as reasonably suspected of involvement in child sexual abuse. Mass, indiscriminate scanning of entire user populations would not be permitted.

The Council — representing EU member state governments — rejected that condition. They wanted broader scanning powers, without the judicial requirement. Trilogue negotiations broke down.

Then came an unusual move. Around March 20, conservative factions pushed for a vote on March 26. Digital rights group EDRi argued publicly that the Council had not accepted a single one of Parliament's substantive demands, and that the push was an attempt to rewrite the outcome after negotiators failed to get what they wanted. The vote failed. By 307 votes to 306 — a single vote — Parliament refused to extend the framework on the Council's terms. The interim law expired on April 3 with no replacement in place.

That is the win. It is not the end.

We covered the lead-up to this vote in detail in our March 18 piece. What follows is what happened next.


What Chat Control 2.0 Actually Proposes

The permanent regulation — formally called the Child Sexual Abuse Regulation, or CSAR — has been in negotiation since 2022. The Council's revised text dropped the most controversial mandatory encryption-breaking requirement. It preserved something more subtle.

Three components are worth understanding.

Voluntary scanning with consequences. The mandatory detection orders are gone — for now. Platforms can choose to scan unencrypted messages. But that "choice" comes with regulatory architecture attached. Platforms that don't scan face mandatory risk assessments and must demonstrate they have adopted "all reasonable mitigation measures." Scanning becomes the path of least resistance. The EFF has warned that this model could lead to private mass-scanning of non-encrypted services and pressure big providers to limit the kinds of secure communication tools they offer. The word "voluntary" is doing a lot of work in a framework where the alternative is regulatory hostility.

Risk mitigation obligations. Services classified as high-risk — which includes most social media and messaging platforms — must actively demonstrate they are addressing the risk of child abuse on their platforms. Patrick Breyer, the German digital rights lawyer who has tracked this legislation since it began, has described these requirements as effectively punishing privacy-respecting services — forcing them to implement surveillance tools to avoid liability. Platforms that protect user privacy are treated as suspect. Platforms that scan are compliant.

Mandatory age verification. This is the element that gets the least attention and may matter most. Chat Control 2.0 drops mandatory scanning of end-to-end encrypted messages, but retains a requirement that users verify their age before accessing encrypted messaging services. Under that framework, anonymous encrypted communication ends. The encryption survives. The anonymity doesn't.

This is a meaningful distinction that gets lost in most coverage. End-to-end encryption protects the content of what you say. Age verification eliminates the ability to communicate without being identified. A government cannot read your messages — but it knows who you are, when you sent them, and who you sent them to. For journalists, activists, abuse survivors, and anyone who relies on the ability to communicate privately, that is not a minor concession. The infrastructure of surveillance doesn't need to read your messages if it knows everything else.


Why the Revised Version Is More Dangerous

The original Chat Control proposal was relatively easy to explain. It would have broken encryption. Full stop. That argument landed with technologists, civil liberties groups, and a significant portion of Parliament.

Chat Control 2.0 does not break encryption directly. It surrounds encryption with conditions that make it difficult to use privately. Voluntary scanning pressure. Risk mitigation liability. Mandatory identity verification. None of these individually constitute a backdoor in the technical sense. Together, they achieve a similar outcome through regulatory architecture rather than code.

That's a deliberate strategic adjustment. Laws that ban a technology are easy to identify and challenge. Laws that create conditions where using that technology safely becomes legally or commercially impractical are easier to pass unnoticed and impossible to explain in a headline. By the time most people understand what this version of the law actually does, it may have already passed.


The Timeline

Trilogue negotiations are running on a fixed schedule. The second session, on April 16, focuses on the legal framework for detection orders and the treatment of encryption — the most consequential session in the near term. A third session is scheduled for May 4, covering risk assessment obligations. A fourth and presumably final negotiation is set for June 29, with formal adoption by Parliament and Council expected in July 2026.

The expiration of the interim law has added pressure. Proponents of broader scanning are framing the current absence of any legal framework as a "regulatory gap" — arguing that children are now less protected than they were two weeks ago. Whether that framing gains traction in the June session will largely determine what the final regulation looks like.

The April 16 session is the most important near-term indicator. If the Council moves toward accepting Parliament's position — targeted scanning under judicial authorization only, no age verification mandate, encryption explicitly protected — there is a path to a regulation that addresses child safety without dismantling private communication infrastructure. If the Council holds its position, June becomes a high-stakes negotiation between two institutions with fundamentally different views of what this law should do.


What to Watch

The Center for Democracy and Technology has noted that the forthcoming regulation presents an opportunity to protect against both a reintroduction of indiscriminate mass scanning and other mechanisms that undermine online anonymity — and that mitigation measures must not inadvertently harm the people the legislation is supposed to protect.

That framing matters. The debate is no longer about one bad proposal that can be blocked. It is about a set of interlocking obligations that, individually, sound reasonable — reduce risk, protect children, verify age — and, in combination, produce a surveillance framework without anyone having to call it one.

The debate is not over. It has moved into a phase where it is harder to follow and easier to lose.