<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
  <title>RedPosts</title>
  <subtitle>The mechanics behind the headlines.</subtitle>
  <link href="https://redposts.com/feed.xml" rel="self" />
  <link href="https://redposts.com/" />
  <updated>2026-04-14T00:00:00Z</updated>
  <id>https://redposts.com/</id>
  <author>
    <name>RedPosts</name>
  </author>
  <entry>
    <title>VPNs Are Sold As Privacy Tools — Here&#39;s When They&#39;re Lying to You</title>
    <link href="https://redposts.com/posts/vpns-privacy-tools-lies/" />
    <updated>2026-03-14T00:00:00Z</updated>
    <id>https://redposts.com/posts/vpns-privacy-tools-lies/</id>
    <content type="html">&lt;p&gt;VPN companies spend more on advertising than almost any other category in tech. They sponsor podcasts, flood YouTube, and make claims about privacy and security that sound reassuring but rarely hold up to scrutiny. Millions of people pay monthly subscriptions believing they&#39;ve bought themselves real protection.&lt;/p&gt;
&lt;p&gt;Some of them have. Most of them haven&#39;t.&lt;/p&gt;
&lt;p&gt;The tools themselves are legitimate and useful in the right context. But the claims surrounding them have drifted so far from reality that people end up paying for protection they don&#39;t have, against threats that don&#39;t work the way they&#39;ve been told.&lt;/p&gt;
&lt;p&gt;This article breaks down what a VPN actually does, where the marketing crosses into dishonesty, and what to look for if you decide you actually need one.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-a-vpn-actually-does&quot; tabindex=&quot;-1&quot;&gt;What a VPN Actually Does&lt;/h2&gt;
&lt;p&gt;A VPN — Virtual Private Network — does one core thing: it encrypts your internet traffic and routes it through a server in another location. This hides your activity from whoever sits between you and that server.&lt;/p&gt;
&lt;p&gt;In most everyday situations, that means two parties can no longer see what you&#39;re doing online:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Your ISP (Internet Service Provider)&lt;/strong&gt; — the company that provides your internet connection&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;The local network you&#39;re connected to&lt;/strong&gt; — the Wi-Fi router at a coffee shop, hotel, airport, or office&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That&#39;s genuinely useful. Here are the situations where a VPN earns its place:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Public Wi-Fi networks&lt;/strong&gt; — On an unsecured network, other users on the same network can potentially intercept unencrypted traffic. A VPN prevents this by encrypting everything before it leaves your device.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;ISP data collection&lt;/strong&gt; — In many countries, ISPs are legally permitted to log and sell your browsing history to advertisers. A VPN blocks your ISP from seeing which sites you visit.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Bypassing censorship&lt;/strong&gt; — In countries where certain websites or services are blocked at the network level, a VPN can route traffic through a server in another country, making it appear to originate there.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Geo-restricted content&lt;/strong&gt; — Streaming platforms serve different content libraries in different regions. A VPN lets you appear to be in a different country to access that content.&lt;/p&gt;
&lt;p&gt;These are real, legitimate use cases. The problem starts when VPN companies take these genuine benefits and inflate them into something much broader — and much less honest.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;where-the-marketing-falls-apart&quot; tabindex=&quot;-1&quot;&gt;Where the Marketing Falls Apart&lt;/h2&gt;
&lt;h3 id=&quot;%22you&#39;re-completely-anonymous-online%22&quot; tabindex=&quot;-1&quot;&gt;&amp;quot;You&#39;re completely anonymous online&amp;quot;&lt;/h3&gt;
&lt;p&gt;This is the most pervasive and damaging claim in the industry.&lt;/p&gt;
&lt;p&gt;A VPN hides your traffic from your ISP and local network. It does not make you anonymous to the websites and services you actually use. Google still knows who you are the moment you&#39;re signed in. Facebook tracks you across the web regardless of your IP address. Any site where you have an account knows exactly who you are — your IP address is just one of many signals used to identify you.&lt;/p&gt;
&lt;p&gt;There&#39;s also the issue of browser fingerprinting. Websites can identify you based on a combination of your browser version, screen resolution, installed fonts, timezone, language settings, and dozens of other data points — none of which a VPN changes. Companies like Google and Meta have built entire tracking infrastructures that operate completely independently of your IP address.&lt;/p&gt;
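&lt;p&gt;To make the mechanism concrete, here is a minimal Python sketch of how fingerprinting combines signals. The attribute names and values are invented for illustration — a real fingerprinting script reads dozens of properties directly from the browser — but the principle is the same.&lt;/p&gt;

```python
import hashlib

# Hypothetical browser attributes of the kind a page script can read.
# These values are invented for illustration only.
attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/125.0",
    "screen": "2560x1440",
    "timezone": "Europe/Berlin",
    "language": "en-US",
    "fonts": "Arial,Calibri,Consolas,Georgia",
}

# Serialize the signals in a fixed order and hash them. The result is a
# stable identifier that does not depend on the visitor's IP address --
# so a VPN, which only changes the IP, leaves it untouched.
canonical = "|".join(f"{key}={value}" for key, value in sorted(attributes.items()))
fingerprint = hashlib.sha256(canonical.encode()).hexdigest()[:16]
print(fingerprint)
```

&lt;p&gt;Change any single attribute and the identifier changes completely; keep them the same and it follows you across every IP address you use.&lt;/p&gt;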
&lt;p&gt;A VPN shifts who can see your traffic — from your ISP to your VPN provider. It does not remove you from the picture. It moves you to a different part of it.&lt;/p&gt;
&lt;h3 id=&quot;%22no-logs-policy-%E2%80%94-we-can-never-track-you%22&quot; tabindex=&quot;-1&quot;&gt;&amp;quot;No logs policy — we can never track you&amp;quot;&lt;/h3&gt;
&lt;p&gt;Almost every commercial VPN advertises a strict no-logs policy. Some of them mean it. A notable number do not.&lt;/p&gt;
&lt;p&gt;Multiple VPN providers that marketed themselves as log-free have subsequently handed user data to law enforcement — because they were keeping logs all along. The claims are unverifiable unless the provider has been independently audited by a credible third party, and even then, an audit is a snapshot in time, not a permanent guarantee.&lt;/p&gt;
&lt;p&gt;There is also a practical ceiling to what &amp;quot;no logs&amp;quot; can actually mean. Even a genuinely log-free VPN knows your payment details, your email address, and when your account was created. If you paid with a credit card and registered with a real email, the provider has enough to identify you if legally compelled to do so.&lt;/p&gt;
&lt;h3 id=&quot;%22military-grade-encryption%22&quot; tabindex=&quot;-1&quot;&gt;&amp;quot;Military-grade encryption&amp;quot;&lt;/h3&gt;
&lt;p&gt;This phrase has no technical meaning. It is a marketing term chosen to sound impressive.&lt;/p&gt;
&lt;p&gt;&amp;quot;Military-grade&amp;quot; is not a standard, a certification, or a specification. Most reputable VPNs use AES-256 encryption, which is strong and widely trusted — but so does nearly every other security tool on the market. The label adds nothing to the actual strength of the encryption.&lt;/p&gt;
&lt;p&gt;More importantly, encryption quality is only one component of a VPN&#39;s overall security. A VPN with strong encryption but DNS leaks will still expose your browsing activity. A VPN that keeps connection metadata logs still records what you&#39;re doing. The security of the full system matters — not one feature used as a headline.&lt;/p&gt;
&lt;h3 id=&quot;%22it-protects-you-from-hackers%22&quot; tabindex=&quot;-1&quot;&gt;&amp;quot;It protects you from hackers&amp;quot;&lt;/h3&gt;
&lt;p&gt;This claim is misleading in a way that creates real risk — because it gives people a false sense of security.&lt;/p&gt;
&lt;p&gt;A VPN does not protect against phishing attacks. It does not block malware. It does not compensate for weak or reused passwords. It does not stop you from downloading infected files. It does not prevent social engineering.&lt;/p&gt;
&lt;p&gt;These are the vectors through which the overwhelming majority of real cyberattacks against individuals actually happen. If you click a link in a fake email, install software from an untrusted source, or use the same password across multiple accounts, a VPN will do nothing to prevent the damage. Believing otherwise is exactly the kind of false confidence the marketing is designed to create.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-dns-leak-problem-most-users-don&#39;t-know-about&quot; tabindex=&quot;-1&quot;&gt;The DNS Leak Problem Most Users Don&#39;t Know About&lt;/h2&gt;
&lt;p&gt;Even when a VPN is working correctly, there is a specific technical failure mode that can silently undermine the entire point of using one: DNS leaks.&lt;/p&gt;
&lt;p&gt;When you type a website address into your browser, your device sends a DNS query — essentially asking &amp;quot;what&#39;s the IP address for this domain?&amp;quot; — before the actual connection is made. If your VPN is configured incorrectly, these DNS queries can bypass the VPN tunnel entirely and go directly to your ISP&#39;s DNS servers, revealing every site you visit even while the VPN appears to be active.&lt;/p&gt;
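&lt;p&gt;The two-step sequence is easy to see from code. The Python sketch below performs the lookup as a separate, observable step — and that lookup is the event that leaks when DNS is not routed through the tunnel. (It resolves &lt;code&gt;localhost&lt;/code&gt; so the example stays self-contained; in practice the name would be whatever site you visit.)&lt;/p&gt;

```python
import socket

# Step 1: the DNS lookup. This query is a separate network event, and it
# is what leaks if the operating system sends it to the ISP's resolver
# instead of through the VPN tunnel.
host = "localhost"  # stands in for any site you visit
records = socket.getaddrinfo(host, 443, type=socket.SOCK_STREAM)
ip_address = records[0][4][0]
print(f"{host} resolves to {ip_address}")

# Step 2: only after resolution does the browser open its connection to
# that address. A VPN that tunnels step 2 but not step 1 still reveals
# every domain you look up.
```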
&lt;p&gt;Many VPN apps include built-in DNS leak protection, but it is worth verifying independently. Tools like dnsleaktest.com let you check whether your DNS queries are actually routing through your VPN provider or leaking out to your ISP without your knowledge.&lt;/p&gt;
&lt;p&gt;Related to this is the kill switch — a feature that cuts your internet access entirely if the VPN connection drops unexpectedly, rather than allowing your traffic to continue unprotected. Without a kill switch, a momentary VPN disconnection can briefly expose your real IP address and location. It is a basic feature that any serious VPN provider should offer and have enabled by default.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;when-you-don&#39;t-actually-need-a-vpn&quot; tabindex=&quot;-1&quot;&gt;When You Don&#39;t Actually Need a VPN&lt;/h2&gt;
&lt;p&gt;Many people are paying for something that provides no meaningful protection against the threats they actually face.&lt;/p&gt;
&lt;p&gt;If you primarily use the internet at home on a network you control, your local network is not your threat. Your main exposure is your ISP — and while ISP data collection is a genuine privacy concern, for most people it is a background issue rather than an active, targeted threat.&lt;/p&gt;
&lt;p&gt;If you use HTTPS websites — which now account for the vast majority of web traffic — your data is already encrypted between your browser and the server. Your ISP can see which domain you visited, but not what you did there.&lt;/p&gt;
&lt;p&gt;If your primary risks are malware, phishing, and account compromise — which statistically represent the most likely threats for most individuals — a VPN addresses none of them. The same money spent on a password manager and up-to-date security software addresses threats that are far more likely to affect you.&lt;/p&gt;
&lt;p&gt;A VPN is the right tool for specific, defined jobs. It is not a general-purpose privacy or security solution, and it should not be treated as one.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-to-actually-look-for&quot; tabindex=&quot;-1&quot;&gt;What to Actually Look For&lt;/h2&gt;
&lt;p&gt;If a VPN does address your situation, here is what separates credible providers from the heavily marketed alternatives:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Independent audits&lt;/strong&gt; — Look for providers that have commissioned credible third-party audits of their infrastructure and no-logs claims, and published the results publicly. This is the only meaningful verification that a no-logs policy is real — but audited providers are the minority, not the norm.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Transparent ownership and jurisdiction&lt;/strong&gt; — Some VPN companies have unclear ownership structures or operate from countries with aggressive data retention laws. Jurisdiction matters because it determines what a government can legally compel a provider to disclose.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Open source clients&lt;/strong&gt; — Providers whose applications are open source can be independently reviewed for unexpected data collection or security vulnerabilities. Closed-source apps require a much higher degree of trust.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Avoid free VPNs&lt;/strong&gt; — Free VPN services have to generate revenue somewhere. In most cases, that means collecting and monetizing user data — the exact opposite of the stated purpose.&lt;/p&gt;
&lt;p&gt;The providers worth considering are rarely the ones with the largest advertising presence — that inverse relationship is not a coincidence. A simple search for &amp;quot;VPN independent audit&amp;quot; or checking resources like privacyguides.org will point you toward options that have been vetted by people with no financial stake in the recommendation.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-bottom-line&quot; tabindex=&quot;-1&quot;&gt;The Bottom Line&lt;/h2&gt;
&lt;p&gt;A VPN is a useful, specific tool. It hides your traffic from your local network and your ISP. In the right context — public Wi-Fi, ISP data collection concerns, censorship bypass — it is worth having.&lt;/p&gt;
&lt;p&gt;It does not make you anonymous. It does not protect against the most common forms of cyberattack. It does not mean you cannot be identified or tracked. And no-logs policies require verification, not trust.&lt;/p&gt;
&lt;p&gt;The VPN industry invests heavily in advertising because margins are high and the claims are difficult for most consumers to evaluate. Understanding what the tool actually does — and what it does not — is the only reliable way to decide whether it solves a problem you genuinely have.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>How Google Shapes What You Think Is True</title>
    <link href="https://redposts.com/posts/how-google-shapes-truth/" />
    <updated>2026-03-15T00:00:00Z</updated>
    <id>https://redposts.com/posts/how-google-shapes-truth/</id>
    <content type="html">&lt;p&gt;When you search for something and an answer appears at the top of the page — not a link, but an answer, stated plainly, attributed to no one — most people accept it. They have no particular reason not to. It&#39;s Google. It looked it up.&lt;/p&gt;
&lt;p&gt;That moment, repeated billions of times a day, is one of the most consequential editorial acts in human history. And it happens with almost no public scrutiny.&lt;/p&gt;
&lt;p&gt;Google is not a library. It is not a neutral index of the web. It is a system built by a company with financial interests, ideological assumptions baked into its engineering choices, and an enormous commercial stake in keeping people on its properties rather than sending them elsewhere. Every search result is the output of decisions — about what sources to trust, what content to promote, what to bury, and increasingly, what to simply state as fact without citation.&lt;/p&gt;
&lt;p&gt;Understanding how that system actually works — and what it gets wrong — matters for anyone who uses it to understand the world.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-illusion-of-objectivity&quot; tabindex=&quot;-1&quot;&gt;The Illusion of Objectivity&lt;/h2&gt;
&lt;p&gt;Search engines feel objective because they operate through algorithms rather than editors. There is no masthead, no editorial board, no letters column. The results appear as if they were discovered rather than chosen.&lt;/p&gt;
&lt;p&gt;But algorithms are not neutral. They are written by people, trained on data selected by people, and optimized toward goals defined by people. Every parameter in Google&#39;s ranking system reflects a judgment about what &amp;quot;good&amp;quot; looks like — what counts as authoritative, what signals trustworthiness, what user behavior indicates satisfaction.&lt;/p&gt;
&lt;p&gt;For most of its history, Google&#39;s primary ranking signal was links: pages that other pages linked to were assumed to be more valuable. This was a reasonable heuristic in the early web. It was also gameable from day one, which launched an entire industry — search engine optimization — dedicated to manipulating it.&lt;/p&gt;
&lt;p&gt;Google has spent two decades responding to that manipulation with increasingly complex countermeasures. The result is a system of extraordinary opacity. Nobody outside the company fully understands how rankings are determined. Researchers reverse-engineer pieces of it. Leaks occasionally reveal corners of the architecture. But the core algorithm is a trade secret, and Google treats it as one.&lt;/p&gt;
&lt;p&gt;A system that shapes what billions of people believe to be true, that operates in complete secrecy, and that is accountable to no one but its shareholders deserves more scrutiny than it gets.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-the-featured-snippet-does&quot; tabindex=&quot;-1&quot;&gt;What the Featured Snippet Does&lt;/h2&gt;
&lt;p&gt;In 2014, Google began introducing what it calls &amp;quot;featured snippets&amp;quot; — boxes that appear above the standard results, presenting a direct answer to a query pulled from a webpage. The intent was to give users faster answers. The effect was something else.&lt;/p&gt;
&lt;p&gt;When Google extracts a sentence from a webpage and presents it in an answer box, it strips away context, removes the source&#39;s framing, and launders the claim through Google&#39;s implied authority. Users see the answer as Google&#39;s answer, not as a claim made by a particular website with its own perspective and interests.&lt;/p&gt;
&lt;p&gt;The consequences have been well-documented. Google&#39;s answer boxes have told users that presidents of the United States were members of the Ku Klux Klan, that eating rocks is beneficial for health, and that certain medications are interchangeable when they are not. These are not edge cases — they are predictable failures of a system that treats surface-level pattern matching as knowledge retrieval.&lt;/p&gt;
&lt;p&gt;More subtly, the featured snippet systematically favors certain types of claims. Simple declarative sentences get extracted; nuanced analysis does not. Sources that write in clear question-and-answer formats get promoted; sources that acknowledge complexity get passed over. The architecture of the answer box creates pressure on content producers to write in ways that the machine can easily summarize, which over time shapes what kind of information gets produced.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-autocomplete-problem&quot; tabindex=&quot;-1&quot;&gt;The Autocomplete Problem&lt;/h2&gt;
&lt;p&gt;Before you finish typing, Google offers suggestions. These suggestions are not random — they reflect what other users have searched for, filtered through Google&#39;s own policies about what it will and will not complete.&lt;/p&gt;
&lt;p&gt;Autocomplete shapes behavior in ways that are difficult to measure but hard to dispute. When a user begins typing a question and Google completes it in a particular direction, that completion influences what they search for. If certain completions are suppressed — as they are, routinely, for queries Google deems sensitive — users may never form the question they were trying to ask.&lt;/p&gt;
&lt;p&gt;Google does not publish a comprehensive list of what it suppresses in autocomplete. It acknowledges that it suppresses some categories — queries related to illegal activity, for instance, and certain political topics in certain regions. The criteria for suppression are not transparent, and they vary by country in ways that are not always explained.&lt;/p&gt;
&lt;p&gt;This is not a hypothetical concern about potential censorship. It is an active, ongoing system of editorial control over the questions people ask, exercised by a private company, invisible to users.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;knowledge-panels-and-the-construction-of-fact&quot; tabindex=&quot;-1&quot;&gt;Knowledge Panels and the Construction of Fact&lt;/h2&gt;
&lt;p&gt;For many searches — public figures, companies, historical events, scientific concepts — Google now displays a &amp;quot;knowledge panel&amp;quot; alongside the results. These panels present structured information: birth dates, descriptions, relationships, classifications.&lt;/p&gt;
&lt;p&gt;The information in knowledge panels comes primarily from Wikidata and Wikipedia, with some additional sourcing from across the web. Google did not build this knowledge base; it aggregated it. But by presenting it in a structured panel attached to its own brand, Google takes implicit responsibility for its accuracy.&lt;/p&gt;
&lt;p&gt;Knowledge panels are wrong with surprising regularity. They misidentify people&#39;s occupations, assign incorrect birth dates, describe companies inaccurately, and sometimes attribute quotes, affiliations, or characteristics to people who have none of those things. Corrections are difficult to obtain. The process for requesting changes to a knowledge panel is opaque, often unresponsive, and in many cases requires the subject to prove their own identity to a tech company&#39;s satisfaction.&lt;/p&gt;
&lt;p&gt;For private individuals and small organizations, an inaccurate knowledge panel can be professionally damaging and nearly impossible to fix. The power asymmetry is stark: Google defines you to anyone who searches your name, and your ability to contest that definition is minimal.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;search-as-a-market&quot; tabindex=&quot;-1&quot;&gt;Search as a Market&lt;/h2&gt;
&lt;p&gt;Google&#39;s search business is an advertising business. The company generates the majority of its revenue by selling placement — companies pay to appear in results when users search for relevant terms.&lt;/p&gt;
&lt;p&gt;This creates a conflict that Google manages through structural separation: paid results are labeled, organic results are not supposed to be influenced by advertising relationships. In principle, the editorial and commercial functions are separate.&lt;/p&gt;
&lt;p&gt;In practice, the line is less clear. Google&#39;s properties — YouTube, Maps, Shopping, Flights, Hotels — consistently appear prominently in search results for relevant queries. When a user searches for a restaurant, Google Maps appears above organic results from restaurant review sites. When a user searches for a product, Google Shopping results appear before links to retailers&#39; own pages. Google argues this is because its products are the best answer to the query. Critics argue it is vertical integration that uses the search monopoly to advantage other Google businesses.&lt;/p&gt;
&lt;p&gt;Regulators in multiple jurisdictions have investigated and in some cases ruled against Google on exactly these grounds. The argument that a dominant search engine can simultaneously be a neutral discovery tool and a platform for its own commercial properties has not survived scrutiny in every legal system that has examined it.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-quality-rater-problem&quot; tabindex=&quot;-1&quot;&gt;The Quality Rater Problem&lt;/h2&gt;
&lt;p&gt;Google employs tens of thousands of human &amp;quot;quality raters&amp;quot; — contractors who evaluate search results according to guidelines Google provides. These ratings feed into the training of Google&#39;s ranking algorithms. The raters do not directly change results; they provide signal that shapes how the machine learns.&lt;/p&gt;
&lt;p&gt;Google publishes its quality rater guidelines, which run to hundreds of pages. They define concepts like &amp;quot;expertise, authoritativeness, and trustworthiness&amp;quot; (E-A-T, now E-E-A-T) that the algorithm is supposed to reward. These guidelines encode real judgments about epistemology: what counts as an expert, what kind of evidence is authoritative, which institutions should be trusted.&lt;/p&gt;
&lt;p&gt;The guidelines lean heavily on credentials and institutional affiliation. A medical claim from a licensed physician is treated as more reliable than the same claim from an uncredentialed source. This is a reasonable heuristic in many cases. It is also a heuristic that systematically advantages established institutions and disadvantages dissenting views — including cases where established institutions have been wrong and dissenters have been right.&lt;/p&gt;
&lt;p&gt;The quality rater system gives Google enormous influence over what kind of knowledge counts as legitimate. That influence is exercised through guidelines written by a private company, applied by contractors under nondisclosure agreements, and used to train systems that the public cannot examine.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-changes-when-you-know-this&quot; tabindex=&quot;-1&quot;&gt;What Changes When You Know This&lt;/h2&gt;
&lt;p&gt;None of this means Google is useless, or that its results are systematically wrong, or that alternative search engines are more trustworthy. Google&#39;s search product is technically sophisticated and often genuinely useful. The problem is not that it fails constantly — it is that its authority is treated as more absolute than it deserves to be.&lt;/p&gt;
&lt;p&gt;A few things follow from understanding how the system works.&lt;/p&gt;
&lt;p&gt;The position of a result does not indicate its accuracy. Google&#39;s ranking rewards signals that correlate with quality — links, engagement, site structure, institutional affiliation — but correlation is not equivalence. A top-ranked page can be wrong. A buried page can be right.&lt;/p&gt;
&lt;p&gt;Featured snippets and knowledge panels are machine-generated extractions, not verified facts. They should be treated as a starting point for investigation, not as conclusions. For anything consequential — medical questions, legal situations, financial decisions — following the link and reading the source is not optional.&lt;/p&gt;
&lt;p&gt;What Google does not show is as important as what it does show. Search results are a curated sample of available information. The curation decisions are opaque, influenced by commercial interests, and reflect assumptions about authority that are not universally shared. Searching multiple engines, going directly to primary sources, and using specialized databases for specialized questions are not paranoid behaviors — they are basic information hygiene.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-infrastructure-problem&quot; tabindex=&quot;-1&quot;&gt;The Infrastructure Problem&lt;/h2&gt;
&lt;p&gt;The deeper issue is not any specific failure of Google&#39;s search product. It is the structural position Google occupies.&lt;/p&gt;
&lt;p&gt;When a single private company mediates most of the world&#39;s access to information, the editorial choices embedded in its systems have civilizational consequences. Google&#39;s decisions about what counts as authoritative, what gets suppressed, and what gets presented as uncontested fact shape what populations believe to be true — about health, about politics, about history, about each other.&lt;/p&gt;
&lt;p&gt;This is not the kind of power that is compatible with the level of transparency and accountability Google currently provides. A company whose algorithm shapes public epistemology should be subject to meaningful external scrutiny. It is not. Its systems are trade secrets. Its quality rater guidelines are the extent of its public disclosure. Its decisions about suppression and promotion are not subject to any meaningful democratic oversight.&lt;/p&gt;
&lt;p&gt;That is the real problem. Not that Google gets things wrong sometimes — every information system does. But that Google gets to define what &amp;quot;right&amp;quot; looks like, at global scale, in secret, and without meaningful recourse for those affected by its judgments.&lt;/p&gt;
&lt;p&gt;The search box feels like a window. It is, more accurately, a mirror — reflecting back a version of reality that a single company, with its own interests and assumptions, has decided you should see.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>The Breach Nobody Talked About</title>
    <link href="https://redposts.com/posts/breach-nobody-talked-about/" />
    <updated>2026-03-16T00:00:00Z</updated>
    <id>https://redposts.com/posts/breach-nobody-talked-about/</id>
    <content type="html">&lt;p&gt;In 2025, the United States recorded 3,322 reported data breaches — a record high, representing a 4% increase over the previous year. That works out to roughly nine breaches every single day.&lt;/p&gt;
&lt;p&gt;You probably heard about a handful of them.&lt;/p&gt;
&lt;p&gt;The gap between what actually happens and what makes the news is not a minor discrepancy. It is the norm. The breaches that get coverage are the ones with dramatic numbers or recognizable names. The thousands of others — affecting hospitals, local governments, insurance companies, logistics firms, and payroll processors — move through the system quietly, noticed only by the people whose data was taken.&lt;/p&gt;
&lt;p&gt;This is a look at what 2025 actually looked like, why most of it stayed invisible, and what the pattern tells us about how data security works in practice.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-actually-happened-in-2025&quot; tabindex=&quot;-1&quot;&gt;What Actually Happened in 2025&lt;/h2&gt;
&lt;p&gt;The scale is difficult to absorb. According to &lt;a href=&quot;https://www.ibm.com/reports/data-breach&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;IBM&#39;s 2025 Cost of a Data Breach Report&lt;/a&gt;, the global average cost of a data breach reached $4.44 million — while US breaches hit a record $10.22 million, a 9% increase year over year, driven by regulatory penalties and slower detection times. That figure covers detection, containment, notification, legal exposure, and reputational damage. It does not cover what happens to the individuals whose data was taken.&lt;/p&gt;
&lt;p&gt;The largest single incident of the year involved over 16 billion leaked credentials — usernames and passwords — from platforms including Google, Apple, and Facebook. To put that in context, there are approximately 5.5 billion internet users globally. This was not a breach of one company&#39;s systems. It was a compiled leak aggregating credentials from hundreds of previous breaches, surfaced in one massive dataset that circulated among cybercriminals.&lt;/p&gt;
&lt;p&gt;Healthcare was the hardest-hit sector. &lt;a href=&quot;https://www.healthcaredive.com/news/yale-new-haven-health-data-breach-5-6-million/746236/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Yale New Haven Health disclosed&lt;/a&gt; that 5.56 million patients had been affected by a breach detected on March 8, 2025 — the largest healthcare breach reported to federal regulators that year. Anne Arundel Dermatology saw the personal information of nearly 1.9 million individuals compromised. A ransomware attack on Union County, Ohio, exposed the Social Security numbers, financial information, and medical details of more than 45,000 residents and employees.&lt;/p&gt;
&lt;p&gt;Financial services reported the greatest number of individual incidents at 739, followed by healthcare at 534 and professional services at 478.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-third-party-problem-nobody-wants-to-talk-about&quot; tabindex=&quot;-1&quot;&gt;The Third-Party Problem Nobody Wants to Talk About&lt;/h2&gt;
&lt;p&gt;The most consistent pattern across 2025&#39;s major breaches was not sophisticated hacking. It was third-party access.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hipaajournal.com/conduent-business-solutions-data-breach/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Conduent Business Services&lt;/a&gt;, a New Jersey-based business services provider, was breached between October 2024 and January 2025. The total number of affected individuals is still under investigation — confirmed figures passed 10 million before a filing in Texas pushed the count above 15 million — because Conduent processes data on behalf of hundreds of clients. About 462,000 customers of Blue Cross Blue Shield of Montana had their details exposed through Conduent alone. Volvo Group North America disclosed in February 2026 that nearly 17,000 of its employees were also caught in the same breach, notified more than a year after the original intrusion.&lt;/p&gt;
&lt;p&gt;Coinbase confirmed an insider breach in February 2026 after a contractor improperly accessed customer data in December 2025. The contractor had visibility into names, email addresses, phone numbers, dates of birth, KYC verification details, wallet balances, and transaction histories of around 30 affected customers. The contractor no longer works with the firm.&lt;/p&gt;
&lt;p&gt;The pattern here is consistent with what security researchers have been saying for years: your data is only as safe as the least secure vendor your service provider uses. When you sign up for an insurance plan, a bank account, or a subscription service, you are implicitly trusting their entire supply chain of data processors.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;why-most-breaches-never-make-the-news&quot; tabindex=&quot;-1&quot;&gt;Why Most Breaches Never Make the News&lt;/h2&gt;
&lt;p&gt;The media coverage of data breaches follows a predictable formula. A breach needs either a large number — tens of millions of affected records — or a recognizable brand name to generate significant coverage. Everything below that threshold passes largely unnoticed outside specialist publications.&lt;/p&gt;
&lt;p&gt;This is not purely a media failure. It is partly structural. In the United States, breach notification laws vary by state and sector. Companies are generally required to notify affected individuals and relevant regulators, but the timing, format, and public disclosure requirements differ. Many breaches are disclosed quietly through letters to state attorneys general or filings with the Department of Health and Human Services — technically public, practically invisible.&lt;/p&gt;
&lt;p&gt;The result is a population that receives breach notification letters regularly — 80% of surveyed consumers received at least one in the past twelve months, according to the Identity Theft Resource Center — but has little broader context for what those letters mean or what to do about them.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-the-numbers-actually-mean-for-you&quot; tabindex=&quot;-1&quot;&gt;What the Numbers Actually Mean for You&lt;/h2&gt;
&lt;p&gt;The most useful thing to understand about the 2025 breach landscape is that what data was taken matters more than how many records were involved.&lt;/p&gt;
&lt;p&gt;Two-thirds of reported breaches involved Social Security numbers. Unlike a compromised password, a Social Security number cannot be changed. Once exposed, it remains a vector for identity theft, fraudulent tax filings, and new account fraud indefinitely. Credit card numbers, by contrast, can be canceled and reissued — which is why they are a less durable, and ultimately less valuable, target for fraudsters.&lt;/p&gt;
&lt;p&gt;The practical implication is straightforward. If your data has been in a breach that involved Social Security numbers — and statistically, given the volume of healthcare and financial sector incidents, there is a reasonable chance it has — the relevant risk is not immediate. It is long-term and intermittent. Fraudsters acquire large datasets and use them months or years later, when scrutiny has faded.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-to-do-that-actually-helps&quot; tabindex=&quot;-1&quot;&gt;What to Do That Actually Helps&lt;/h2&gt;
&lt;p&gt;Most advice given after data breaches is either too vague to be useful or too late to matter. Here is what is actually worth doing:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Freeze your credit.&lt;/strong&gt; This is the single most effective action available to individuals following a breach involving personal identifiers. A credit freeze prevents new lines of credit from being opened in your name without your explicit action to unfreeze. It is free at all three major credit bureaus — Equifax, Experian, and TransUnion — and does not affect your existing accounts or credit score. It should be done proactively, not reactively.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Use a password manager.&lt;/strong&gt; The 16 billion credential leak was not primarily a result of sophisticated attacks. It was a consequence of credential reuse — people using the same password across multiple services, meaning one compromised account cascades into many. A password manager generates and stores unique passwords for every service, eliminating this risk.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Enable two-factor authentication — but not SMS-based.&lt;/strong&gt; Text message-based two-factor authentication is better than nothing, but it is vulnerable to SIM swapping attacks, where an attacker convinces a carrier to transfer your phone number to a new SIM. App-based authentication using tools like Google Authenticator or Authy is significantly more resistant.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Monitor for breach exposure.&lt;/strong&gt; Services like &lt;a href=&quot;https://haveibeenpwned.com&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Have I Been Pwned&lt;/a&gt; allow you to check whether your email address appears in known breach datasets and receive alerts when new breaches are added. It is free and takes less than a minute to set up.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Watch for healthcare fraud specifically.&lt;/strong&gt; Given the volume of healthcare breaches in 2025, monitoring for fraudulent medical billing is particularly relevant. Unexpected bills for services you did not receive, or letters from insurers for claims you did not make, are indicators worth investigating.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-bottom-line&quot; tabindex=&quot;-1&quot;&gt;The Bottom Line&lt;/h2&gt;
&lt;p&gt;Record breach numbers in 2025 did not translate into record public awareness. Most people know, abstractly, that data breaches happen constantly. Far fewer understand the specific patterns — which sectors are hit hardest, which types of data create the most durable risk, and what the realistic consequences look like over time.&lt;/p&gt;
&lt;p&gt;The gap between what is disclosed and what is understood is where the real damage happens. Breach notification letters get filed or discarded. The data circulates. The fraud arrives later, when the connection is harder to make.&lt;/p&gt;
&lt;p&gt;The record is not a warning about a future threat. It is a description of a system that is already failing, quietly, at scale.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Billions of Passwords Are Being Stolen Right Now — And Most People Have No Idea How</title>
    <link href="https://redposts.com/posts/infostealer-malware-password-theft/" />
    <updated>2026-03-17T00:00:00Z</updated>
    <id>https://redposts.com/posts/infostealer-malware-password-theft/</id>
    <content type="html">&lt;p&gt;Last year, headlines announced that 16 billion passwords had been leaked from platforms including Google, Apple, and Facebook. The number was alarming. It was also misleading.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.bleepingcomputer.com/news/security/no-the-16-billion-credentials-leak-is-not-a-new-data-breach/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Security researchers at BleepingComputer quickly clarified&lt;/a&gt; that this was not a new breach of those platforms. It was a compiled dataset — a massive aggregation of credentials stolen over many years from hundreds of different services, assembled into one place and circulating among cybercriminals. None of the named platforms had been newly compromised.&lt;/p&gt;
&lt;p&gt;The clarification matters. Not because the threat is smaller than the headline suggested — it isn&#39;t — but because understanding what is actually happening is the only way to protect yourself against it.&lt;/p&gt;
&lt;p&gt;The real story is infostealer malware. And it is considerably more serious than a recycled credential dump.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-infostealer-malware-actually-is&quot; tabindex=&quot;-1&quot;&gt;What Infostealer Malware Actually Is&lt;/h2&gt;
&lt;p&gt;An infostealer is a category of malware with one purpose: to silently extract credentials, browser cookies, saved passwords, and authentication tokens from an infected device, then transmit everything to a remote server controlled by criminals — usually within minutes of infection, often without any visible sign that anything has happened.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.pentestpartners.com/security-blog/2025-the-year-of-the-infostealer/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;According to Pen Test Partners&lt;/a&gt;, in 2025 infostealers became the fastest-growing malware category, overtaking ransomware in deployment and spread. The most prevalent families — Lumma Stealer, RedLine, StealC, and Vidar — are all available as malware-as-a-service. Anyone can rent access for between $250 and $1,000 per month and receive a dashboard, automatic updates, and support. The U.S. Department of Justice and FBI, working with Microsoft, &lt;a href=&quot;https://www.bleepingcomputer.com/news/security/lumma-infostealer-malware-operation-disrupted-2-300-domains-seized/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;seized 2,300 domains associated with the Lumma Stealer operation in May 2025&lt;/a&gt;. Parts of the infrastructure survived the takedown and the operation resumed activity within weeks — underscoring how resilient this ecosystem has become.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;how-your-device-gets-infected&quot; tabindex=&quot;-1&quot;&gt;How Your Device Gets Infected&lt;/h2&gt;
&lt;p&gt;The infection methods that delivered the most infostealers in 2025 do not rely on sophisticated exploits. They rely on normal user behavior.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fake software downloads&lt;/strong&gt; — pirated software, free versions of paid tools, game cheats, and key generators are among the most common delivery mechanisms. The download appears to work as expected. The infostealer installs silently alongside it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Malicious search advertisements&lt;/strong&gt; — attackers purchase search ads for popular software names, directing users to convincing lookalike download pages. The downloaded file installs the legitimate software and the infostealer simultaneously.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;ClickFix attacks&lt;/strong&gt; — a method that surged 517% in 2025. The user is shown a fake error message, CAPTCHA, or support page that instructs them to open a Terminal or PowerShell window and paste a command to &amp;quot;fix&amp;quot; a problem. The command installs the infostealer. The critical rule: no legitimate software, website, or support page will ever ask you to paste a command into a Terminal or PowerShell prompt. If any site asks you to do this — regardless of how official it looks — close it immediately.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fake software updates&lt;/strong&gt; — popups claiming a browser, media player, or system component needs updating, delivering malware instead.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.pentestpartners.com/security-blog/2025-the-year-of-the-infostealer/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Pen Test Partners&lt;/a&gt; documented a macOS attack in 2025 where attackers used Google Ads and lookalike domains to redirect users to a fake Homebrew installer page visually indistinguishable from the legitimate one. The command on the page used a forced-copy button concealing a malicious payload appended after the legitimate command — meaning the user never saw what they were actually running.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-gets-stolen-and-what-happens-next&quot; tabindex=&quot;-1&quot;&gt;What Gets Stolen and What Happens Next&lt;/h2&gt;
&lt;p&gt;Once an infostealer executes, it moves fast. A typical infostealer will harvest saved passwords from every browser on the device, session cookies that allow access to logged-in accounts without a password, cryptocurrency wallet details, saved credit card information, and credentials stored in unsecured password managers.&lt;/p&gt;
&lt;p&gt;The stolen data is compiled into what criminals call a &amp;quot;log&amp;quot; — a package containing everything extracted from one infected device — then sold on cybercrime markets. &lt;a href=&quot;https://www.kelacyber.com/blog/understanding-the-infostealer-epidemic/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;KELA&#39;s 2025 infostealer report&lt;/a&gt; found that 330 million credentials were stolen from 4.3 million infected devices in 2024 alone. The top three infostealer strains — Lumma, StealC, and RedLine — accounted for over 75% of infected machines.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.verizon.com/business/resources/reports/dbir/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Verizon&#39;s 2025 Data Breach Investigations Report&lt;/a&gt; found that credentials stolen by infostealers played a role in 54% of ransomware incidents — meaning that for more than half of ransomware attacks against organizations, the initial access came from credentials harvested by malware on an employee&#39;s device, often a personal computer used for work.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;why-session-cookies-are-more-dangerous-than-passwords&quot; tabindex=&quot;-1&quot;&gt;Why Session Cookies Are More Dangerous Than Passwords&lt;/h2&gt;
&lt;p&gt;Most people understand the risk of a stolen password. Fewer understand the risk of a stolen session cookie — and infostealers harvest both.&lt;/p&gt;
&lt;p&gt;When you log into a website, the server issues a session cookie to your browser. This cookie is what keeps you logged in — it proves to the server that you already authenticated. An infostealer that steals this cookie can use it to access your account from a different device without needing your password or your two-factor authentication code, because the session is already authenticated.&lt;/p&gt;
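&lt;p&gt;The mechanics can be sketched in a few lines. This is an illustrative model, not any real server&#39;s code — the &lt;code&gt;login&lt;/code&gt; and &lt;code&gt;handle_request&lt;/code&gt; functions and the in-memory stores are hypothetical — but it shows why a stolen token works from any device:&lt;/p&gt;

```python
# Minimal sketch of session-token authentication (illustrative only;
# real servers use signed or server-stored cookies with expiry).
import secrets

USERS = {"alice": "correct-password"}   # hypothetical credential store
SESSIONS = {}                           # token -> username, server side

def login(username, password):
    """Issue a session token after a successful password check."""
    if USERS.get(username) != password:
        return None
    token = secrets.token_hex(16)       # the value the cookie carries
    SESSIONS[token] = username
    return token

def handle_request(token):
    """Authenticate a request by token alone -- no password involved."""
    return SESSIONS.get(token)          # None means not authenticated

# Alice logs in once on her own device.
cookie = login("alice", "correct-password")

# An infostealer that copies the cookie can replay it from anywhere:
# the server only sees a valid token, not which device sent it.
assert handle_request(cookie) == "alice"

# Revoking all sessions (what "sign out of all devices" does) is what
# invalidates the stolen token -- a password change alone would not.
SESSIONS.clear()
assert handle_request(cookie) is None
```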
&lt;p&gt;This is why enabling two-factor authentication, while still important, is not a complete defense against infostealer infections. If the malware runs while you are already logged in, it can take the session token directly.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;who-is-most-at-risk&quot; tabindex=&quot;-1&quot;&gt;Who Is Most at Risk&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.kelacyber.com/blog/understanding-the-infostealer-epidemic/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;According to KELA&#39;s research&lt;/a&gt;, personal unshared computers are the most frequently infected category, representing 35.7% of all infostealer cases. The reason is consistent: personal devices typically lack the security controls that corporate IT enforces — endpoint detection, forced updates, multi-factor authentication, and monitoring.&lt;/p&gt;
&lt;p&gt;The risk extends beyond individuals. In today&#39;s hybrid work environment, personal computers routinely contain corporate credentials. Around 90% of organizations that were breached in 2024 had their credentials available for sale on dark web marketplaces for just $10 to $15 per account.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-to-do&quot; tabindex=&quot;-1&quot;&gt;What to Do&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Be specific about where you download software.&lt;/strong&gt; Download only from the official website of the developer, accessed directly — not through search results, not through links in emails or messages, not through third-party download aggregators. This eliminates the majority of infostealer delivery methods.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Do not follow paste-and-run instructions from websites.&lt;/strong&gt; No legitimate software installer requires you to open a terminal and paste a command. Any site or popup asking you to do this should be closed immediately.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Use a dedicated password manager, not your browser.&lt;/strong&gt; This distinction matters. Infostealers are specifically designed to dump passwords saved in browsers — Chrome, Edge, and Safari store credentials in locations that malware can access directly. A dedicated password manager like Bitwarden or 1Password encrypts its database separately, making it significantly harder for smash-and-grab malware to extract. Either way, using unique passwords for every service means a credential stolen from one account cannot be used elsewhere.&lt;/p&gt;
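&lt;p&gt;What &amp;quot;unique password per service&amp;quot; means in practice can be shown in a few lines of Python — a minimal sketch of the generation step, not any particular manager&#39;s implementation:&lt;/p&gt;

```python
# Sketch of what a password manager does at generation time: one
# high-entropy, unique password per service, using Python's
# cryptographically secure `secrets` module (never `random`).
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^*-_"

def generate_password(length=20):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One unique credential per service: a leak at any single site
# cannot be replayed against the others.
vault = {site: generate_password() for site in ("email", "bank", "shop")}
assert len(set(vault.values())) == 3          # all distinct
assert all(len(p) == 20 for p in vault.values())
```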
&lt;p&gt;&lt;strong&gt;Enable two-factor authentication on critical accounts.&lt;/strong&gt; Two-factor authentication does not protect against session cookie theft, but it protects against password-only attacks and provides an alert when someone attempts access with a correct password.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;If you suspect infection, revoke all active sessions immediately.&lt;/strong&gt; Changing your password alone is not enough — stolen session cookies remain valid even after a password change. Go to the security settings of your important accounts (Google, Apple, email, banking) and use the option to &amp;quot;sign out of all devices&amp;quot; or &amp;quot;revoke all active sessions.&amp;quot; This invalidates any stolen cookies and forces every device to re-authenticate from scratch.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Keep software updated.&lt;/strong&gt; Infostealers increasingly exploit outdated browser extensions, media players, and system components. Keeping software current removes known vulnerabilities from the attack surface.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Check &lt;a href=&quot;https://haveibeenpwned.com&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Have I Been Pwned&lt;/a&gt;.&lt;/strong&gt; The service maintains a database of known breach datasets and will alert you if your email address appears in new compilations.&lt;/p&gt;
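&lt;p&gt;The same site&#39;s Pwned Passwords service is designed so that a password you check never leaves your machine. Below is a rough sketch of the k-anonymity scheme it documents: only the first five characters of the SHA-1 hash are sent, and the comparison happens locally (the network request itself is only indicated in a comment):&lt;/p&gt;

```python
# k-anonymity lookup as used by the Pwned Passwords API: hash locally,
# send only a 5-character hash prefix, compare suffixes client-side.
import hashlib

def hash_parts(password):
    """Split the uppercase SHA-1 hex digest into prefix and suffix."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hash_parts("password")
assert prefix == "5BAA6"   # well-known SHA-1 prefix of "password"

# The lookup itself (not performed here) would be, roughly:
#   GET https://api.pwnedpasswords.com/range/5BAA6
# The response lists suffixes and breach counts for that prefix;
# you then check locally whether `suffix` appears among them.
```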
&lt;hr&gt;
&lt;h2 id=&quot;the-bottom-line&quot; tabindex=&quot;-1&quot;&gt;The Bottom Line&lt;/h2&gt;
&lt;p&gt;The 16 billion password headline was not wrong — a dataset of that scale exists and circulates. But it was not a new breach. It was a symptom of a larger, ongoing problem: infostealers operating at industrial scale, infecting personal devices through convincing social engineering, and feeding a credential economy where your login details can be purchased for less than a coffee.&lt;/p&gt;
&lt;p&gt;In 2025, infostealers became the fastest-growing malware category, overtaking ransomware in deployment and spread. The attacks do not require sophisticated technology. They require a user to do one thing that looks routine — click a link, download a file, paste a command.&lt;/p&gt;
&lt;p&gt;Understanding how the infection happens is more useful than any specific number. The number changes. The method does not.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>The EU Is Trying to Break Encryption — And Most People Have No Idea</title>
    <link href="https://redposts.com/posts/eu-chat-control-encryption/" />
    <updated>2026-03-18T00:00:00Z</updated>
    <id>https://redposts.com/posts/eu-chat-control-encryption/</id>
    <content type="html">&lt;p&gt;Since 2022, the European Union has been debating a law that would require online platforms to scan private messages for illegal content. The proposal is formally called the Child Sexual Abuse Regulation. Critics call it &lt;a href=&quot;https://www.patrick-breyer.de/en/posts/chat-control/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Chat Control&lt;/a&gt;. After years of political deadlock, the legislation reached a critical turning point this month: on March 11, 2026, the European Parliament &lt;a href=&quot;https://euperspectives.eu/2026/03/meps-extend-chat-control-limit-scanning/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;voted 458 to 103&lt;/a&gt; to end the mass surveillance of private messages, adopting a position that any scanning must target only specific users identified by a judge — not entire populations indiscriminately. &lt;a href=&quot;https://www.eff.org/deeplinks/2025/12/after-years-controversy-eus-chat-control-nears-its-final-hurdle-what-know&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Trilogue negotiations&lt;/a&gt; between the EU Council, Parliament, and Commission are now underway under significant time pressure, with the current interim scanning regime set to expire on April 6, 2026.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Most people outside of digital rights circles have never heard of it.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That is a problem, because what is being decided in Brussels over the next few months has significant implications for the privacy of digital communications — not just in Europe, but potentially as a precedent for governments worldwide.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-the-law-proposes&quot; tabindex=&quot;-1&quot;&gt;What the Law Proposes&lt;/h2&gt;
&lt;p&gt;The stated purpose of the Child Sexual Abuse Regulation is to prevent and combat child sexual abuse material online. Nobody disputes that goal. The dispute is about the method.&lt;/p&gt;
&lt;p&gt;The core mechanism proposed is automated scanning of digital communications — messages, images, and video — to detect illegal content before or after it is transmitted. The controversy centers on how this scanning is technically supposed to work, particularly for services that use end-to-end encryption.&lt;/p&gt;
&lt;p&gt;End-to-end encryption means that a message is encrypted on the sender&#39;s device and can only be decrypted by the recipient. The service provider — WhatsApp, Signal, your email provider — cannot read the content. This is what distinguishes genuinely private communications from services that can be read by the platform.&lt;/p&gt;
&lt;p&gt;To scan end-to-end encrypted messages, the scanning has to happen on the device itself, before the message is encrypted. This technique is called client-side scanning. Security experts and the European Parliament&#39;s own researchers have been clear about the implication: once you install scanning software on a device that runs before encryption is applied, you have effectively broken the encryption guarantee. The message is no longer private in any meaningful sense. It can be read by whoever controls the scanning system.&lt;/p&gt;
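&lt;p&gt;The point is easiest to see in code. The toy sketch below (the XOR &amp;quot;cipher&amp;quot; is a stand-in for a real encryption algorithm, and the scanner is hypothetical) shows why the order of operations is the whole problem:&lt;/p&gt;

```python
# Illustrative sketch of why client-side scanning defeats end-to-end
# encryption: the scan hook runs on the plaintext before any cipher
# is applied. (XOR here is a toy stand-in for a real cipher.)

REPORTED = []                      # what the scanner forwards elsewhere

def scan(plaintext):
    """Stand-in classifier: whoever controls this code sees everything."""
    REPORTED.append(plaintext)     # full plaintext, pre-encryption
    return True                    # allow the send

def encrypt(plaintext, key):
    return bytes(b ^ key for b in plaintext.encode())

def send_message(plaintext, key=0x5A):
    scan(plaintext)                # client-side scanning happens FIRST
    return encrypt(plaintext, key) # only then is the message encrypted

ciphertext = send_message("a private message")
# The wire carries ciphertext, but the scanner already saw the content:
assert REPORTED == ["a private message"]
assert ciphertext != b"a private message"
```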
&lt;hr&gt;
&lt;h2 id=&quot;where-things-stand-now&quot; tabindex=&quot;-1&quot;&gt;Where Things Stand Now&lt;/h2&gt;
&lt;p&gt;The proposal has gone through several iterations since 2022. The most controversial versions would have made message scanning mandatory for all providers, including those using end-to-end encryption. After sustained opposition from privacy advocates, security researchers, and several EU member states — Germany was a consistent holdout — the Danish presidency of the EU Council revised the proposal in late 2025 to remove the explicit mandatory scanning requirement.&lt;/p&gt;
&lt;p&gt;Here is where things stand as of March 2026:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;November 26, 2025&lt;/strong&gt; — &lt;a href=&quot;https://www.techradar.com/vpn/vpn-privacy-security/chat-control-eu-lawmakers-finally-agree-on-the-voluntary-scanning-of-your-private-chats&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;EU Council endorses revised position&lt;/a&gt;, dropping mandatory scanning but preserving voluntary scanning&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;December 9, 2025&lt;/strong&gt; — First trilogue negotiation between Council, Parliament, and Commission&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;February 26, 2026&lt;/strong&gt; — Second trilogue session&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;March 11, 2026&lt;/strong&gt; — European Parliament votes 458-103 to end mass scanning, requiring judicial authorization for any targeted scanning&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;April 6, 2026&lt;/strong&gt; — Interim regulation expires — a hard deadline creating immediate pressure on negotiators&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;May 4, 2026&lt;/strong&gt; — Third trilogue scheduled&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;June 29, 2026&lt;/strong&gt; — Fourth and expected final trilogue&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;July 2026&lt;/strong&gt; — Formal adoption anticipated&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The removal of mandatory scanning is widely described as a victory for privacy advocates. But the current version of the proposal is not without controversy.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;why-the-revised-version-still-raises-concerns&quot; tabindex=&quot;-1&quot;&gt;Why the Revised Version Still Raises Concerns&lt;/h2&gt;
&lt;p&gt;The Council&#39;s revised text dropped explicit mandatory detection orders but preserved voluntary scanning — meaning platforms can choose to scan messages that are not end-to-end encrypted. It also introduced mandatory age verification requirements and what critics describe as vaguely worded &amp;quot;risk mitigation&amp;quot; obligations for encrypted services. The word &amp;quot;voluntary&amp;quot; is doing significant work here: platforms that fail to demonstrate adequate risk mitigation face regulatory consequences, which creates strong pressure to scan even without a legal mandate. In practice, the line between voluntary and compulsory becomes difficult to distinguish.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.patrick-breyer.de/en/posts/chat-control/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Patrick Breyer&lt;/a&gt;, a German digital rights lawyer and former Member of the European Parliament who has tracked the legislation closely, has warned that the structure of the revised regulation could lead to mass surveillance without formally mandating it. His concern focuses on Article 5 of the Council&#39;s mandate, which requires providers to &amp;quot;contribute effectively&amp;quot; to detecting illegal content. For encrypted services, he argues, this wording creates pressure to weaken encryption in practice, even without an explicit legal requirement.&lt;/p&gt;
&lt;p&gt;The European Data Protection Supervisor and the European Data Protection Board have previously stated in a joint opinion that the proposal &amp;quot;could become the basis for de facto generalized and indiscriminate scanning of the content of virtually all types of electronic communications.&amp;quot;&lt;/p&gt;
&lt;p&gt;The European Court of Human Rights ruled in February 2024, in an unrelated case, that requiring degraded end-to-end encryption &amp;quot;cannot be regarded as necessary in a democratic society.&amp;quot;&lt;/p&gt;
&lt;p&gt;The European Commission&#39;s own implementation report, published in November 2025, acknowledged that there is no proven link between scanning private messages and actual convictions or children rescued. The report also noted that perpetrators can easily migrate to other platforms where no scanning takes place.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-technical-problem-that-cannot-be-argued-away&quot; tabindex=&quot;-1&quot;&gt;The Technical Problem That Cannot Be Argued Away&lt;/h2&gt;
&lt;p&gt;Beyond the legal debate, there is a technical reality that the regulation cannot resolve by political compromise.&lt;/p&gt;
&lt;p&gt;The European Parliament commissioned an independent impact assessment which concluded that there are currently no technological solutions capable of detecting child sexual abuse material without producing an unacceptably high rate of false positives. At scale — across billions of messages — even a very low false positive rate translates into millions of ordinary communications being flagged and reviewed.&lt;/p&gt;
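&lt;p&gt;The arithmetic behind that conclusion is easy to reproduce. The figures below are illustrative assumptions, not numbers from the Parliament&#39;s assessment:&lt;/p&gt;

```python
# Back-of-the-envelope false-positive arithmetic at messaging scale.
# Both inputs are assumptions chosen for illustration.
daily_messages = 10_000_000_000      # assume ~10 billion messages per day
false_flags_per_day = daily_messages // 1000   # an optimistic 0.1% error rate

# Even a 1-in-1,000 error rate means ten million innocent messages
# flagged for review every single day.
assert false_flags_per_day == 10_000_000
```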
&lt;p&gt;The &lt;a href=&quot;https://www.mpg.de/25788438/chat-control-eu-client-side-scanning&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Max Planck Institute for Security and Privacy&lt;/a&gt;, in an analysis of client-side scanning, noted that detection software &amp;quot;would be embedded in the messaging app or the operating system to scan chat content and automatically forward any material flagged as prohibited to law enforcement agencies.&amp;quot; Once content is accessible to a party other than the sender and recipient, the encryption protection is gone — regardless of what the law says about voluntary versus mandatory.&lt;/p&gt;
&lt;p&gt;Multiple intelligence agencies across EU member states, along with cybersecurity researchers, have warned against any regulation that weakens encryption. Their concern is not abstract: encryption protects financial transactions, medical records, legal communications, journalistic sources, and political dissidents, in addition to private personal communications.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;why-this-matters-beyond-europe&quot; tabindex=&quot;-1&quot;&gt;Why This Matters Beyond Europe&lt;/h2&gt;
&lt;p&gt;EU legislation tends to set standards that extend beyond EU borders. When the General Data Protection Regulation came into force in 2018, it reshaped data privacy practices globally because companies serving European users had to comply regardless of where they were based. A Chat Control regulation with teeth would create similar pressure.&lt;/p&gt;
&lt;p&gt;If the EU establishes a legal framework that normalizes scanning private communications — even framed as voluntary, even scoped to illegal content — it provides a template for other governments to follow. Authoritarian governments do not need new ideas about surveillance. But a democratic precedent makes the argument harder to resist.&lt;/p&gt;
&lt;p&gt;The European Parliament&#39;s March 11 vote is a significant development — it means Parliament enters the remaining trilogue sessions with a strong, clearly stated position against mass surveillance. But the final outcome depends on negotiations with EU governments, many of which have resisted restrictions on broader scanning. The Council&#39;s appetite for targeted-only scanning remains limited, and the Commission&#39;s position has not shifted. Whatever emerges from the June 29 final session could look very different from what Parliament approved.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-bottom-line&quot; tabindex=&quot;-1&quot;&gt;The Bottom Line&lt;/h2&gt;
&lt;p&gt;Chat Control is not a fringe proposal. It is active legislation, currently in final negotiations, with a scheduled conclusion date. The most aggressive version — mandatory scanning of all encrypted messages — has been scaled back. But the revised version still contains provisions that privacy experts, the EU&#39;s own data protection authorities, and independent security researchers have flagged as fundamentally incompatible with private communications.&lt;/p&gt;
&lt;p&gt;The argument on the other side is genuine: child sexual abuse material causes real harm, and detection and removal matters. The disagreement is not about the goal. It is about whether mass scanning of private communications is an effective, proportionate, or legally permissible way to achieve it — and whether, once the infrastructure exists, it stays limited to that purpose.&lt;/p&gt;
&lt;p&gt;Those are questions worth understanding before the law is passed.&lt;/p&gt;
&lt;p&gt;By the time most people hear about Chat Control, the decision will already have been made.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>23andMe Went Bankrupt. Here&#39;s What Happened to Your DNA.</title>
    <link href="https://redposts.com/posts/23andme-dna-bankruptcy/" />
    <updated>2026-03-21T00:00:00Z</updated>
    <id>https://redposts.com/posts/23andme-dna-bankruptcy/</id>
    <content type="html">&lt;p&gt;In March 2025, 23andMe &lt;a href=&quot;https://restructuring.ra.kroll.com/23andme/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;filed for bankruptcy&lt;/a&gt;. The sale of its genetic database closed in July. The legislation it triggered is still pending in Congress. The class action settlement deadline passed just weeks ago in February 2026.&lt;/p&gt;
&lt;p&gt;The story is not over — but most people still don&#39;t know how it started, what actually happened to the DNA data of 15 million customers, or why the ending is more complicated than it looks.&lt;/p&gt;
&lt;p&gt;23andMe built a business out of convincing people to mail in their saliva. When it failed, that saliva — and everything extracted from it — became an asset in a bankruptcy proceeding. Here is the full story.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;how-23andme-got-here&quot; tabindex=&quot;-1&quot;&gt;How 23andMe Got Here&lt;/h2&gt;
&lt;p&gt;23andMe launched in 2006 with a straightforward pitch: send us a saliva sample and we&#39;ll tell you about your ancestry and genetic health risks. The model worked — by the time the company went public in 2021, it was valued at $6 billion.&lt;/p&gt;
&lt;p&gt;The decline was slow, then fast. The company struggled to convert one-time DNA test customers into repeat subscribers. In October 2023, a hacker accessed accounts through credential stuffing — using passwords reused from other breaches — and scraped the personal information of approximately 6.9 million customers, including names, birth years, ancestry data, and DNA Relatives matches. The breach affected nearly half the company&#39;s user base. In November 2024, the company laid off roughly 40% of its staff.&lt;/p&gt;
&lt;p&gt;By March 2025, the company had assets worth $277 million and debts of $214 million. It filed for Chapter 11 bankruptcy and immediately announced a sale process.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;why-the-bankruptcy-was-different&quot; tabindex=&quot;-1&quot;&gt;Why the Bankruptcy Was Different&lt;/h2&gt;
&lt;p&gt;Data bankruptcy sales happen regularly. Retailers, platforms, and service companies go bankrupt all the time, and their customer data — email addresses, purchase histories, behavioral profiles — gets sold along with everything else. Most people never notice.&lt;/p&gt;
&lt;p&gt;The 23andMe case was different for one reason: the data being sold was DNA.&lt;/p&gt;
&lt;p&gt;Unlike an email address, which can be changed, or a credit card number, which can be cancelled, genetic data is permanent. Your genome cannot be reset. It contains information not just about you, but about your parents, your siblings, your children, and every biological relative you have — including people who never consented to be in the database at all. One family member&#39;s DNA test creates a partial genetic record of the entire family tree.&lt;/p&gt;
&lt;p&gt;Yet under current bankruptcy law, genetic data is treated no differently than marketing lists, brand assets, or software licenses.&lt;/p&gt;
&lt;p&gt;California Attorney General Rob Bonta issued an &lt;a href=&quot;https://oag.ca.gov/news/press-releases/attorney-general-bonta-issues-consumer-alert-following-23andme-bankruptcy&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;urgent consumer alert&lt;/a&gt; advising customers to delete their data immediately. Traffic to 23andMe&#39;s website surged to the point that the login portal slowed and eventually went offline, overwhelmed by customers attempting to delete their accounts.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-the-privacy-policy-actually-said&quot; tabindex=&quot;-1&quot;&gt;What the Privacy Policy Actually Said&lt;/h2&gt;
&lt;p&gt;23andMe&#39;s privacy policy contained a clause that few customers had ever read:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;quot;If we are involved in a bankruptcy, merger, acquisition, reorganization, or sale of assets, your Personal Information may be accessed, sold or transferred as part of that transaction.&amp;quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This is standard language in most consumer privacy policies. It means that the protections customers believed they had — the implicit understanding that their DNA would be used for their own ancestry research and nothing else — were not actually guaranteed. The company had reserved the right to transfer that data in exactly the circumstances that were now unfolding.&lt;/p&gt;
&lt;p&gt;The policy also required any new owner to honor the existing privacy terms. That sounds reassuring until you notice that the same policy reserved the right to change those terms at any time.&lt;/p&gt;
&lt;p&gt;Federal law offered limited protection. HIPAA covers health records held by doctors, hospitals, and insurers, but 23andMe dealt with its customers as consumers, not patients, so those federal health privacy protections never applied.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-auction&quot; tabindex=&quot;-1&quot;&gt;The Auction&lt;/h2&gt;
&lt;p&gt;The bankruptcy court approved a sale process and set a June 2025 auction date. Two serious bidders emerged: Regeneron Pharmaceuticals, one of the world&#39;s largest biotech companies, and TTAM Research Institute, a nonprofit created specifically for this purpose by Anne Wojcicki — 23andMe&#39;s co-founder and former CEO.&lt;/p&gt;
&lt;p&gt;Regeneron initially won the auction with a bid of $256 million. Then the court reopened bidding after Wojcicki argued her group had been unfairly excluded. TTAM submitted a revised bid of $305 million, trumping Regeneron&#39;s offer. Regeneron declined to go higher.&lt;/p&gt;
&lt;p&gt;The court approved the sale of 23andMe&#39;s genetic data and personal information assets to TTAM. The sale closed on July 14, 2025.&lt;/p&gt;
&lt;p&gt;The twist: the company that had failed, the founder who had presided over that failure, and the data that had been at the center of the privacy crisis, all ended up in essentially the same hands — repackaged as a nonprofit.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-nonprofit-problem&quot; tabindex=&quot;-1&quot;&gt;The Nonprofit Problem&lt;/h2&gt;
&lt;p&gt;TTAM Research Institute is a California nonprofit public benefit corporation. Its name shares initials with 23andMe. Its leadership is composed of former 23andMe executives. Its stated mission is scientific and biomedical research using the genetic database.&lt;/p&gt;
&lt;p&gt;The bankruptcy judge described the arrangement plainly: &amp;quot;the same business, the same employees, familiar leaders, and the same privacy policies.&amp;quot;&lt;/p&gt;
&lt;p&gt;Critics, including Public Citizen and attorneys general from dozens of states, argued this structure allowed a failed for-profit company to shed its debts, rebrand as a nonprofit, and reacquire its most valuable asset — the genetic data of 15 million people — without meaningful regulatory accountability.&lt;/p&gt;
&lt;p&gt;TTAM committed to several protections as conditions of the sale: a Consumer Privacy Advisory Board within 90 days of closing, annual privacy reports available to state attorneys general, two years of free identity theft monitoring for customers, and a restriction on selling or transferring genetic data in any subsequent bankruptcy unless the recipient is a qualified domestic entity that adopts TTAM&#39;s privacy policies.&lt;/p&gt;
&lt;p&gt;Whether those commitments hold over time depends on the nonprofit&#39;s leadership — which is, for now, the same people who led the for-profit version.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-the-law-still-doesn&#39;t-cover&quot; tabindex=&quot;-1&quot;&gt;What the Law Still Doesn&#39;t Cover&lt;/h2&gt;
&lt;p&gt;The 23andMe case exposed a specific gap in bankruptcy law. Section 363(b)(1)(B) of the Bankruptcy Code offers some protections for personally identifiable information, but the law fails to expressly include genetic data. This leaves consumers vulnerable to the permanent transfer of their biological identity without meaningful consent.&lt;/p&gt;
&lt;p&gt;In response, a bipartisan group of senators introduced the Don&#39;t Sell My DNA Act, which would amend the Bankruptcy Code to require written notice and affirmative opt-in consent from consumers before any transfer of genetic data in bankruptcy proceedings. The bill has a House companion. It remains pending as of early 2026.&lt;/p&gt;
&lt;p&gt;Several states — including New York, Oregon, and Virginia — have their own genetic privacy laws requiring specific consent before genetic data can be disclosed to third parties. The legal question of whether a bankruptcy sale constitutes a disclosure to a &amp;quot;third party&amp;quot; under those statutes was never fully resolved.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-you-can-still-do&quot; tabindex=&quot;-1&quot;&gt;What You Can Still Do&lt;/h2&gt;
&lt;p&gt;If you are a 23andMe customer and have not already deleted your data, you can still do so. Log into your account, go to Settings, and navigate to the data deletion option. In California, the Genetic Information Privacy Act (GIPA) gives residents the right to force deletion of both their account data and their physical saliva sample.&lt;/p&gt;
&lt;p&gt;Deleting your account removes your profile from the database going forward. It does not retroactively undo any research already conducted using your de-identified data, nor does it affect the breach data already in circulation from the 2023 incident.&lt;/p&gt;
&lt;p&gt;If you were a 23andMe customer between May 2023 and October 2023 and received a breach notification, you may have been eligible for a class action settlement. The claims deadline passed in February 2026.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-broader-lesson&quot; tabindex=&quot;-1&quot;&gt;The Broader Lesson&lt;/h2&gt;
&lt;p&gt;The 23andMe bankruptcy is not primarily a story about one company&#39;s failure. It is a story about the gap between what people believe they are consenting to and what they are actually agreeing to.&lt;/p&gt;
&lt;p&gt;When 15 million people mailed in a saliva sample, they were thinking about ancestry percentages and health risk factors. They were not thinking about bankruptcy law, or the legal definition of personally identifiable information, or what happens to the data that partially encodes their children&#39;s genomes when a company&#39;s share price falls to $1.27.&lt;/p&gt;
&lt;p&gt;The data they created is permanent. The company that held it is gone in its original form. The legal framework that was supposed to protect it was not built for this.&lt;/p&gt;
&lt;p&gt;That gap still exists. The Don&#39;t Sell My DNA Act would close part of it. Until it passes — if it passes — the terms of service you click through when you sign up for any consumer genetics service contain the same language 23andMe&#39;s did.&lt;/p&gt;
&lt;p&gt;It may be worth reading them.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Phishing Emails Used to Be Easy to Spot. AI Changed That.</title>
    <link href="https://redposts.com/posts/ai-phishing-emails/" />
    <updated>2026-03-27T00:00:00Z</updated>
    <id>https://redposts.com/posts/ai-phishing-emails/</id>
    <content type="html">&lt;p&gt;There used to be a reliable way to spot a phishing email. Bad grammar. Misspelled words. A Nigerian prince. A logo that looked slightly off. Security trainers built entire curricula around these tells, and for years, they worked.&lt;/p&gt;
&lt;p&gt;That era is over.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://zensec.co.uk/blog/2025-phishing-statistics-the-alarming-rise-in-attacks/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;82.6% of phishing emails&lt;/a&gt; detected between September 2024 and February 2025 used AI — a 53.5% year-on-year increase. The emails no longer contain spelling mistakes. They reference real details. They match the writing style of the person or company they&#39;re impersonating. And &lt;a href=&quot;https://www.vectra.ai/topics/ai-scams&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;AI-generated phishing emails achieve click-through rates more than four times higher&lt;/a&gt; than their human-crafted counterparts.&lt;/p&gt;
&lt;p&gt;The problem is not that people are careless. The problem is that the tools for detecting fake emails were built for a threat that no longer exists.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-ai-actually-changed&quot; tabindex=&quot;-1&quot;&gt;What AI Actually Changed&lt;/h2&gt;
&lt;p&gt;Traditional phishing worked on volume. Send ten million poorly written emails, hope that a fraction of a percent of recipients click. The emails were cheap to produce, obviously fake to anyone paying attention, and caught by most spam filters.&lt;/p&gt;
&lt;p&gt;AI flipped this model.&lt;/p&gt;
&lt;p&gt;While a human attacker might spend 30 minutes crafting a single spear-phishing email, AI tools generate hundreds of contextually unique variations in the same timeframe. Each email can be personalized to the recipient — referencing their name, their employer, their recent activity, or their role — without any additional effort from the attacker.&lt;/p&gt;
&lt;p&gt;The results are measurable. In a 2024 benchmark study by Brightside AI, AI-crafted phishing emails achieved 54% click rates, compared to just 12% for human-written ones. That is not a marginal improvement. It is a fundamental shift in how effective these attacks are.&lt;/p&gt;
&lt;p&gt;The grammar and spelling tells are gone. Modern language models can replicate a company&#39;s style of communication, making impersonation attacks significantly harder to detect by appearance alone.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-new-attacks-you-haven&#39;t-heard-of&quot; tabindex=&quot;-1&quot;&gt;The New Attacks You Haven&#39;t Heard Of&lt;/h2&gt;
&lt;p&gt;Beyond better email writing, AI has enabled attack types that didn&#39;t meaningfully exist a few years ago.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Voice cloning&lt;/strong&gt; — Attackers use AI to clone the voice of someone the target knows — a manager, a colleague, a family member — and call them with instructions. &lt;a href=&quot;https://www.vectra.ai/topics/ai-scams&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Voice cloning has crossed the &amp;quot;indistinguishable threshold,&amp;quot;&lt;/a&gt; meaning human listeners can no longer reliably distinguish cloned voices from authentic ones. An employee receiving a call that sounds exactly like their manager asking them to urgently reset a password or approve a transfer has no technical way to verify it&#39;s fake.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deepfake video calls&lt;/strong&gt; — The same principle applied to video. &lt;a href=&quot;https://www.vectra.ai/topics/ai-scams&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;A single deepfake video call cost engineering firm Arup $25.6 million.&lt;/a&gt; Employees on a video call with what appeared to be real colleagues approved a fraudulent transaction. The colleagues were AI-generated in real time.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Hyper-personalized spear phishing&lt;/strong&gt; — AI enables targeted attacks that reference specific organizational details. One documented campaign targeted 800 accounting firms with AI-generated emails referencing specific state registration details, achieving a 27% click rate — far above the industry average.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;QR code phishing&lt;/strong&gt; — Nearly one in four phishing campaigns used QR codes or malicious links disguised as MFA prompts. A QR code in an email bypasses most link-scanning tools because the malicious URL is embedded in an image, not text.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workflow impersonation&lt;/strong&gt; — &lt;a href=&quot;https://bolster.ai/blog/2026-phishing-stats&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Researchers identified 29,183 unique phishing domains&lt;/a&gt; using e-signature and document approval-themed lures. The attack looks like a routine document requiring a signature — the kind of email that arrives dozens of times a day in most workplaces.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;why-the-old-advice-doesn&#39;t-work-anymore&quot; tabindex=&quot;-1&quot;&gt;Why the Old Advice Doesn&#39;t Work Anymore&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;&amp;quot;Look for typos&amp;quot;&lt;/strong&gt; — obsolete. AI writes better than most humans.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&amp;quot;Check if the sender looks suspicious&amp;quot;&lt;/strong&gt; — insufficient. Display names are trivially spoofed and look identical to legitimate senders in most email clients.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&amp;quot;Hover over the link to check the URL&amp;quot;&lt;/strong&gt; — increasingly unreliable. URL redirection was used in 48% of phishing links, up from 39% a year earlier. The URL you see when hovering may be a legitimate redirect service masking the final malicious destination.&lt;/p&gt;
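&lt;p&gt;The pattern is easy to see in miniature. A short Python sketch (the domains and parameter names below are invented for illustration; real redirect services vary) pulls the actual destination out of a redirect link&#39;s query string:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from urllib.parse import urlparse, parse_qs

def final_destination(link):
    # Return any URL buried in the link's query string.
    params = parse_qs(urlparse(link).query)
    for key in ('url', 'u', 'dest', 'redirect'):
        if key in params:
            return params[key][0]
    return link  # no embedded target; the link is what it appears to be

link = 'https://links.trusted-mailer.example/r?url=https://login.paypa1-secure.example'
print(final_destination(link))
# https://login.paypa1-secure.example
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Hovering shows the trusted-looking host; following the query string shows where the click actually lands.&lt;/p&gt;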
&lt;p&gt;&lt;strong&gt;&amp;quot;If it has the company logo, it&#39;s probably real&amp;quot;&lt;/strong&gt; — wrong. AI tools create hundreds of fraudulent websites using the logo, presentation style, and colors of real brands, with user experiences often indistinguishable from legitimate ones.&lt;/p&gt;
&lt;p&gt;The old mental checklist was calibrated for a specific type of attack. That attack has been replaced by something that looks nothing like it.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-actually-works-now&quot; tabindex=&quot;-1&quot;&gt;What Actually Works Now&lt;/h2&gt;
&lt;p&gt;The shift required is from asking &amp;quot;does this look fake?&amp;quot; to asking &amp;quot;should I be doing this at all?&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Verify requests through a separate channel.&lt;/strong&gt; If you receive an email asking you to do something — approve a payment, reset a password, share credentials, click a link — verify the request through a different communication channel before acting. Call the person on a known number. Send a separate message. Don&#39;t reply to the email itself or use contact information provided in it.&lt;/p&gt;
&lt;p&gt;This single habit defeats the majority of AI phishing attacks because they rely on you acting within the communication channel they control. A phone call to a known number breaks that chain entirely.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Be skeptical of urgency.&lt;/strong&gt; Attackers study timing and behavioral patterns to craft messages that trigger responses. Legitimate requests — from your bank, your employer, your colleagues — can almost always wait for verification. If a message demands immediate action and creates a sense of panic, that is a signal to slow down, not speed up.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Use phishing-resistant authentication.&lt;/strong&gt; Standard two-factor authentication using SMS codes or authenticator apps protects against password theft but not against real-time phishing attacks that capture your code as you enter it. &lt;a href=&quot;https://www.captaindns.com/en/blog/phishing-trends-2025-2026-statistics&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;FIDO2 hardware keys are the only effective protection against these attacks.&lt;/a&gt; Unlike SMS or TOTP codes, FIDO2 keys are domain-bound — they refuse to authenticate on a proxy site that spoofs the legitimate domain.&lt;/p&gt;
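&lt;p&gt;The domain binding is the whole trick, and it can be sketched in a few lines. Real FIDO2 keys use per-origin asymmetric keypairs; the HMAC below is a standard-library stand-in used only to keep the sketch self-contained, and the domains are invented:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import hashlib
import hmac

DEVICE_SECRET = b'held-inside-the-hardware-key'  # never leaves the device

def credential_for(origin):
    # Each credential is scoped to the relying party's origin, so a
    # proxy on a lookalike domain derives a different credential.
    return hmac.new(DEVICE_SECRET, origin.encode(), hashlib.sha256).hexdigest()

print(credential_for('https://bank.example') ==
      credential_for('https://bank-1ogin.example'))  # False
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A real-time phishing proxy can relay your password and your one-time code, but it cannot make the key answer for a domain it was never registered to.&lt;/p&gt;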
&lt;p&gt;&lt;strong&gt;Check the actual domain, not the display name.&lt;/strong&gt; In your email client, click on the sender&#39;s name to expand and see the actual email address. Look specifically at the domain — the part after the @ symbol. A display name can say anything. The domain is harder to fake convincingly, though lookalike characters make even this imperfect.&lt;/p&gt;
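&lt;p&gt;Extracting that domain is mechanical enough to automate. A small Python sketch using the standard library&#39;s email parsing (the addresses here are invented):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from email.utils import parseaddr

def sender_domain(from_header):
    # The display name can claim anything; only the part after the
    # @ in the actual address identifies the sending domain.
    name, address = parseaddr(from_header)
    return address.rsplit('@', 1)[-1].lower()

print(sender_domain('&quot;PayPal Support&quot; &lt;alerts@paypa1-security.example&gt;'))
# paypa1-security.example
&lt;/code&gt;&lt;/pre&gt;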
&lt;p&gt;&lt;strong&gt;Be especially careful with QR codes in emails.&lt;/strong&gt; A QR code in an email from an unknown sender or unexpected source should be treated with the same skepticism as a suspicious link. QR codes are harder to preview than URLs and increasingly used precisely because most people don&#39;t think to question them.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-scale-of-the-problem&quot; tabindex=&quot;-1&quot;&gt;The Scale of the Problem&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.captaindns.com/en/blog/phishing-trends-2025-2026-statistics&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;3.4 billion phishing emails are sent every day.&lt;/a&gt; 91% of cyberattacks start with an email.&lt;/p&gt;
&lt;p&gt;Phishing remains one of the most devastatingly expensive breach vectors. According to &lt;a href=&quot;https://www.ibm.com/reports/data-breach&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;IBM&#39;s Cost of a Data Breach Report 2025&lt;/a&gt;, the global average cost of a data breach reached $4.44 million. &lt;a href=&quot;https://www.verizon.com/business/resources/reports/dbir/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Verizon&#39;s 2025 Data Breach Investigations Report&lt;/a&gt; found that approximately 60% of breaches involved a human element — heavily driven by social engineering and phishing.&lt;/p&gt;
&lt;p&gt;These numbers describe a threat that is getting worse, not better, as AI tools become cheaper and more accessible. &lt;a href=&quot;https://www.vectra.ai/topics/ai-scams&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;AI-enabled fraud surged 1,210% in 2025,&lt;/a&gt; with projected losses reaching $40 billion by 2027.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-bottom-line&quot; tabindex=&quot;-1&quot;&gt;The Bottom Line&lt;/h2&gt;
&lt;p&gt;AI did not invent phishing. It industrialized it. The volume is higher, the targeting is more precise, the emails are more convincing, and the delivery channels have expanded beyond email to voice calls, video calls, and messaging apps.&lt;/p&gt;
&lt;p&gt;The old defense — learn to spot the tells — was always a patch on a systemic problem. It worked when the tells were obvious. They no longer are.&lt;/p&gt;
&lt;p&gt;What works now is behavioral: verify unexpected requests through a separate channel, be skeptical of urgency, use strong authentication on accounts that matter, and treat QR codes in emails with the same caution as suspicious links.&lt;/p&gt;
&lt;p&gt;The emails look real. The voices sound real. The faces on video calls may not be real. The question to ask is not whether a communication looks legitimate — it&#39;s whether you should act on it at all before independently verifying who sent it.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Beyond the Login: The Mechanics of Session Cookie Theft</title>
    <link href="https://redposts.com/posts/session-cookie-theft/" />
    <updated>2026-03-31T00:00:00Z</updated>
    <id>https://redposts.com/posts/session-cookie-theft/</id>
    <content type="html">&lt;p&gt;In February 2024, a finance employee at the engineering firm Arup received a video call. On the call were his CFO and several colleagues. They discussed an urgent, confidential transaction. He transferred $25.6 million. Every person on that call was AI-generated. But the deepfakes were not the attack — they were the final step. Before any of that happened, the attacker already had access to the accounts they needed. That access did not come from breaking a password or bypassing MFA. It came from something stolen silently, weeks earlier, from a compromised machine.&lt;/p&gt;
&lt;p&gt;This is not a story about deepfakes. It is a story about a gap in how authentication actually works — one that most organizations have not closed, and that most individuals do not know exists.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-gap-mfa-does-not-cover&quot; tabindex=&quot;-1&quot;&gt;The Gap MFA Does Not Cover&lt;/h2&gt;
&lt;p&gt;Multi-factor authentication has become the default defense against unauthorized account access. Enable MFA, and even if your password is stolen, an attacker cannot log in without the second factor. This is true. It is also incomplete.&lt;/p&gt;
&lt;p&gt;MFA secures the login event. It does not secure the session.&lt;/p&gt;
&lt;p&gt;When you successfully authenticate — password entered, second factor verified — the server issues a session token. Think of it as a temporary badge: it proves you already cleared security, so you do not have to do it again with every page load. Your browser stores this token and presents it automatically with every subsequent request. The server trusts it completely.&lt;/p&gt;
&lt;p&gt;Steal the token, and you are inside the account as if you authenticated yourself. No password prompt. No MFA prompt. The authentication already happened — to the server, you are the legitimate user. This is the gap. And it is exactly what infostealer malware is built to exploit.&lt;/p&gt;
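&lt;p&gt;A toy sketch of the server side shows the gap. Both factors are checked once, at login; every request after that is authorized by the token alone (the names and logic are illustrative, not any real framework&#39;s API):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import secrets

sessions = {}  # token to user: all the server remembers after login

def login(user, password_ok, mfa_ok):
    # Password and MFA are verified here, once, then never again.
    if not (password_ok and mfa_ok):
        return None
    token = secrets.token_hex(16)
    sessions[token] = user
    return token

def handle_request(token):
    # Nothing here identifies the machine presenting the token.
    return sessions.get(token)

token = login('alice', True, True)
print(handle_request(token))  # 'alice', from her laptop or anyone else's
&lt;/code&gt;&lt;/pre&gt;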
&lt;hr&gt;
&lt;h2 id=&quot;where-the-keys-are-kept&quot; tabindex=&quot;-1&quot;&gt;Where the Keys Are Kept&lt;/h2&gt;
&lt;p&gt;This applies specifically to Chromium-based browsers — Chrome, Edge, and Brave, collectively used by over 70% of desktop users. On Windows, session tokens are stored at:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;%LocalAppData%&#92;Google&#92;Chrome&#92;User Data&#92;Default&#92;Network&#92;Cookies
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is a standard SQLite database. The tokens inside are encrypted, which sounds reassuring — but the encryption is a three-layer system, and each layer depends on the one before it:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;The master key.&lt;/strong&gt; Chromium generates a randomized AES-256 encryption key and stores it in a file called &lt;code&gt;Local State&lt;/code&gt; in the same directory.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;DPAPI locks the master key.&lt;/strong&gt; That AES key is itself encrypted using Windows DPAPI — the Data Protection API — which ties decryption to the active Windows user account.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;The cookies are encrypted with the master key.&lt;/strong&gt; Each session token in the database is encrypted using the AES key from step one.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The design is solid — unless an attacker already has code running as you. Any process operating under your Windows login can call the same DPAPI function your browser uses. Windows will comply, because as far as the operating system is concerned, you asked. It has no way to distinguish your browser making that request from malware making the same one.&lt;/p&gt;
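&lt;p&gt;The store itself is nothing exotic. Any code running as you can open it with a few standard-library calls and inventory which sites have live tokens. The values stay encrypted at this stage, but the target list alone is the first thing a stealer builds. A sketch (the column name matches Chromium&#39;s current schema; run it against a copy of the file, since the browser holds a lock while it&#39;s open):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import sqlite3

def list_token_hosts(cookie_db_path):
    # Open the browser's cookie store read-only and list the hosts
    # that have cookies. No decryption happens here.
    db = sqlite3.connect('file:' + cookie_db_path + '?mode=ro', uri=True)
    try:
        rows = db.execute(
            'SELECT DISTINCT host_key FROM cookies ORDER BY host_key'
        ).fetchall()
    finally:
        db.close()
    return [host for (host,) in rows]
&lt;/code&gt;&lt;/pre&gt;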
&lt;p&gt;Once malware is on your machine, the rest of the sequence is mechanical:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Malware executes under your user account.&lt;/strong&gt; The most common delivery method is a file that looked legitimate — pirated software, a fake browser update, a &amp;quot;fix&amp;quot; you were instructed to paste into a terminal. Once it runs as you, it has everything it needs.&lt;/li&gt;
&lt;li&gt;It calls the DPAPI function to retrieve the master decryption key. The OS hands it over.&lt;/li&gt;
&lt;li&gt;It queries the cookie database for high-value session tokens — GitHub, AWS, banking portals, corporate SSO.&lt;/li&gt;
&lt;li&gt;It decrypts each token and packages everything into a compressed archive.&lt;/li&gt;
&lt;li&gt;It sends the archive to a remote server.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The entire process typically takes under 60 seconds. There is no visible sign anything happened.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-malware-families-doing-this-at-scale&quot; tabindex=&quot;-1&quot;&gt;The Malware Families Doing This at Scale&lt;/h2&gt;
&lt;p&gt;Infostealer malware is sold as a service — criminal groups rent access to the tools, keep the infrastructure running, and take a cut of what their customers steal. Three families dominate the market.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;RedLine&lt;/strong&gt; is the most widely deployed. It communicates over standard HTTP and HTTPS — making its traffic look identical to normal web browsing. It sweeps comprehensively: browser credentials, crypto wallet files, VPN configurations, and session tokens across all installed browsers.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;LummaC2&lt;/strong&gt; is built for evasion. It resolves its function calls dynamically at runtime, which means automated analysis tools that look for known patterns often miss it entirely. In May 2025, the DOJ and Microsoft seized over 2,300 Lumma-associated domains in a coordinated takedown. The infrastructure was operational again within weeks. A government-level disruption bought less than a month of downtime.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Vidar&lt;/strong&gt; retrieves its command server address dynamically from the bio sections of fake Telegram and Mastodon profiles — regular-looking social media accounts where an IP address is embedded in the profile text and rotated regularly. Standard IP blocklists cannot keep up with infrastructure that updates itself through public social media.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-happens-after-your-tokens-are-stolen&quot; tabindex=&quot;-1&quot;&gt;What Happens After Your Tokens Are Stolen&lt;/h2&gt;
&lt;p&gt;Once session tokens are exfiltrated, attackers import them into specialized tools called anti-detect browsers — software built specifically to impersonate another person&#39;s machine. These replicate the exact fingerprint of the victim&#39;s device: browser version, screen resolution, installed fonts, hardware identifiers. From the server&#39;s perspective, the incoming request is indistinguishable from the legitimate user.&lt;/p&gt;
&lt;p&gt;There is no password challenge. There is no MFA prompt. The session token is a bearer credential — whoever holds it, the server trusts. The attacker is inside the account.&lt;/p&gt;
&lt;p&gt;This is the mechanism behind the Arup attack. The deepfake video call was the social engineering layer, designed to authorize a specific transaction. The account access that made it possible was already established before the call began.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-to-do-about-it&quot; tabindex=&quot;-1&quot;&gt;What To Do About It&lt;/h2&gt;
&lt;p&gt;The most effective individual protection is the simplest: log out of sensitive accounts — banking, email, work portals — when you are done using them. A session token that has been invalidated is worthless to an attacker who steals it. Most people leave sessions open indefinitely because it is convenient. That convenience is what this attack exploits.&lt;/p&gt;
&lt;p&gt;The primary infection vector is software installation. Infostealer malware does not typically arrive through sophisticated browser exploits — it arrives because someone installed something. A cracked application. A fake update. A file from an unofficial source. The malware on the machine is the prerequisite for everything that follows. Download software only from the official source of the developer.&lt;/p&gt;
&lt;p&gt;If you suspect your machine has been compromised, changing your password is not enough. Go to the security settings of your important accounts and revoke all active sessions — typically labeled &amp;quot;sign out of all devices&amp;quot; or &amp;quot;manage active sessions.&amp;quot; This invalidates any tokens already stolen, forcing reauthentication on every device. Do this before changing the password, not after.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-long-term-fix&quot; tabindex=&quot;-1&quot;&gt;The Long-Term Fix&lt;/h2&gt;
&lt;p&gt;The individual mitigations are patches on a deeper architectural problem. Session tokens are bearer credentials — they grant access to whoever holds them, with no verification of the device presenting them. The machine that originally authenticated is not part of the equation once the token exists.&lt;/p&gt;
&lt;p&gt;The permanent solution is &lt;a href=&quot;https://www.w3.org/TR/dbsc/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Device Bound Session Credentials (DBSC)&lt;/a&gt; — a standard that binds session tokens cryptographically to the hardware of the originating device. A stolen token becomes useless on any other machine because the cryptographic proof of device identity cannot be replicated. DBSC is now available in Chrome 145 on Windows, with broader platform support still in progress.&lt;/p&gt;
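&lt;p&gt;The shape of the fix is simple to sketch. At login the device registers a key that never leaves the hardware, and the server can later challenge the session to prove it still holds that key. DBSC itself uses asymmetric keypairs held in a TPM; the HMAC below is a standard-library stand-in, and the function names are invented:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import hashlib
import hmac
import secrets

def sign_challenge(device_key, challenge):
    # The device proves possession of the key; the key itself never leaves.
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def server_accepts(registered_key, challenge, proof):
    # The server checks the proof against the key registered at login.
    expected = hmac.new(registered_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

device_key = secrets.token_bytes(32)  # bound to the legitimate machine
challenge = secrets.token_bytes(16)

print(server_accepts(device_key, challenge,
                     sign_challenge(device_key, challenge)))  # True
thief_key = secrets.token_bytes(32)   # the stolen token travels; the key does not
print(server_accepts(device_key, challenge,
                     sign_challenge(thief_key, challenge)))   # False
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A token exfiltrated from the cookie database is only half of the credential under this model; the other half stays in hardware.&lt;/p&gt;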
&lt;p&gt;Until that standard ships and achieves broad adoption, the gap remains. Log out when you are done. Only install software from sources you trust. Know that MFA, for all its value, does not cover the session that begins the moment login succeeds.&lt;/p&gt;
&lt;p&gt;The login is secure. Everything after it is not.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>The 60-Megabyte Mistake: How Anthropic Shipped Its Own Source Code to the World</title>
    <link href="https://redposts.com/posts/claude-code-npm-incident/" />
    <updated>2026-04-03T00:00:00Z</updated>
    <id>https://redposts.com/posts/claude-code-npm-incident/</id>
    <content type="html">&lt;p&gt;On March 31, 2026, at approximately 4:00 AM UTC, Anthropic pushed version 2.1.88 of Claude Code to the public npm registry — the package manager used by hundreds of millions of JavaScript developers worldwide. Bundled inside the package was a 59.8-megabyte file that was never supposed to be there: a complete, unobfuscated map of the entire Claude Code codebase. 512,000 lines of TypeScript. 1,906 source files. The full internal architecture of a product generating $2.5 billion in annualized revenue.&lt;/p&gt;
&lt;p&gt;By 4:23 AM, a security researcher named Chaofan Shou had found it and posted the discovery to X. By mid-morning, the codebase had been downloaded directly from Anthropic&#39;s own cloud storage, mirrored to GitHub, and forked tens of thousands of times. By afternoon, criminal groups were using the leak as bait to distribute malware.&lt;/p&gt;
&lt;p&gt;Nobody hacked Anthropic. Someone forgot to add one line to a configuration file.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-a-source-map-is-and-why-this-one-mattered&quot; tabindex=&quot;-1&quot;&gt;What a Source Map Is and Why This One Mattered&lt;/h2&gt;
&lt;p&gt;When software companies ship a product like Claude Code, they run their code through a build process that compresses and obfuscates it — transforming readable source files into a compact, unreadable bundle optimized for distribution. The result is efficient but undebuggable. If something goes wrong in production, the error trace points to line 1, column 284,000 of a single minified file. That tells you nothing useful.&lt;/p&gt;
&lt;p&gt;Source maps solve this problem. They are companion files — typically with a &lt;code&gt;.map&lt;/code&gt; extension — that act as a translation layer between the compressed production code and the original human-readable source. They exist for developers to debug crashes. They are an internal tool. They should never ship to end users.&lt;/p&gt;
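&lt;p&gt;Why a &lt;code&gt;.map&lt;/code&gt; file can leak source wholesale is visible in its format. A version 3 source map is a JSON file whose fields can embed the original files directly; the skeleton below is illustrative, with invented file names and truncated values:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  &quot;version&quot;: 3,
  &quot;file&quot;: &quot;cli.js&quot;,
  &quot;sources&quot;: [&quot;src/agent/loop.ts&quot;, &quot;src/tools/bash.ts&quot;],
  &quot;sourcesContent&quot;: [&quot;...full original TypeScript text...&quot;, &quot;...&quot;],
  &quot;names&quot;: [&quot;runAgentLoop&quot;, &quot;executeBash&quot;],
  &quot;mappings&quot;: &quot;;;AAAA,SAASA...&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Whether the originals ride along inline in &lt;code&gt;sourcesContent&lt;/code&gt; or, as in this incident, via a reference to an archive of them, publishing the map publishes the source.&lt;/p&gt;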
&lt;p&gt;Anthropic&#39;s build toolchain uses the Bun JavaScript runtime, which the company acquired in late 2025. Bun generates source maps by default. The standard way to prevent them from being included in a published package is to add &lt;code&gt;*.map&lt;/code&gt; to the project&#39;s &lt;code&gt;.npmignore&lt;/code&gt; file — a list of files and patterns to exclude from publication. That line was missing.&lt;/p&gt;
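&lt;p&gt;For a sense of the size of the fix relative to the size of the failure, this is roughly what the missing exclusion looks like (an illustrative fragment, not Anthropic&#39;s actual file):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# .npmignore: patterns excluded when publishing to the npm registry
*.map
&lt;/code&gt;&lt;/pre&gt;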
&lt;p&gt;The source map file contained a reference to a zip archive hosted on Anthropic&#39;s Cloudflare R2 storage bucket. The bucket was publicly accessible. Anyone with the URL could download the complete, unobfuscated TypeScript source of Claude Code. Within minutes of Shou&#39;s post, thousands of people did exactly that.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;this-was-the-second-time&quot; tabindex=&quot;-1&quot;&gt;This Was the Second Time&lt;/h2&gt;
&lt;p&gt;What makes this incident harder to excuse is that it was not the first. On February 24, 2025 — Claude Code&#39;s original launch day — developer Dave Shoemaker found an 18-million-character inline source map in the same npm package. Anthropic pulled it within two hours.&lt;/p&gt;
&lt;p&gt;Thirteen months passed. The same bug, through the same vector, happened again.&lt;/p&gt;
&lt;p&gt;Boris Cherny, Anthropic&#39;s head of Claude Code, acknowledged it publicly. He described it as a manual deployment step that should have been automated — a fix that was identified after the first incident and not implemented before the second one occurred. The company&#39;s statement to the press was consistent: human error, no customer data exposed, measures being put in place to prevent recurrence.&lt;/p&gt;
&lt;p&gt;In the week before the leak, Anthropic had also experienced a separate CMS misconfiguration that exposed approximately 3,000 internal files, including details of an unreleased model. Two significant operational security failures in five days.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-the-code-revealed&quot; tabindex=&quot;-1&quot;&gt;What the Code Revealed&lt;/h2&gt;
&lt;p&gt;The technical community spent the hours after the leak analyzing what Anthropic had inadvertently published. Several findings stood out.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;KAIROS&lt;/strong&gt; — referenced over 150 times in the codebase — is an always-on background agent mode that Anthropic has not publicly announced. Unlike the current Claude Code, which responds to prompts, KAIROS operates as a persistent daemon. It watches, logs, and proactively takes action. It performs a process called &lt;code&gt;autoDream&lt;/code&gt; when the user is idle: consolidating memory, merging observations, and converting vague notes into structured facts. The feature is completely absent from external builds, gated behind compile-time flags that evaluate to false when Anthropic ships the public version.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Undercover Mode&lt;/strong&gt; is a subsystem that activates when an Anthropic employee uses Claude Code on a public or open-source repository. When active, it instructs the model not to reveal internal codenames, not to mention unreleased model versions, not to include &lt;code&gt;Co-Authored-By&lt;/code&gt; attribution in commits, and not to reference internal tools or Slack channels. The system prompt injected during Undercover Mode reads, in part: &lt;em&gt;&amp;quot;You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository... Do not blow your cover.&amp;quot;&lt;/em&gt; For external users, the entire undercover function is dead-code-eliminated — it does not exist in the version developers download.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Unreleased models&lt;/strong&gt; are referenced throughout the source by codename: Capybara, Tengu, and others. These names appear in the list of things Undercover Mode is specifically instructed to conceal.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;44 feature flags&lt;/strong&gt; were catalogued by developers analyzing the leak — covering features that are fully built but not yet enabled in external builds. These represent Anthropic&#39;s near-term product roadmap in considerable technical detail.&lt;/p&gt;
&lt;p&gt;None of this exposed customer data, API keys, or the underlying AI models themselves. What it exposed was the complete client-side architecture of the tool, its internal product strategy, and several features the company was not yet ready to announce.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-threat-that-followed&quot; tabindex=&quot;-1&quot;&gt;The Threat That Followed&lt;/h2&gt;
&lt;p&gt;Within hours of the leak becoming public, criminal groups began moving. The pattern is now familiar: a high-profile security event generates enormous search volume, and attackers optimize fake content to intercept that traffic before legitimate sources can rank.&lt;/p&gt;
&lt;p&gt;Zscaler&#39;s ThreatLabz team identified malicious GitHub repositories claiming to offer the leaked Claude Code source, specifically optimized to appear in Google search results for queries like &amp;quot;leaked Claude Code.&amp;quot; The repositories looked credible — they referenced the &lt;code&gt;.map&lt;/code&gt; file, mentioned the npm registry, and advertised &amp;quot;unlocked enterprise features&amp;quot; and no usage limits.&lt;/p&gt;
&lt;p&gt;The download was a 7-Zip archive named &lt;code&gt;Claude Code - Leaked Source Code.7z&lt;/code&gt;. Inside was a Rust-based executable named &lt;code&gt;ClaudeCode_x64.exe&lt;/code&gt;. Running it deployed two payloads: Vidar v18.7, a credential-stealing infostealer that harvests browser credentials, saved passwords, credit card data, and session cookies; and GhostSocks, a proxy tool that silently routes criminal network traffic through the infected machine.&lt;/p&gt;
&lt;p&gt;This is the same Vidar that has appeared throughout 2025 and 2026 as a payload in supply chain attacks — the same family covered in RedPosts&#39; earlier analysis of &lt;a href=&quot;https://redposts.com/posts/infostealer-malware-password-theft/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;infostealer mechanics&lt;/a&gt;. The delivery mechanism changes. The payload does not.&lt;/p&gt;
&lt;p&gt;The real Claude Code source is available to browse in dozens of GitHub repositories. The danger is not reading it — it is downloading and executing archive files claiming to contain it. The moment an executable claiming to be leaked source code runs on your machine, the leak is no longer Anthropic&#39;s problem. It is yours.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-broader-supply-chain-problem&quot; tabindex=&quot;-1&quot;&gt;The Broader Supply Chain Problem&lt;/h2&gt;
&lt;p&gt;The Claude Code leak did not occur in isolation. On the same day — March 31, 2026, between 00:21 and 03:29 UTC, hours before the leak was discovered — attackers compromised axios, one of npm&#39;s most widely downloaded packages at 83 million weekly downloads, through a hijacked maintainer account. The malicious versions (1.14.1 and 0.30.4) contained an embedded Remote Access Trojan.&lt;/p&gt;
&lt;p&gt;Anyone who installed or updated Claude Code via npm during that specific window may have pulled in the compromised axios dependency alongside Anthropic&#39;s legitimately published package. Two completely separate attacks, same registry, same morning, affecting the same tool.&lt;/p&gt;
&lt;p&gt;If you installed or updated Claude Code via npm on March 31, 2026, check your project lockfiles — &lt;code&gt;package-lock.json&lt;/code&gt;, &lt;code&gt;yarn.lock&lt;/code&gt;, or &lt;code&gt;bun.lockb&lt;/code&gt; — for axios versions 1.14.1 or 0.30.4. If either appears, treat the environment as potentially compromised.&lt;/p&gt;
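&lt;p&gt;The check can be scripted. A minimal sketch for npm&#39;s &lt;code&gt;package-lock.json&lt;/code&gt; (lockfile version 2 or 3, which keys every installed package by its path under &lt;code&gt;packages&lt;/code&gt;); yarn and bun lockfiles use different formats and would need their own parsers:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import json

# Compromised axios releases from the March 31, 2026 window.
BAD_VERSIONS = {&quot;1.14.1&quot;, &quot;0.30.4&quot;}

def compromised_axios(lockfile_path):
    # npm v7+ lockfiles list every installed package under &quot;packages&quot;,
    # keyed by its node_modules path, including nested copies.
    with open(lockfile_path) as f:
        lock = json.load(f)
    hits = []
    for path, meta in lock.get(&quot;packages&quot;, {}).items():
        if path.endswith(&quot;node_modules/axios&quot;) and meta.get(&quot;version&quot;) in BAD_VERSIONS:
            hits.append((path, meta[&quot;version&quot;]))
    return hits

# Usage (path is an example):
# print(compromised_axios(&quot;package-lock.json&quot;))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;An empty result from one lockfile is not a clean bill of health; check every project that touched npm during the window.&lt;/p&gt;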
&lt;p&gt;For ongoing use, Anthropic recommends the native installer over npm: &lt;code&gt;curl -fsSL https://claude.ai/install.sh | sh&lt;/code&gt;. This bypasses the npm registry entirely for Claude Code installation.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-this-means&quot; tabindex=&quot;-1&quot;&gt;What This Means&lt;/h2&gt;
&lt;p&gt;The immediate damage from the leak itself is limited. Claude Code&#39;s source code is now permanently in public circulation — DMCA notices have been sent, some mirrors have been removed, and many more remain. The AI models are not exposed. Customer data is not exposed. The intellectual property loss is real but not catastrophic in isolation.&lt;/p&gt;
&lt;p&gt;The more significant story is what the incident reveals about the software supply chain that AI tools depend on. The npm registry distributes hundreds of thousands of packages, maintained by individual developers and large companies alike. Its security model is built on trust — that publishers are who they say they are, that packages contain what they claim to contain, and that build pipelines are configured correctly before publication.&lt;/p&gt;
&lt;p&gt;All three of those assumptions failed on the same day, affecting the same tool, through different mechanisms. The misconfigured build pipeline came down to a missing configuration line that had already been caught once before. The false publisher identity and the tampered package contents came from a compromised maintainer account for a dependency used by 83 million developers weekly. The failures are unrelated. The ecosystem that made both of them simultaneously impactful is not.&lt;/p&gt;
&lt;p&gt;The advice is the same it has been since the first npm supply chain attack: verify before you install, check your lockfiles after anything unexpected, and treat executables from unofficial sources as hostile regardless of how credible the surrounding repository looks.&lt;/p&gt;
&lt;p&gt;Anthropic has since pulled version 2.1.88 and released a clean build. The source code remains in the wild. The deployment process that allowed it to ship is being automated to prevent recurrence. Whether that automation arrives before the third incident is the open question.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>One Deleted Line of Code Rerouted the Internet</title>
    <link href="https://redposts.com/posts/cloudflare-bgp-route-leak-2026/" />
    <updated>2026-04-07T00:00:00Z</updated>
    <id>https://redposts.com/posts/cloudflare-bgp-route-leak-2026/</id>
    <content type="html">&lt;p&gt;On January 22, 2026, at 20:25 UTC, an automated script ran on a single router in Cloudflare&#39;s Miami data center. Routine maintenance: removing a prefix list that was no longer needed after an infrastructure upgrade in Bogotá. The change had been reviewed. It looked clean. Nine lines, deleted.&lt;/p&gt;
&lt;p&gt;Twenty-five minutes later, Cloudflare&#39;s network engineers had manually reverted the change and paused all automation. In that window, approximately 12 gigabits per second of traffic had been dropped. External networks across multiple continents were affected. A router in Florida had spent 25 minutes telling the global internet to route traffic through paths it was never supposed to use.&lt;/p&gt;
&lt;p&gt;No data was stolen. No systems were permanently damaged. But the incident exposed something the internet&#39;s architects have known for decades and never fully fixed: the protocol that routes all global internet traffic is built on trust, not verification. And trust, at internet scale, fails in ways that are difficult to predict and impossible to fully prevent.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-protocol-that-runs-everything&quot; tabindex=&quot;-1&quot;&gt;The Protocol That Runs Everything&lt;/h2&gt;
&lt;p&gt;BGP — the Border Gateway Protocol — is the system by which the internet&#39;s 75,000 networks tell each other where to send traffic. Every network, called an autonomous system, announces to its neighbors which IP address ranges it owns. BGP collects those announcements into a continuously updated routing table: a live map telling every network on the planet the best path to every destination.&lt;/p&gt;
&lt;p&gt;The map works because networks trust each other&#39;s announcements. If Cloudflare says it is responsible for a particular block of addresses, its neighbors believe it and propagate that information to their neighbors, who propagate it to theirs. The announcement spreads across the internet in seconds.&lt;/p&gt;
&lt;p&gt;There is no cryptographic verification. A network claiming ownership of an address block is taken at its word. This is not an oversight — it was a deliberate architectural decision made in 1989, when the internet was a small network of research institutions and adversarial behavior was not a design consideration. BGP was built for a different internet. It now runs a very different one.&lt;/p&gt;
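&lt;p&gt;The trust model is simple enough to capture in a toy simulation. The sketch below models only the acceptance logic; real BGP involves path attributes, policies, and best-path selection far beyond it, and the network names and prefix are invented:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from collections import deque

# Toy model of BGP&#39;s trust assumption: any network can announce any prefix,
# and neighbors accept and re-propagate it without verification.

def propagate(neighbors, origin, prefix):
    # Flood an announcement from the origin across the whole graph.
    routes = {origin: (prefix, origin)}
    queue = deque([origin])
    while queue:
        network = queue.popleft()
        for peer in neighbors[network]:
            if peer not in routes:          # accepted on first hearing
                routes[peer] = (prefix, origin)
                queue.append(peer)
    return routes

# Four networks in a line: A - B - C - D.
neighbors = {&quot;A&quot;: [&quot;B&quot;], &quot;B&quot;: [&quot;A&quot;, &quot;C&quot;], &quot;C&quot;: [&quot;B&quot;, &quot;D&quot;], &quot;D&quot;: [&quot;C&quot;]}

# A legitimately announces 203.0.113.0/24; every network learns the route.
routes = propagate(neighbors, &quot;A&quot;, &quot;203.0.113.0/24&quot;)
assert all(origin == &quot;A&quot; for _, origin in routes.values())

# Nothing stops D from announcing the same prefix: same code path,
# same blind acceptance. That is the missing verification step.
leaked = propagate(neighbors, &quot;D&quot;, &quot;203.0.113.0/24&quot;)
assert all(origin == &quot;D&quot; for _, origin in leaked.values())
&lt;/code&gt;&lt;/pre&gt;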
&lt;hr&gt;
&lt;h2 id=&quot;what-actually-happened-in-miami&quot; tabindex=&quot;-1&quot;&gt;What Actually Happened in Miami&lt;/h2&gt;
&lt;p&gt;The January 22 incident was not a hack. It was a configuration error — the kind that happens when automation removes a constraint that was doing more work than anyone realized.&lt;/p&gt;
&lt;p&gt;Cloudflare&#39;s engineers were cleaning up BGP announcements from Miami that related to a Bogotá data center no longer needing them. The change removed nine prefix list references across several export policies. A sample of the diff, showing the removal for two of those transit providers, looked like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[edit policy-options policy-statement 6-TELIA-ACCEPT-EXPORT term ADV-SITELOCAL-GRE-RECEIVER from]
-      prefix-list 6-BOG04-SITE-LOCAL;
[edit policy-options policy-statement 6-LEVEL3-ACCEPT-EXPORT term ADV-SITELOCAL-GRE-RECEIVER from]
-      prefix-list 6-BOG04-SITE-LOCAL;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Nine lines removed. Change reviewed and merged. Automation pushed it to the router.&lt;/p&gt;
&lt;p&gt;The problem was what remained. Without that specific prefix list acting as a boundary, the export policy defaulted to its surviving rule: &lt;code&gt;route-type internal&lt;/code&gt; — an instruction that, in the operating system running on these routers, essentially means &amp;quot;share everything we know about our own internal network.&amp;quot;&lt;/p&gt;
&lt;p&gt;Cloudflare had accidentally removed the filter keeping its internal traffic map private. The Miami router picked up a megaphone and started broadcasting Cloudflare&#39;s private internal routes to the public internet — telling every network on the planet that the best path for that traffic ran through Florida.&lt;/p&gt;
&lt;p&gt;The internet believed it. Traffic arrived from providers and peers who had no reason to question the announcement. BGP does not verify; it routes. The Miami data center was not built to handle it. Firewall filters designed to accept only Cloudflare&#39;s own traffic started dropping packets. Congestion built on backbone links. Legitimate customer traffic was delayed or lost.&lt;/p&gt;
&lt;p&gt;Cloudflare&#39;s team detected the anomaly within 15 minutes. A network operator manually reverted the configuration. Twenty-five minutes after it started, it was over.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-automation-paradox&quot; tabindex=&quot;-1&quot;&gt;The Automation Paradox&lt;/h2&gt;
&lt;p&gt;What makes this incident instructive is not that it happened — BGP route leaks happen regularly, to networks of every size — but what triggered it. Not a tired engineer making a mistake at 2am. A change reviewed and merged through Cloudflare&#39;s policy automation platform. A change that looked correct because, in isolation, it was. The deleted lines were unnecessary. The problem was the interaction between what was removed and what remained — a condition no reviewer caught.&lt;/p&gt;
&lt;p&gt;This is the central tension of modern internet infrastructure: the same tools that make it possible to manage tens of thousands of routers — infrastructure-as-code, policy automation, configuration management pipelines — are the tools that can turn a local error into a global event in seconds. A manual change affects one router at the pace a human can type. An automated change hits every router the platform touches, simultaneously, at machine speed.&lt;/p&gt;
&lt;p&gt;The pattern has a clear trajectory. The 2008 Pakistan Telecom incident that took YouTube offline for two hours: a manual misconfiguration by a single engineer. The 2019 Verizon incident that disrupted large portions of US internet traffic: a misconfigured BGP optimizer, an automated system. The 2021 Facebook outage that took down Facebook, Instagram, and WhatsApp globally for six hours: a command sent to the global backbone through remote access tooling. Each incident more automated than the last. Each one propagated faster.&lt;/p&gt;
&lt;p&gt;The network engineering community has a phrase for this: BGP is correct until it isn&#39;t. When it isn&#39;t, it propagates that incorrectness to every network that trusts it — which is all of them.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;why-redundancy-doesn&#39;t-help&quot; tabindex=&quot;-1&quot;&gt;Why Redundancy Doesn&#39;t Help&lt;/h2&gt;
&lt;p&gt;When the internet breaks, the standard response is redundancy: multiple ISPs, backup data centers, failover routing. Against a BGP leak, redundancy is theater.&lt;/p&gt;
&lt;p&gt;If your primary connection goes down, your router switches to a backup. But if your backup provider has also accepted the poisoned BGP route — which it will, because BGP trusts its neighbors — it will faithfully route your traffic into the same black hole. Both paths lead to the same wrong destination. Think of it as a GPS outage that affects every map app simultaneously. Having two apps doesn&#39;t help if both are pulling from the same corrupted signal.&lt;/p&gt;
&lt;p&gt;Firewalls and encrypted VPNs are equally useless here. They operate above the routing layer. If the road itself has been rerouted off a cliff, it doesn&#39;t matter how secure your car is. The protection mechanisms most people rely on assume the underlying routing is correct. When it isn&#39;t, they have nothing to work with.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-fix-that-exists-and-isn&#39;t-deployed&quot; tabindex=&quot;-1&quot;&gt;The Fix That Exists and Isn&#39;t Deployed&lt;/h2&gt;
&lt;p&gt;There is a solution to BGP&#39;s trust problem. RPKI — Resource Public Key Infrastructure — adds cryptographic verification to BGP route announcements. A network using RPKI signs its announcements with a digital certificate tied to its registered IP address allocations. Think of it as a passport for routing data: it proves the network actually owns the addresses it claims to represent. Networks that validate those signatures can automatically reject announcements that don&#39;t match — catching both accidental leaks and deliberate hijacks before they propagate.&lt;/p&gt;
&lt;p&gt;RPKI alone wouldn&#39;t have entirely prevented the Miami incident, since the leak originated from within Cloudflare&#39;s own legitimate network. But pairing it with the newer ASPA standard — Autonomous System Provider Authorization — would allow downstream networks to detect and drop these specific routing anomalies before they spread. As of early 2026, RPKI covers roughly 40 percent of global routing. The remaining 60 percent has no cryptographic protection whatsoever. The standard exists. Adoption is voluntary and slow.&lt;/p&gt;
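&lt;p&gt;What an RPKI validator actually checks is compact enough to sketch. The logic below follows the spirit of route-origin validation (RFC 6811): an announcement is valid only if a signed ROA covers the prefix, the origin AS matches, and the announced prefix is no more specific than the ROA allows. The ROA data and addresses are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import ipaddress

# A ROA says: this origin AS may announce this prefix,
# no more specific than max_length. Illustrative data only.
ROAS = [
    {&quot;prefix&quot;: &quot;203.0.113.0/24&quot;, &quot;origin_as&quot;: 64500, &quot;max_length&quot;: 24},
]

def validate(announced_prefix, origin_as):
    net = ipaddress.ip_network(announced_prefix)
    covered = False
    for roa in ROAS:
        roa_net = ipaddress.ip_network(roa[&quot;prefix&quot;])
        if net.subnet_of(roa_net):
            covered = True
            if origin_as == roa[&quot;origin_as&quot;] and net.prefixlen in range(roa[&quot;max_length&quot;] + 1):
                return &quot;valid&quot;
    # Covered by a ROA but failing its conditions is worse than unknown:
    # it is exactly the signature of a hijack or a leak.
    return &quot;invalid&quot; if covered else &quot;not-found&quot;

assert validate(&quot;203.0.113.0/24&quot;, 64500) == &quot;valid&quot;       # legitimate origin
assert validate(&quot;203.0.113.0/24&quot;, 64511) == &quot;invalid&quot;     # wrong origin AS
assert validate(&quot;198.51.100.0/24&quot;, 64500) == &quot;not-found&quot;  # no ROA covers it
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &quot;not-found&quot; branch is the adoption problem in miniature: a validator can only reject what a ROA exists to contradict, and most of the routing table has none.&lt;/p&gt;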
&lt;hr&gt;
&lt;h2 id=&quot;what-you-can-actually-do-with-this&quot; tabindex=&quot;-1&quot;&gt;What You Can Actually Do With This&lt;/h2&gt;
&lt;p&gt;You cannot personally deploy RPKI. What you can do is change how you diagnose outages.&lt;/p&gt;
&lt;p&gt;If your team suddenly loses access to a major cloud service — Microsoft 365, AWS, Google Workspace — while the rest of your internet appears to work normally, stop rebooting local hardware. Stop calling your ISP. The problem may be the global routing map, not your connection. Check &lt;a href=&quot;https://radar.cloudflare.com&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;Cloudflare Radar&lt;/a&gt; or &lt;a href=&quot;https://bgpstream.crosswork.cisco.com&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;BGPStream&lt;/a&gt;. If a route leak is active, local troubleshooting is useless. You are waiting for a network operator somewhere on the internet to manually correct the map. Depending on the incident, that takes minutes or hours.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-larger-problem&quot; tabindex=&quot;-1&quot;&gt;The Larger Problem&lt;/h2&gt;
&lt;p&gt;The Miami incident lasted 25 minutes. It was caught because Cloudflare has sophisticated monitoring and a team that responded within 15 minutes. That&#39;s a fast response by any measure. It was still long enough to cause measurable impact across multiple continents.&lt;/p&gt;
&lt;p&gt;BGP hijacks used to intercept traffic — routing it through a third-party network where it can be inspected before being forwarded to its intended destination — are documented and ongoing. The same trust model that made the Miami incident possible makes those attacks possible too. The difference is intent.&lt;/p&gt;
&lt;p&gt;BGP&#39;s designers in 1989 were solving a connectivity problem, not a security problem. The protocol has been patched and extended — RPKI, ASPA, BGP roles — but the trust model at its foundation hasn&#39;t been replaced. It can&#39;t be replaced without coordinated action from every major network on earth, which is why it hasn&#39;t happened.&lt;/p&gt;
&lt;p&gt;Nine lines of configuration code. Twenty-five minutes. Twelve gigabits per second of dropped traffic. The incident is resolved. The conditions that made it possible are not.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;Cloudflare&#39;s full incident report for January 22, 2026 is available at &lt;a href=&quot;https://blog.cloudflare.com/route-leak-incident-january-22-2026/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;blog.cloudflare.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>The Encryption Fight Isn&#39;t Over. It Just Got Quieter.</title>
    <link href="https://redposts.com/posts/eu-chat-control-csar-2026/" />
    <updated>2026-04-08T00:00:00Z</updated>
    <id>https://redposts.com/posts/eu-chat-control-csar-2026/</id>
    <content type="html">&lt;p&gt;The version of Chat Control that dominated the headlines is gone. The proposal that would have forced platforms to scan every private message, broken end-to-end encryption, and flagged millions of ordinary conversations to law enforcement — that version is off the table. The EU Parliament made that clear. And the interim scanning law that held things together expired on April 3, 2026.&lt;/p&gt;
&lt;p&gt;What&#39;s left is Chat Control 2.0. It&#39;s more complicated. And it&#39;s designed to be.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-just-happened&quot; tabindex=&quot;-1&quot;&gt;What Just Happened&lt;/h2&gt;
&lt;p&gt;On March 11, the European Parliament voted 458 to 103 to extend the existing temporary scanning framework — but with a critical condition. Any scanning had to be strictly targeted, limited to users specifically identified by a judge as reasonably suspected of involvement in child sexual abuse. Mass, indiscriminate scanning of entire user populations would not be permitted.&lt;/p&gt;
&lt;p&gt;The Council — representing EU member state governments — rejected that condition. They wanted broader scanning powers, without the judicial requirement. Trilogue negotiations broke down.&lt;/p&gt;
&lt;p&gt;Then came an unusual move. Around March 20, conservative factions pushed for a vote on March 26. Digital rights group EDRi argued publicly that the Council had not accepted a single one of Parliament&#39;s substantive demands, and that the push was an attempt to rewrite the outcome after negotiators failed to get what they wanted. The vote failed. By 307 votes to 306 — a single vote — Parliament refused to extend the framework on the Council&#39;s terms. The interim law expired on April 3 with no replacement in place.&lt;/p&gt;
&lt;p&gt;That is the win. It is not the end.&lt;/p&gt;
&lt;p&gt;We covered the lead-up to this vote in detail in our &lt;a href=&quot;https://redposts.com/posts/eu-chat-control-encryption/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;March 18 piece&lt;/a&gt;. What follows is what happened next.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-chat-control-2.0-actually-proposes&quot; tabindex=&quot;-1&quot;&gt;What Chat Control 2.0 Actually Proposes&lt;/h2&gt;
&lt;p&gt;The permanent regulation — formally called the Child Sexual Abuse Regulation, or CSAR — has been in negotiation since 2022. The Council&#39;s revised text dropped the most controversial mandatory encryption-breaking requirement. It preserved something more subtle.&lt;/p&gt;
&lt;p&gt;Three components are worth understanding.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Voluntary scanning with consequences.&lt;/strong&gt; The mandatory detection orders are gone — for now. Platforms can choose to scan unencrypted messages. But that &amp;quot;choice&amp;quot; comes with regulatory architecture attached. Platforms that don&#39;t scan face mandatory risk assessments and must demonstrate they have adopted &amp;quot;all reasonable mitigation measures.&amp;quot; Scanning becomes the path of least resistance. The EFF has warned that this model could lead to private mass-scanning of non-encrypted services and pressure big providers to limit the kinds of secure communication tools they offer. The word &amp;quot;voluntary&amp;quot; is doing a lot of work in a framework where the alternative is regulatory hostility.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Risk mitigation obligations.&lt;/strong&gt; Services classified as high-risk — which includes most social media and messaging platforms — must actively demonstrate they are addressing the risk of child abuse on their platforms. Patrick Breyer, the German digital rights lawyer who has tracked this legislation since it began, has described these requirements as effectively punishing privacy-respecting services — forcing them to implement surveillance tools to avoid liability. Platforms that protect user privacy are treated as suspect. Platforms that scan are compliant.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Mandatory age verification.&lt;/strong&gt; This is the element that gets the least attention and may matter most. Chat Control 2.0 drops mandatory scanning of end-to-end encrypted messages, but retains a requirement that users verify their age before accessing encrypted messaging services. Under that framework, anonymous encrypted communication ends. The encryption survives. The anonymity doesn&#39;t.&lt;/p&gt;
&lt;p&gt;This is a meaningful distinction that gets lost in most coverage. End-to-end encryption protects the content of what you say. Age verification eliminates the ability to communicate without being identified. A government cannot read your messages — but it knows who you are, when you sent them, and who you sent them to. For journalists, activists, abuse survivors, and anyone who relies on the ability to communicate privately, that is not a minor concession. The infrastructure of surveillance doesn&#39;t need to read your messages if it knows everything else.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;why-the-revised-version-is-more-dangerous&quot; tabindex=&quot;-1&quot;&gt;Why the Revised Version Is More Dangerous&lt;/h2&gt;
&lt;p&gt;The original Chat Control proposal was relatively easy to explain. It would have broken encryption. Full stop. That argument landed with technologists, civil liberties groups, and a significant portion of Parliament.&lt;/p&gt;
&lt;p&gt;Chat Control 2.0 does not break encryption directly. It surrounds encryption with conditions that make it difficult to use privately. Voluntary scanning pressure. Risk mitigation liability. Mandatory identity verification. None of these individually constitute a backdoor in the technical sense. Together, they achieve a similar outcome through regulatory architecture rather than code.&lt;/p&gt;
&lt;p&gt;That&#39;s a deliberate strategic adjustment. Laws that ban a technology are easy to identify and challenge. Laws that create conditions where using that technology safely becomes legally or commercially impractical are easier to pass unnoticed and impossible to explain in a headline. By the time most people understand what this version of the law actually does, it may have already passed.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-timeline&quot; tabindex=&quot;-1&quot;&gt;The Timeline&lt;/h2&gt;
&lt;p&gt;Trilogue negotiations are running on a fixed schedule. The April 16 session focuses on the legal framework for detection orders and the treatment of encryption — the most consequential session in the near term. A third session is scheduled for May 4, covering risk assessment obligations. A fourth and presumably final negotiation is set for June 29, with formal adoption by Parliament and Council expected in July 2026.&lt;/p&gt;
&lt;p&gt;The expiration of the interim law has added pressure. Proponents of broader scanning are framing the current absence of any legal framework as a &amp;quot;regulatory gap&amp;quot; — arguing that children are now less protected than they were two weeks ago. Whether that framing gains traction in the June session will largely determine what the final regulation looks like.&lt;/p&gt;
&lt;p&gt;The April 16 session is the most important near-term indicator. If the Council moves toward accepting Parliament&#39;s position — targeted scanning under judicial authorization only, no age verification mandate, encryption explicitly protected — there is a path to a regulation that addresses child safety without dismantling private communication infrastructure. If the Council holds, June becomes a high-stakes negotiation between two institutions with fundamentally different views of what this law should do.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-to-watch&quot; tabindex=&quot;-1&quot;&gt;What to Watch&lt;/h2&gt;
&lt;p&gt;The Center for Democracy and Technology has noted that the forthcoming regulation presents an opportunity to protect against both a reintroduction of indiscriminate mass scanning and other mechanisms that undermine online anonymity — and that mitigation measures must not inadvertently harm the people the legislation is supposed to protect.&lt;/p&gt;
&lt;p&gt;That framing matters. The debate is no longer about one bad proposal that can be blocked. It is about a set of interlocking obligations that, individually, sound reasonable — reduce risk, protect children, verify age — and, in combination, produce a surveillance framework without anyone having to call it one.&lt;/p&gt;
&lt;p&gt;The debate is not over. It has moved into a phase where it is harder to follow and easier to lose.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Before Your Browser Connects, Something Else Decides Who Answers</title>
    <link href="https://redposts.com/posts/dns-trust-problem/" />
    <updated>2026-04-10T00:00:00Z</updated>
    <id>https://redposts.com/posts/dns-trust-problem/</id>
    <content type="html">&lt;p&gt;Open a browser and type a web address. Before anything loads, before any connection is made, your device sends a question to a server you almost certainly didn&#39;t choose and have probably never thought about. That server looks up the address, returns an answer, and your device connects to whatever it was told to connect to.&lt;/p&gt;
&lt;p&gt;The system doing this is called DNS — the Domain Name System. It translates human-readable addresses into the numerical IP addresses machines use to find each other. Every device on the internet uses it, constantly, for everything. Unlike most of the infrastructure that powers the web, DNS operates almost entirely on trust rather than verification.&lt;/p&gt;
&lt;p&gt;That distinction is the entire problem.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;how-the-lookup-works&quot; tabindex=&quot;-1&quot;&gt;How the Lookup Works&lt;/h2&gt;
&lt;p&gt;When you type an address into your browser, your operating system doesn&#39;t go looking for it directly. It asks a resolver — a dedicated server whose job is to find the answer on your behalf.&lt;/p&gt;
&lt;p&gt;That resolver is almost certainly configured by your router. Your router got its configuration from your ISP when it first connected. Unless you&#39;ve explicitly changed it, every DNS query from every device on your network flows through a server you&#39;ve never interacted with and have no direct visibility into.&lt;/p&gt;
&lt;p&gt;The resolver works down a hierarchy. It starts at the top — asking one of thirteen root nameservers which server is responsible for the relevant top-level domain (.com, .org, .net). It then asks that TLD nameserver which server is authoritative for the specific domain. It then asks that authoritative server for the actual address. The authoritative server is the final source of truth — the server the domain owner controls and has explicitly configured.&lt;/p&gt;
&lt;p&gt;That entire chain — root, TLD, authoritative — typically completes in milliseconds. The resolver caches the answer so it doesn&#39;t have to repeat the process every time the same domain is requested. The cached answer lives for a period defined by the domain owner, called the TTL — time to live. While it&#39;s cached, every device using that resolver trusts it completely.&lt;/p&gt;
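&lt;p&gt;The walk down the hierarchy, and the cache that short-circuits it, can be sketched in a few lines. Everything below is an invented stand-in: the zone data, server names, and address are placeholders, and no real DNS queries are made.&lt;/p&gt;

```python
import operator
import time

# One-zone toy model of the delegation chain. All server names and the
# address are invented placeholders; nothing here touches real DNS.
ROOT = {"com": "tld-com"}                               # root knows the TLDs
TLD_SERVERS = {"tld-com": {"example.com": "auth-example"}}
AUTH_SERVERS = {"auth-example": {"www.example.com": ("93.184.216.34", 300)}}

cache = {}  # hostname: (address, expiry timestamp)

def resolve(hostname):
    """Walk root, TLD, then authoritative, and cache for the record's TTL."""
    now = time.time()
    hit = cache.get(hostname)
    if hit is not None and operator.le(now, hit[1]):    # not past expiry yet
        return hit[0]                                   # cached: zero lookups
    tld = hostname.rsplit(".", 1)[1]
    tld_server = ROOT[tld]                              # step 1: ask the root
    domain = ".".join(hostname.split(".")[-2:])
    auth_server = TLD_SERVERS[tld_server][domain]       # step 2: ask the TLD
    address, ttl = AUTH_SERVERS[auth_server][hostname]  # step 3: ask the authority
    cache[hostname] = (address, now + ttl)              # trusted until TTL runs out
    return address

print(resolve("www.example.com"))   # full walk: root, TLD, authoritative
print(resolve("www.example.com"))   # answered from cache
```

&lt;p&gt;The second call never leaves the resolver. That is the efficiency the design buys, and the exposure: whatever lands in that cache is served to everyone, unquestioned, until the TTL expires.&lt;/p&gt;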
&lt;p&gt;Three points in this chain can be compromised. Each one breaks trust differently. Each one has been actively exploited.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-router&quot; tabindex=&quot;-1&quot;&gt;The Router&lt;/h2&gt;
&lt;p&gt;The most direct attack doesn&#39;t touch DNS infrastructure at all. It changes which server your device asks in the first place.&lt;/p&gt;
&lt;p&gt;Your device receives its resolver address from your router via DHCP — the same protocol that assigns your IP address when you join a network. DHCP is the welcome packet your router hands every device that connects: here is your address, here is the gateway, here is who to ask when you need to find something. If an attacker controls the router, they control the welcome packet.&lt;/p&gt;
&lt;p&gt;Changing one field in the DHCP configuration — the DNS server address — is enough to redirect every query from every device on the network to a server the attacker controls. That server can operate mostly honestly, returning correct answers for the vast majority of requests so nothing appears wrong. But for specific targets — a banking login, a corporate email portal, a government authentication system — it returns a different address, one pointing to infrastructure the attacker controls, running a convincing copy of the real destination.&lt;/p&gt;
&lt;p&gt;The user arrives at a page that looks identical to the one they intended to reach. They enter their credentials. The attacker captures them. The server then forwards the connection to the real destination so the user logs in successfully and notices nothing unusual. No malware was installed. No user made a mistake. The compromise happened entirely in the infrastructure between the device and the internet.&lt;/p&gt;
&lt;p&gt;This attack has been running at scale against government ministries, diplomatic organizations, and military agencies across multiple continents. It works because home and small-office routers are the least-monitored assets on most networks — configured once, forgotten, left running unpatched firmware for years. Exploiting a documented, unpatched router vulnerability requires minimal effort. The payoff is passive, persistent access to the DNS traffic of every device behind that router, with the option to selectively intercept specific connections when a high-value target appears.&lt;/p&gt;
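&lt;p&gt;Stripped of its infrastructure, the selective-lie resolver reduces to almost nothing. The hostnames and addresses below are invented placeholders; a real rogue resolver would forward most queries upstream rather than answer from a table, but the logic is this simple:&lt;/p&gt;

```python
# Sketch of the selective-lie resolver described above. All names and
# addresses are invented placeholders.
UPSTREAM = {
    "news.example": "198.51.100.7",
    "weather.example": "198.51.100.8",
    "bank.example": "198.51.100.9",      # the real banking site
}
TARGETS = {
    "bank.example": "203.0.113.66",      # attacker-controlled lookalike
}

def rogue_resolve(hostname):
    """Answer honestly for everything except the high-value targets."""
    if hostname in TARGETS:
        return TARGETS[hostname]         # redirect to the phishing copy
    return UPSTREAM[hostname]            # honest answer, nothing looks wrong

print(rogue_resolve("news.example"))     # 198.51.100.7 (correct)
print(rogue_resolve("bank.example"))     # 203.0.113.66 (intercepted)
```

&lt;p&gt;Every honest answer is camouflage. The one dishonest answer only has to work once.&lt;/p&gt;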
&lt;hr&gt;
&lt;h2 id=&quot;the-resolver&#39;s-cache&quot; tabindex=&quot;-1&quot;&gt;The Resolver&#39;s Cache&lt;/h2&gt;
&lt;p&gt;Even if the resolver your device uses is legitimate and unmodified, its cache can be corrupted from outside.&lt;/p&gt;
&lt;p&gt;DNS queries and responses travel over UDP — a lightweight protocol that sends packets without confirming delivery or verifying the sender&#39;s identity. When a resolver sends a query to an authoritative server, it assigns that query a transaction ID — a 16-bit number — and expects the response to carry the same ID back. An attacker who can send a forged response with the correct transaction ID, arriving before the legitimate server&#39;s real response, can insert a false record into the resolver&#39;s cache.&lt;/p&gt;
&lt;p&gt;Once poisoned, the cache serves the forged answer to every device querying through that resolver, for as long as the record&#39;s TTL keeps it alive. The attacker doesn&#39;t need to intercept individual connections in real time. The corrupted cache does the work automatically, for every subsequent request.&lt;/p&gt;
&lt;p&gt;In 2008, security researcher Dan Kaminsky demonstrated exactly how exploitable this was. Resolvers at the time used a fixed source port for outgoing queries, meaning the only variable an attacker had to guess was the 16-bit transaction ID — 65,536 possible values. Kaminsky found that rather than waiting for a cached record to expire and then racing a single query, he could flood the resolver with requests for non-existent subdomains of the target domain, forcing fresh queries continuously. Each fresh query was a new race — thousands of forged responses sent with different transaction IDs, trying to land the correct one before the legitimate server responded.&lt;/p&gt;
&lt;p&gt;More critically, he found that rather than poisoning a single address record, he could target the authoritative nameserver record for an entire domain. Poisoning that record means the resolver stops asking the real authoritative server entirely — it asks the attacker&#39;s server instead. Every address under that domain, for every device using that resolver, goes wherever the attacker directs.&lt;/p&gt;
&lt;p&gt;The immediate response to Kaminsky&#39;s disclosure was to randomize both the transaction ID and the source port, expanding the guessing space from 65,536 to over four billion combinations. This made straightforward cache poisoning significantly harder, not impossible. Researchers have since demonstrated side-channel attacks that infer the source port through network timing, substantially collapsing the effective entropy. The patch raised the bar. The underlying architecture stayed the same.&lt;/p&gt;
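&lt;p&gt;The arithmetic behind the race is worth seeing directly. The burst size below is an assumption chosen for illustration; the two search-space sizes are the ones at issue:&lt;/p&gt;

```python
# Back-of-the-envelope odds for the race described above. The burst size
# is an assumed figure; only the search-space sizes come from the attack.
FORGED_PER_RACE = 100            # forged responses sent per induced query

def win_probability(search_space, races):
    """Chance that at least one forged response guesses right."""
    miss_per_race = 1 - FORGED_PER_RACE / search_space
    return 1 - miss_per_race ** races

# Fixed source port: only the 16-bit transaction ID to guess.
p_fixed = win_probability(65536, races=500)
# Randomized port and transaction ID: roughly 2**32 combinations.
p_random = win_probability(65536 * 65536, races=500)

print(round(p_fixed, 3))    # better than a coin flip after 500 races
print(round(p_random, 8))   # vanishingly small at the same effort
```

&lt;p&gt;Against a fixed source port, a few hundred induced races make success more likely than not. Against the randomized space, the same effort gets an attacker essentially nowhere, which is exactly why the side-channel attacks that recover the source port matter: they collapse the problem back toward the left column.&lt;/p&gt;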
&lt;hr&gt;
&lt;h2 id=&quot;the-authoritative-record&quot; tabindex=&quot;-1&quot;&gt;The Authoritative Record&lt;/h2&gt;
&lt;p&gt;The deepest attack in the DNS chain goes after the authoritative nameserver directly — the server every resolver in the world ultimately defers to for a given domain.&lt;/p&gt;
&lt;p&gt;In 2019, Cisco Talos documented a campaign targeting the DNS registrars and registries managing authoritative nameserver records for national security organizations across the Middle East and North Africa. The attackers didn&#39;t compromise the target organizations&#39; own systems. They compromised the registrar accounts — the management interfaces used to update DNS records — and changed the records themselves to point to infrastructure they controlled.&lt;/p&gt;
&lt;p&gt;When authoritative DNS records are modified at the registrar level, no cache needs to be poisoned and no resolver needs to be compromised. The correct answer, according to the global DNS hierarchy, is now the attacker&#39;s answer. Every resolver on the internet fetches it, caches it, and serves it. Email and web traffic for the affected organizations was silently intercepted for months before anyone noticed.&lt;/p&gt;
&lt;p&gt;A related technique exploits organizational housekeeping failures rather than active intrusion. When an organization decommissions a cloud service — a storage bucket, a CDN endpoint, a hosted application — it sometimes forgets to remove the DNS record pointing to it. That record keeps resolving, referencing an address that no longer belongs to anyone. An attacker who registers that abandoned resource inherits the DNS entry. Subdomains of major institutions — government health agencies, large professional services firms — have been successfully hijacked this way, not through any breach of the institution&#39;s own systems, but through stale records pointing to infrastructure anyone could claim.&lt;/p&gt;
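&lt;p&gt;The housekeeping failure reduces to a set difference: records that still resolve, pointing at resources nobody owns anymore. A sketch with invented names:&lt;/p&gt;

```python
# Records that outlive the resources they point to. All names invented.
dns_records = {
    "app.agency.example": "bucket-1234.cloud.example",
    "www.agency.example": "web-frontend.cloud.example",
    "old-portal.agency.example": "bucket-9999.cloud.example",
}
still_owned = {"bucket-1234.cloud.example", "web-frontend.cloud.example"}

# Anything resolving to an unowned resource is claimable: whoever
# registers the abandoned name inherits the subdomain pointing at it.
dangling = sorted(
    name for name, target in dns_records.items()
    if target not in still_owned
)
print(dangling)  # ['old-portal.agency.example']
```

&lt;p&gt;Attackers run this audit continuously against other people&#39;s domains. Most organizations never run it against their own.&lt;/p&gt;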
&lt;p&gt;The common thread is that DNS trust flows downward through the hierarchy without expiration. Once a record says something, the whole system believes it until someone authoritative changes it. The assumption built into this design — that authoritative records are controlled by the people who own the domain — holds until it doesn&#39;t.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;why-https-only-partially-helps&quot; tabindex=&quot;-1&quot;&gt;Why HTTPS Only Partially Helps&lt;/h2&gt;
&lt;p&gt;The standard assumption is that HTTPS handles this. The padlock means the connection is encrypted and the server&#39;s identity has been verified — so even if DNS sends you to the wrong address, the certificate check catches it and the browser warns you.&lt;/p&gt;
&lt;p&gt;This is true for many DNS attacks. An attacker redirecting your traffic to a server they control generally cannot present a valid certificate for the real domain, because certificate authorities verify domain ownership before issuing one. The browser&#39;s certificate check fails. A warning appears.&lt;/p&gt;
&lt;p&gt;But the check only works if the user stops.&lt;/p&gt;
&lt;p&gt;Years of certificate errors on internal corporate systems, self-signed certificates on network equipment, and misconfigured HTTPS on minor sites have trained many users to treat these warnings as friction. Clicking through on a compromised network triggers no further protection. The connection proceeds. The session is intercepted.&lt;/p&gt;
&lt;p&gt;HTTPS protects the channel. DNS hijacking attacks the addressing layer underneath it. The protection holds only where users treat certificate warnings as the hard stop they are designed to be — which means the security model depends less on cryptography than on behavior.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-fix-that-has-existed-since-2005&quot; tabindex=&quot;-1&quot;&gt;The Fix That Has Existed Since 2005&lt;/h2&gt;
&lt;p&gt;DNSSEC — DNS Security Extensions — adds cryptographic signatures to DNS responses. An authoritative server with DNSSEC enabled signs its records with a private key. The corresponding public key is published up the DNS hierarchy, creating a verifiable chain of trust from the root nameservers down to the specific domain. A validating resolver checks those signatures before accepting a response. A forged or tampered record fails validation and gets rejected.&lt;/p&gt;
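&lt;p&gt;The chain-of-trust idea can be sketched in a few lines, with one honest simplification: real DNSSEC uses public-key signatures (RSA or ECDSA) published as DNSKEY and DS records, while this sketch substitutes HMAC so it runs self-contained. The structure of the validation walk is the point:&lt;/p&gt;

```python
import hashlib
import hmac

def sign(key, message):
    """Stand-in for a zone's signature. Real DNSSEC uses public-key
    signatures; HMAC keeps this sketch self-contained."""
    return hmac.new(key, message, hashlib.sha256).digest()

root_key = b"root-zone-key"      # the one key validators trust a priori
com_key = b"com-zone-key"
example_key = b"example-zone-key"

record = b"www.example.com A 93.184.216.34"

# Each level vouches for the key below it, forming the chain of trust.
chain = [
    (com_key, sign(root_key, com_key)),         # root signs .com's key
    (example_key, sign(com_key, example_key)),  # .com signs the domain's key
]
record_sig = sign(example_key, record)          # the domain signs its records

def validate(trusted, chain, record, record_sig):
    """Walk the chain from the trusted root; reject any broken link."""
    key = trusted
    for child_key, child_sig in chain:
        if not hmac.compare_digest(sign(key, child_key), child_sig):
            return False
        key = child_key
    return hmac.compare_digest(sign(key, record), record_sig)

print(validate(root_key, chain, record, record_sig))           # True
print(validate(root_key, chain,
               b"www.example.com A 203.0.113.66", record_sig)) # False: forged
```

&lt;p&gt;A forged record fails at the final check; a forged delegation fails earlier in the walk. Either way, the validating resolver refuses the answer instead of caching it.&lt;/p&gt;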
&lt;p&gt;DNSSEC would stop cache poisoning. It would have caught the registrar-level modifications in the 2019 campaign. It is a direct technical answer to the core problem.&lt;/p&gt;
&lt;p&gt;It was standardized in 2005. As of February 2026, 4.27% of domains have it enabled.&lt;/p&gt;
&lt;p&gt;That figure comes from an analysis of 240 million domains using actual TLD zone files — not a survey or an estimate. Just over 10 million domains carried valid DNSSEC signatures. Among resolvers, the global validation rate sits at approximately 35%, meaning that even for signed domains, roughly two thirds of DNS queries worldwide are processed by resolvers that don&#39;t check signatures at all.&lt;/p&gt;
&lt;p&gt;The reasons are structural. DNSSEC requires coordinated action from two parties with no contractual relationship: the operator of the authoritative nameserver, and the operator of the recursive resolver used by end users. Neither is required to act. There is no padlock for DNSSEC, no visible signal to users that a domain is protected or unprotected, no consequence that makes the absence immediately apparent to anyone.&lt;/p&gt;
&lt;p&gt;There is also a failure mode that actively discourages adoption. A misconfigured DNSSEC setup — an expired signature, a botched key rollover, a mismatched record — causes validating resolvers to return an error rather than fall back to an insecure response. The domain goes completely unreachable for everyone using a validating resolver. Misconfiguration is not rare, and the consequences are severe enough that organizations where uptime matters have consistently chosen not to deploy a security extension that could take them offline if anything goes wrong.&lt;/p&gt;
&lt;p&gt;The comparison to HTTPS is instructive and slightly depressing. HTTPS went from under 40% adoption to over 90% in a few years — not because of a technical breakthrough, but because Let&#39;s Encrypt made certificates free and automated, and browsers started showing visible warnings on unencrypted pages. The incentive structure changed, and adoption followed. DNSSEC has neither a free automated deployment path for most registrars nor a user-visible signal when validation fails. Twenty years after standardization, the incentive structure hasn&#39;t changed, and the numbers reflect it.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-you&#39;re-left-with&quot; tabindex=&quot;-1&quot;&gt;What You&#39;re Left With&lt;/h2&gt;
&lt;p&gt;DNS is not a legacy system waiting to be replaced. It is the active infrastructure through which every internet connection begins — no parallel systems, no fallbacks, no opt-outs. When you connect to anything, your device asks a resolver where to go. The answer determines the destination.&lt;/p&gt;
&lt;p&gt;Three points in the resolution chain are actively exploited: the resolver your device uses, the cache that resolver maintains, and the authoritative records at the top of the hierarchy. The methods differ. The outcome is the same — your device connects somewhere it didn&#39;t intend to, with no indication anything is wrong.&lt;/p&gt;
&lt;p&gt;The cryptographic fix exists. It covers four percent of domains. The resolver configuration that would protect your device is sitting in a router you&#39;ve probably never logged into since the day it was installed.&lt;/p&gt;
&lt;p&gt;Certificate warnings are not browser bugs or security theater. They are the last signal the system is capable of producing when something in the addressing layer has gone wrong. A certificate mismatch on a login page — on any network you don&#39;t fully control, a hotel, an office, a shared connection — is not friction to click through. It is the only moment in the entire DNS resolution process where the system can tell you that the answer it received might not have been the right one.&lt;/p&gt;
&lt;p&gt;Most people never think about DNS. It&#39;s designed to be invisible. The attacks that exploit it are designed to stay that way. The certificate warning is the one moment that design breaks down — and it only works if you treat it as the warning it is.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Both Apps Use the Same Encryption. They Are Not the Same App.</title>
    <link href="https://redposts.com/posts/signal-vs-whatsapp/" />
    <updated>2026-04-14T00:00:00Z</updated>
    <id>https://redposts.com/posts/signal-vs-whatsapp/</id>
    <content type="html">&lt;p&gt;In 2016, a federal grand jury in the Eastern District of Virginia subpoenaed Signal&#39;s records on two of its users. Signal complied. It handed over the date each account was created and the date each account last connected to Signal&#39;s servers. Nothing else. No messages. No contacts. No conversation history. No location data. No device identifiers.&lt;/p&gt;
&lt;p&gt;By the end of 2021, Signal had received two more subpoenas — one from the Central District of California in the spring, another from Santa Clara County in the fall. Same result each time. Two data points per user.&lt;/p&gt;
&lt;p&gt;WhatsApp uses the same cryptographic protocol as Signal.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-the-protocol-actually-does&quot; tabindex=&quot;-1&quot;&gt;What the Protocol Actually Does&lt;/h2&gt;
&lt;p&gt;The Signal Protocol is a cryptographic system developed by Open Whisper Systems and released in 2013. It combines the Double Ratchet algorithm, elliptic curve Diffie-Hellman key exchange, and one-time prekeys to achieve two properties: end-to-end encryption and forward secrecy.&lt;/p&gt;
&lt;p&gt;End-to-end encryption means a message is encrypted on your device before it leaves, passes through the provider&#39;s servers in scrambled form, and is unscrambled only on the recipient&#39;s device. The server moves data it cannot read.&lt;/p&gt;
&lt;p&gt;Forward secrecy means a unique encryption key is generated for each message. If one key is ever exposed, it cannot be used to decrypt earlier messages. Each key is derived, used once, and discarded.&lt;/p&gt;
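&lt;p&gt;A minimal hash ratchet shows the derive-once-discard structure. The real Double Ratchet also mixes fresh Diffie-Hellman output into the chain; this sketch keeps only the one-way key derivation:&lt;/p&gt;

```python
import hashlib

def ratchet(chain_key):
    """Derive a one-time message key, then advance the chain key."""
    message_key = hashlib.sha256(chain_key + b"message").digest()
    next_chain_key = hashlib.sha256(chain_key + b"chain").digest()
    return message_key, next_chain_key

chain = b"initial shared secret"    # placeholder for the negotiated secret
keys = []
for _ in range(3):
    message_key, chain = ratchet(chain)   # the old chain key is discarded
    keys.append(message_key)

# Each message key is unique, and compromising the CURRENT chain key
# gives no way back: SHA-256 cannot be run in reverse to earlier keys.
print(len(set(keys)))   # 3 distinct one-time keys
```

&lt;p&gt;Seizing a device today yields the current chain state, not the keys that protected last month&#39;s messages. That is the property the protocol is built around.&lt;/p&gt;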
&lt;p&gt;Both Signal and WhatsApp implement this protocol. WhatsApp completed a full rollout across all message types in 2016. Will Cathcart, the head of WhatsApp, has said explicitly that WhatsApp uses the same security protocol as Signal. That is accurate. The layer protecting message content is the same on both platforms.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;metadata&quot; tabindex=&quot;-1&quot;&gt;Metadata&lt;/h2&gt;
&lt;p&gt;When you send a message on WhatsApp, the content is encrypted. The metadata — who you sent it to, when, how often, from which device, from which IP address, how large the message was, whether it contained media — is not. WhatsApp&#39;s privacy policy confirms it collects this and shares it with Meta: account information, device identifiers, usage patterns, connection information, IP addresses, location data, and information about the people and groups you communicate with.&lt;/p&gt;
&lt;p&gt;In 2014, former NSA director Michael Hayden said at a public debate at Johns Hopkins University that the US government kills people based on metadata — not message content. Communication patterns alone, who contacts whom, at what times, from what locations, with what regularity, can be enough to identify and target individuals. WhatsApp&#39;s messages are protected. The records surrounding them are collected and retained.&lt;/p&gt;
&lt;p&gt;Signal&#39;s privacy policy states it retains almost none of this. The three subpoenas are the most concrete evidence of what that means: the records were requested, the company complied, and there was almost nothing there.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;sealed-sender&quot; tabindex=&quot;-1&quot;&gt;Sealed Sender&lt;/h2&gt;
&lt;p&gt;In 2018, Signal introduced a feature called Sealed Sender. Even with end-to-end encryption in place, a messaging server traditionally needs to know where to route each message — which means knowing, at minimum, who sent what to whom.&lt;/p&gt;
&lt;p&gt;Sealed Sender restructures the message so the server can deliver it without learning the sender&#39;s identity. The sender encrypts the entire message — including their own identity — using the recipient&#39;s public key. The server receives a package it can route to the recipient&#39;s device, but cannot open to determine where it came from.&lt;/p&gt;
&lt;p&gt;The recipient&#39;s device decrypts the outer layer, recovers the sender&#39;s identity, and verifies it. The server handled delivery. It had no record of who initiated it.&lt;/p&gt;
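&lt;p&gt;The envelope structure, not the cryptography, is the interesting part. It can be sketched with a labeled wrapper standing in for the real public-key encryption to the recipient:&lt;/p&gt;

```python
import json

def encrypt_for(recipient, plaintext):
    """Stand-in ciphertext: a label plus payload, in place of real
    public-key encryption to the recipient."""
    return ("locked-for-" + recipient, plaintext)

def seal(sender, recipient, body):
    """Seal the sender's identity inside the encrypted payload."""
    inner = json.dumps({"from": sender, "body": body})
    # Only the recipient field stays readable; everything else is sealed.
    return {"to": recipient, "sealed": encrypt_for(recipient, inner)}

def server_view(envelope):
    """All the server learns: where to deliver. Not who sent it."""
    return envelope["to"], sorted(envelope.keys())

def open_envelope(recipient, envelope):
    label, plaintext = envelope["sealed"]
    assert label == "locked-for-" + recipient   # only they can open it
    return json.loads(plaintext)

env = seal("alice", "bob", "meet at noon")
print(server_view(env))                   # ('bob', ['sealed', 'to'])
print(open_envelope("bob", env)["from"])  # alice
```

&lt;p&gt;The server&#39;s entire view is the second line of output: a destination and an opaque blob. The sender&#39;s identity exists only inside the sealed layer, recoverable only by the recipient.&lt;/p&gt;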
&lt;p&gt;Signal published the full technical specification in 2018. WhatsApp does not implement Sealed Sender. Meta&#39;s servers have a record, for every message sent through WhatsApp, of who sent it to whom.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;the-backup-problem&quot; tabindex=&quot;-1&quot;&gt;The Backup Problem&lt;/h2&gt;
&lt;p&gt;For most of WhatsApp&#39;s history, the most significant gap between the two apps had nothing to do with what happened inside them.&lt;/p&gt;
&lt;p&gt;By default, WhatsApp backs up message history to Google Drive on Android and iCloud on iOS. For years, those backups were stored in a form that Google and Apple could read. A conversation protected from Meta in transit was sitting unprotected in third-party cloud storage after delivery — accessible to those companies, and to any legal process directed at them.&lt;/p&gt;
&lt;p&gt;WhatsApp introduced encrypted backups in 2021, protected by a user-held password or a 64-digit key. The feature is opt-in and not enabled by default. For users who haven&#39;t turned it on, that message history remains in third-party cloud storage without encryption.&lt;/p&gt;
&lt;p&gt;Signal does not back up message history to external cloud services.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-can-and-cannot-be-verified&quot; tabindex=&quot;-1&quot;&gt;What Can and Cannot Be Verified&lt;/h2&gt;
&lt;p&gt;Signal&#39;s client application is open source. Independent researchers can read the code, audit it, and verify that the app behaves as described.&lt;/p&gt;
&lt;p&gt;WhatsApp uses the Signal Protocol, which is open source. WhatsApp&#39;s client application itself is proprietary. The protocol is publicly verified. What happens inside the app before a message is encrypted — how data is handled, what runs in the background, how keys are managed — cannot be independently checked. WhatsApp&#39;s privacy claims depend on trusting Meta&#39;s representations.&lt;/p&gt;
&lt;p&gt;In 2019, a vulnerability in WhatsApp&#39;s proprietary code allowed attackers to install spyware on a device simply by placing a call that didn&#39;t need to be answered. The flaw was unrelated to the Signal Protocol. It was in the closed application code, and was being used against journalists and activists before it was identified and patched.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;post-quantum&quot; tabindex=&quot;-1&quot;&gt;Post-Quantum&lt;/h2&gt;
&lt;p&gt;In September 2023, Signal updated its key exchange to include a layer designed to resist future quantum computing attacks. The concern is a specific scenario: an adversary records encrypted traffic today, stores it, and decrypts it years from now once quantum computers capable of breaking current encryption exist. The capability doesn&#39;t currently exist, but it&#39;s a plausible enough long-term risk that it has a name — &amp;quot;harvest now, decrypt later.&amp;quot;&lt;/p&gt;
&lt;p&gt;The update, called PQXDH (Post-Quantum Extended Diffie-Hellman), adds a second cryptographic layer based on a different class of math — one considered harder for quantum computers to break. In October 2025, Signal extended this further with SPQR (Sparse Post-Quantum Ratchet), which applies that protection not just to the initial connection but to every message in a conversation.&lt;/p&gt;
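&lt;p&gt;The hybrid construction reduces to one idea: derive the session key from both shared secrets at once, so breaking either layer alone yields nothing. A sketch with placeholder secrets, noting that the real protocol uses a proper key-derivation function rather than a bare hash:&lt;/p&gt;

```python
import hashlib

# Both "secrets" are placeholders for real key-exchange outputs: one
# classical, one from a post-quantum key encapsulation mechanism.
classical_secret = b"output of elliptic-curve Diffie-Hellman"
post_quantum_secret = b"output of a lattice-based key encapsulation"

def combine(classical, post_quantum):
    """Derive the session key from both inputs together. PQXDH uses a
    proper KDF here; a bare hash illustrates the structure."""
    return hashlib.sha256(classical + post_quantum).hexdigest()

session_key = combine(classical_secret, post_quantum_secret)

# An adversary who later breaks the elliptic-curve layer with a quantum
# computer still lacks the post-quantum input, so the derived key holds.
wrong = combine(classical_secret, b"attacker guess")
print(session_key == wrong)   # False
```

&lt;p&gt;That is the whole defense against harvest-now-decrypt-later: recorded traffic stays unreadable unless both layers fall.&lt;/p&gt;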
&lt;p&gt;WhatsApp has not announced comparable changes.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&quot;what-the-differences-add-up-to&quot; tabindex=&quot;-1&quot;&gt;What the Differences Add Up To&lt;/h2&gt;
&lt;p&gt;WhatsApp retains metadata about who communicates with whom and when. Its servers have a record of sender and recipient for every message. Users who haven&#39;t opted into encrypted backups have message history stored in Google&#39;s or Apple&#39;s infrastructure. The application code cannot be independently audited.&lt;/p&gt;
&lt;p&gt;Signal retains almost no metadata. Its Sealed Sender feature means even Signal&#39;s servers lack a complete sender record for messages. Message history stays on-device. The application code is open to review.&lt;/p&gt;
&lt;p&gt;Signal&#39;s own limitation is worth stating: an account is tied to a phone number, which in most countries links back to a government-issued identity. Signal added usernames so users don&#39;t have to share that number, but the phone number remains the underlying account identifier. For someone who needs their identity protected — not just their messages — that&#39;s a structural limitation.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key Takeaway:&lt;/strong&gt; While both apps share the same cryptographic foundation, their architectural priorities result in two very different privacy profiles.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style=&quot;text-align:left&quot;&gt;Feature&lt;/th&gt;
&lt;th style=&quot;text-align:left&quot;&gt;WhatsApp&lt;/th&gt;
&lt;th style=&quot;text-align:left&quot;&gt;Signal&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;strong&gt;Core Protocol&lt;/strong&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Signal Protocol&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Signal Protocol&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;strong&gt;Metadata Collection&lt;/strong&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;High (IP, Logs, Contacts)&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Minimal (Timestamps)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;strong&gt;Sender Anonymity&lt;/strong&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;No (Server knows sender)&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Yes (&lt;strong&gt;Sealed Sender&lt;/strong&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;strong&gt;Quantum Resistance&lt;/strong&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;No current implementation&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;strong&gt;SPQR&lt;/strong&gt; (as of 2025)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;strong&gt;Cloud Backups&lt;/strong&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Encrypted (Opt-in)&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;On-device only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;text-align:left&quot;&gt;&lt;strong&gt;Code Verification&lt;/strong&gt;&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Proprietary (Closed)&lt;/td&gt;
&lt;td style=&quot;text-align:left&quot;&gt;Open Source (Auditable)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Signal was asked three times, across multiple jurisdictions, for user data. Each time, it produced two timestamps.&lt;/p&gt;
</content>
  </entry>
</feed>