Nine Deleted Lines Rerouted the Internet
On January 22, 2026, nine lines were removed from a configuration file in Miami. Twenty-five minutes later, Cloudflare's engineers were manually reverting the change. Here is exactly what happened — and what it reveals about the protocol routing all global internet traffic.
On January 22, 2026, at 20:25 UTC, an automated script ran on a single router in Cloudflare's Miami data center. Routine maintenance: removing a prefix list that was no longer needed after an infrastructure upgrade in Bogotá. The change had been reviewed. It looked clean. Nine lines, deleted.
Twenty-five minutes later, Cloudflare's network engineers had manually reverted the change and paused all automation. In that window, traffic was being dropped at a rate of roughly 12 gigabits per second. External networks across multiple continents were affected. A router in Florida had spent 25 minutes telling the global internet to route traffic through paths it was never supposed to use.
No data was stolen. No systems were permanently damaged. But the incident exposed something the internet's architects have known for decades and never fully fixed: the protocol that routes all global internet traffic is built on trust, not verification. And trust, at internet scale, fails in ways that are difficult to predict and impossible to fully prevent.
The Protocol That Runs Everything
BGP, the Border Gateway Protocol, is the system by which the internet's roughly 75,000 networks tell each other where to send traffic. Every network, called an autonomous system, announces to its neighbors which IP address ranges it owns. Each network assembles the announcements it hears into a continuously updated routing table: a live map of the best path it knows to every destination on the planet.
The map works because networks trust each other's announcements. If Cloudflare says it is responsible for a particular block of addresses, its neighbors believe it and propagate that information to their neighbors, who propagate it to theirs. The announcement spreads across the internet in seconds.
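To make that trust model concrete, here is a toy sketch in Python of how an announcement floods from neighbor to neighbor. It is nowhere near real BGP (no sessions, no policy, no attributes beyond a path), and the AS numbers and prefixes are invented, but it captures the one property that matters here: every network accepts and re-shares whatever it hears.

from collections import deque

# Toy model of BGP's trust assumption. Not real BGP: no sessions, no policy,
# no attributes beyond an AS path. AS numbers and prefixes are invented.
NEIGHBORS = {
    64501: [64502, 64503],
    64502: [64501, 64504],
    64503: [64501, 64504],
    64504: [64502, 64503],
}

# Each autonomous system keeps its own table: prefix -> AS path it would use.
tables = {asn: {} for asn in NEIGHBORS}

def announce(origin_asn, prefix):
    """Flood an announcement outward; every AS believes what its neighbor says."""
    tables[origin_asn][prefix] = [origin_asn]
    queue = deque([(origin_asn, [origin_asn])])
    while queue:
        asn, as_path = queue.popleft()
        for neighbor in NEIGHBORS[asn]:
            if neighbor in as_path:      # loop prevention, the one check BGP does perform
                continue
            new_path = [neighbor] + as_path
            best = tables[neighbor].get(prefix)
            if best is None or len(new_path) < len(best):
                tables[neighbor][prefix] = new_path   # accepted on trust, no proof of ownership
                queue.append((neighbor, new_path))

# AS 64501 claims 198.51.100.0/24; nothing anywhere verifies that the claim is true.
announce(64501, "198.51.100.0/24")
for asn in sorted(tables):
    print(asn, tables[asn])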
There is no cryptographic verification. A network claiming ownership of an address block is taken at its word. This is not an oversight — it was a deliberate architectural decision made in 1989, when the internet was a small network of research institutions and adversarial behavior was not a design consideration. BGP was built for a different internet. It now runs a very different one.
What Actually Happened in Miami
The January 22 incident was not a hack. It was a configuration error — the kind that happens when automation removes a constraint that was doing more work than anyone realized.
Cloudflare's engineers were cleaning up BGP announcements made from Miami on behalf of a Bogotá data center that no longer needed them after an infrastructure upgrade. The change removed nine prefix-list references across several export policies. A sample of the diff, showing the removal for two of the affected transit providers, looked like this:
[edit policy-options policy-statement 6-TELIA-ACCEPT-EXPORT term ADV-SITELOCAL-GRE-RECEIVER from]
- prefix-list 6-BOG04-SITE-LOCAL;
[edit policy-options policy-statement 6-LEVEL3-ACCEPT-EXPORT term ADV-SITELOCAL-GRE-RECEIVER from]
- prefix-list 6-BOG04-SITE-LOCAL;
Nine lines removed. Change reviewed and merged. Automation pushed it to the router.
The problem was what remained. In this policy language, the match conditions inside a term are ANDed together, so deleting one does not disable the term; it broadens it. Without that prefix list acting as a boundary, the term fell back to its one surviving condition: route-type internal, an instruction that, in the operating system running on these routers, essentially means "share everything we know about our own internal network."
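A small sketch of how that interaction plays out, with invented route contents and condition names; it models only the AND-ing of match conditions described above, not the routers' actual policy engine.

# Simplified model of an export-policy term. Conditions are ANDed, so a route
# must satisfy every remaining condition to be exported. The route contents,
# prefix list, and condition names below are invented for illustration.
SITE_LOCAL_PREFIXES = {"192.0.2.0/24"}   # stand-in for the deleted prefix list

def in_site_local_list(route):
    return route["prefix"] in SITE_LOCAL_PREFIXES

def is_internal(route):
    return route["route_type"] == "internal"   # stand-in for 'route-type internal'

def term_exports(route, conditions):
    return all(condition(route) for condition in conditions)

# An internal backbone route that was never meant to leave the network.
backbone_route = {"prefix": "10.20.30.0/24", "route_type": "internal"}

before_change = [in_site_local_list, is_internal]   # exports only Bogotá site-local routes
after_change = [is_internal]                        # exports every internal route

print("exported before:", term_exports(backbone_route, before_change))  # False
print("exported after: ", term_exports(backbone_route, after_change))   # True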
Cloudflare had accidentally removed the filter keeping its internal traffic map private. The Miami router picked up a megaphone and started broadcasting Cloudflare's private internal routes to the public internet — telling every network on the planet that the best path for that traffic ran through Florida.
The internet believed it. Traffic arrived from providers and peers who had no reason to question the announcement. BGP does not verify; it routes. The Miami data center was not built to absorb that flood of redirected traffic. Firewall filters designed to accept only Cloudflare's own traffic started dropping packets. Congestion built on backbone links. Legitimate customer traffic was delayed or lost.
Cloudflare's team detected the anomaly within 15 minutes. A network operator manually reverted the configuration. Twenty-five minutes after it started, it was over.
The Automation Paradox
What makes this incident instructive is not that it happened — BGP route leaks happen regularly, to networks of every size — but what triggered it. Not a tired engineer making a mistake at 2am. A change reviewed and merged through Cloudflare's policy automation platform. A change that looked correct because, in isolation, it was. The deleted lines were unnecessary. The problem was the interaction between what was removed and what remained — a condition no reviewer caught.
This is the central tension of modern internet infrastructure: the same tools that make it possible to manage tens of thousands of routers — infrastructure-as-code, policy automation, configuration management pipelines — are the tools that can turn a local error into a global event in seconds. A manual change affects one router at the pace a human can type. An automated change hits every router the platform touches, simultaneously, at machine speed.
The pattern has a clear trajectory. The 2008 Pakistan Telecom incident that took YouTube offline for two hours: a manual misconfiguration by a single engineer. The 2019 Verizon incident that disrupted large portions of US internet traffic: a misconfigured BGP optimizer, an automated system. The 2021 Facebook outage that took down Facebook, Instagram, and WhatsApp globally for six hours: a command sent to the global backbone through remote access tooling. Each incident more automated than the last. Each one propagated faster.
The network engineering community has a phrase for this: BGP is correct until it isn't. When it isn't, it propagates that incorrectness to every network that trusts it — which is all of them.
Why Redundancy Doesn't Help
When the internet breaks, the standard response is redundancy: multiple ISPs, backup data centers, failover routing. Against a BGP leak, redundancy is theater.
If your primary connection goes down, your router switches to a backup. But if your backup provider has also accepted the poisoned BGP route — which it will, because BGP trusts its neighbors — it will faithfully route your traffic into the same black hole. Both paths lead to the same wrong destination. Think of it as a GPS outage that affects every map app simultaneously. Having two apps doesn't help if both are pulling from the same corrupted signal.
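If it helps to see it stripped down: a minimal sketch, with invented provider names and prefixes, of why switching upstreams changes nothing when both upstreams learned the same poisoned entry.

# Why failover doesn't help: both upstream providers learned the same poisoned
# route from the same trusting BGP mesh. Provider names and prefixes invented.
LEAKED_PATH = "via the leaked Miami announcement"

provider_tables = {
    "primary_isp": {"203.0.113.0/24": LEAKED_PATH},
    "backup_isp":  {"203.0.113.0/24": LEAKED_PATH},   # identical poisoned entry
}

def route_via(provider, destination_prefix):
    return provider_tables[provider][destination_prefix]

# Failing over to the backup changes the provider, not where the traffic ends up.
print(route_via("primary_isp", "203.0.113.0/24"))
print(route_via("backup_isp", "203.0.113.0/24"))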
Firewalls and encrypted VPNs are equally useless here. They operate above the routing layer. If the road itself has been rerouted off a cliff, it doesn't matter how secure your car is. The protection mechanisms most people rely on assume the underlying routing is correct. When it isn't, they have nothing to work with.
The Fix That Exists and Isn't Deployed
There is a solution to BGP's trust problem. RPKI, the Resource Public Key Infrastructure, adds cryptographic verification to BGP route announcements. A network using RPKI publishes signed statements, called Route Origin Authorizations, declaring which autonomous system is allowed to originate each of its address blocks; the signatures chain back to certificates tied to the network's registered IP address allocations. Think of it as a passport for routing data: it proves the network actually owns the addresses it claims to represent. Networks that validate those statements can automatically reject announcements that don't match, catching both accidental leaks and deliberate hijacks before they propagate.
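Origin validation boils down to one question: does a signed Route Origin Authorization cover this prefix and name this origin AS? A minimal sketch of that check follows, with invented ROAs, prefixes, and AS numbers; real validators work from the full RPKI repository, not a hard-coded list.

# Miniature route-origin validation: an announcement is "valid" only if a
# signed ROA covers the prefix and authorizes the originating AS.
# ROAs, prefixes, and AS numbers below are invented for illustration.
import ipaddress

# Each ROA: (covered prefix, maximum allowed prefix length, authorized origin AS)
ROAS = [
    (ipaddress.ip_network("198.51.100.0/22"), 24, 64501),
]

def validate_origin(prefix, origin_asn):
    announced = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, max_len, roa_asn in ROAS:
        if announced.subnet_of(roa_net):
            covered = True
            if origin_asn == roa_asn and announced.prefixlen <= max_len:
                return "valid"
    # Covered by a ROA but wrong origin or too-specific prefix -> invalid.
    # Not covered by any ROA at all -> unknown (most of the internet today).
    return "invalid" if covered else "unknown"

print(validate_origin("198.51.100.0/24", 64501))  # valid
print(validate_origin("198.51.100.0/24", 64666))  # invalid: wrong origin AS
print(validate_origin("203.0.113.0/24", 64501))   # unknown: no ROA covers it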
RPKI alone wouldn't have entirely prevented the Miami incident, since the leak originated from within Cloudflare's own legitimate network. But pairing it with the newer ASPA standard — Autonomous System Provider Authorization — would allow downstream networks to detect and drop these specific routing anomalies before they spread. As of early 2026, RPKI covers roughly 40 percent of global routing. The remaining 60 percent has no cryptographic protection whatsoever. The standard exists. Adoption is voluntary and slow.
What You Can Actually Do With This
You cannot personally deploy RPKI. What you can do is change how you diagnose outages.
If your team suddenly loses access to a major cloud service — Microsoft 365, AWS, Google Workspace — while the rest of your internet appears to work normally, stop rebooting local hardware. Stop calling your ISP. The problem may be the global routing map, not your connection. Check Cloudflare Radar or BGPStream. If a route leak is active, local troubleshooting is useless. You are waiting for a network operator somewhere on the internet to manually correct the map. Depending on the incident, that takes minutes or hours.
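If you want something scriptable for that check, public BGP data sources expose what the rest of the internet currently believes about a prefix. The rough sketch below uses RIPE's public RIPEstat data API, another public source alongside the two named above; the endpoint and field names are written from memory and should be verified against the API's documentation before this goes into a runbook.

# Quick check of what public BGP collectors currently see for a prefix,
# using RIPE's public RIPEstat data API. The endpoint and field names are
# an assumption to verify against stat.ripe.net docs before relying on them.
import json
import urllib.request

PREFIX = "198.51.100.0/24"   # replace with the address block you are diagnosing
URL = f"https://stat.ripe.net/data/routing-status/data.json?resource={PREFIX}"

with urllib.request.urlopen(URL, timeout=10) as response:
    payload = json.load(response)

data = payload.get("data", {})
# The interesting questions during an incident are "which AS is originating
# this prefix right now?" and "how widely is it visible?"
print("fields returned:", sorted(data.keys()))
print("origins:", data.get("origins"))
print("visibility:", data.get("visibility"))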
The Larger Problem
The Miami incident lasted 25 minutes. It was caught because Cloudflare has sophisticated monitoring and a team that responded within 15 minutes. That's a fast response by any measure. It was still long enough to cause measurable impact across multiple continents.
BGP hijacks used to intercept traffic — routing it through a third-party network where it can be inspected before being forwarded to its intended destination — are documented and ongoing. The same trust model that made the Miami incident possible makes those attacks possible too. The difference is intent.
BGP's designers in 1989 were solving a connectivity problem, not a security problem. The protocol has been patched and extended — RPKI, ASPA, BGP roles — but the trust model at its foundation hasn't been replaced. It can't be replaced without coordinated action from every major network on earth, which is why it hasn't happened.
Nine lines of configuration code. Twenty-five minutes. Twelve gigabits per second of dropped traffic. The incident is resolved. The conditions that made it possible are not.
Cloudflare's full incident report for January 22, 2026 is available at blog.cloudflare.com.