Cloudflare IPv6 Route Leak Causes 25-Minute Disruption
Cloudflare has published a technical post-incident report about a BGP route leak on January 22, 2026 that redirected IPv6 traffic through its Miami, Florida data center. According to Cloudflare, an automation error pushed a faulty routing policy, causing one edge router to advertise certain BGP prefixes to external networks. The shift affected Cloudflare customers and also pulled third-party traffic into Miami.
What happened
Cloudflare says the leak began at 20:25 UTC, when automation ran on a single Miami edge router and the router started sending unexpected route advertisements to transit providers and peers. The disruption lasted about 25 minutes: an operator rolled back the change and paused automation on that router, after which traffic patterns returned to normal.
Source: cloudflare.com – Cloudflare route leak example showing peer routes leaked to a transit provider
Importantly, the incident affected only IPv6 traffic. The route leak congested backbone paths through Miami, so some users saw higher latency while others saw packet loss.
Root cause
Cloudflare traced the problem to a routing policy change intended to stop sending some IPv6 traffic through Miami toward Bogotá, a path that earlier infrastructure upgrades had made unnecessary. The new policy, however, removed certain prefix-list matches, leaving the export policy too permissive.
Cloudflare explains that on JunOS and JunOS EVO, a match on route-type internal can catch more routes than engineers expect, including iBGP-learned routes. Because of this behavior, the router treated internal IPv6 prefixes as eligible for export and advertised them to external BGP neighbors in Miami.
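To illustrate why removing the prefix-list condition widens a policy term, here is a minimal Python sketch. It is not Cloudflare's tooling or Junos configuration; the route data, prefix list, and term logic are invented to show how a term that matches only on route type ends up accepting every internal route.

```python
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str
    route_type: str   # "internal" (iBGP/IGP) or "external" (eBGP)

# Hypothetical prefix list the original export term also matched on.
BOGOTA_PREFIXES = {"2001:db8:100::/40"}

def term_matches(route: Route, require_prefix_list: bool) -> bool:
    """Return True if the export term accepts this route.

    With the prefix-list condition present, only the intended prefixes
    are exported. With it removed, matching on route type alone accepts
    every internal route, including iBGP-learned ones.
    """
    if route.route_type != "internal":
        return False
    if require_prefix_list:
        return route.prefix in BOGOTA_PREFIXES
    return True  # too permissive: any internal route becomes exportable

routes = [
    Route("2001:db8:100::/40", "internal"),   # intended export
    Route("2001:db8:abc::/48", "internal"),   # iBGP route that should stay internal
]

print([r.prefix for r in routes if term_matches(r, require_prefix_list=True)])
# ['2001:db8:100::/40']
print([r.prefix for r in routes if term_matches(r, require_prefix_list=False)])
# ['2001:db8:100::/40', '2001:db8:abc::/48']  <- the leak
```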
What the leak did to traffic
Once transit providers and peers accepted the leaked prefixes, they forwarded more traffic toward Miami. That shift congested the backbone links between Miami and Atlanta, increasing packet loss and latency.
Cloudflare also reported heavy filtering at the affected edge: router firewall filters dropped traffic destined for prefixes the edge should not accept, peaking at about 12 Gbps of dropped ingress traffic in Miami.
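As a rough illustration of what such a filter does, the Python sketch below forwards only packets whose destination falls within prefixes the site is meant to serve and drops everything else. This is a conceptual sketch, not Cloudflare's firewall configuration; the prefixes are documentation examples.

```python
import ipaddress

# Hypothetical set of prefixes this edge site is allowed to accept traffic for.
ACCEPTED_PREFIXES = [ipaddress.ip_network("2001:db8:1::/48")]

def ingress_allowed(dst: str) -> bool:
    """Forward-or-drop decision: accept only destinations inside served prefixes."""
    addr = ipaddress.ip_address(dst)
    return any(addr in net for net in ACCEPTED_PREFIXES)

print(ingress_allowed("2001:db8:1::53"))    # True  -> forwarded
print(ingress_allowed("2001:db8:ffff::1"))  # False -> dropped, like traffic pulled in by leaked prefixes
```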
Timeline highlights
Cloudflare shared a clear timeline for the event.
- 19:52 UTC: Engineers merged the change that triggered the bug
- 20:25 UTC: Automation ran on one Miami router and the impact began
- 20:40 UTC: The network team started investigating unexpected advertisements
- 20:50 UTC: The operator reverted the config and paused automation, ending the impact
- 21:47 UTC: Engineers reverted the triggering change in the repository
- 22:40 UTC: Cloudflare unpaused automation after validation
Fixes and preventive changes
Cloudflare says it is fixing the automation gap that allowed the bad policy to ship. In addition, it plans stronger routing guardrails to reduce future blast radius.
Planned changes include community-based protections that reject routes learned from providers and peers in export policies toward external networks, as well as automated routing policy checks in CI/CD to catch empty or unsafe terms. The company also wants faster detection of risky automation outcomes. Finally, Cloudflare mentioned validation work toward RFC 9234 Only-to-Customer behavior and continued support for RPKI ASPA to help detect abnormal AS paths.
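Cloudflare has not published the checks themselves, but as a sketch of the CI/CD idea, a lint step could scan rendered policy terms and fail the pipeline when an accept term carries no match conditions, since an unconstrained term accepts (and exports) every route it sees. The term names and data structure below are hypothetical.

```python
# Minimal CI-style lint: flag export policy terms that accept routes
# without any match conditions, which would match everything.
def find_unsafe_terms(policy: dict) -> list[str]:
    unsafe = []
    for term_name, term in policy.get("terms", {}).items():
        if term.get("action") == "accept" and not term.get("match"):
            unsafe.append(term_name)
    return unsafe

# Hypothetical rendered policy resembling the failure mode described above:
# the prefix-list match was dropped, leaving an accept term with no conditions.
rendered_policy = {
    "name": "EXPORT-TO-PEERS-V6",
    "terms": {
        "announce-anycast": {"match": {"prefix-list": "ANYCAST-V6"}, "action": "accept"},
        "announce-bogota": {"match": {}, "action": "accept"},  # unsafe: matches everything
    },
}

problems = find_unsafe_terms(rendered_policy)
if problems:
    raise SystemExit(f"Unsafe export terms with no match conditions: {problems}")
```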
Cloudflare apologized for the disruption. The company said it will strengthen routing automation and route leak controls going forward.
An automated routing policy configuration error caused us to leak some Border Gateway Protocol (BGP) prefixes unintentionally from a router at our Miami data center. We discuss the impact and the changes we are implementing as a result. https://t.co/6vtdNMhYlN
— Cloudflare (@Cloudflare) January 23, 2026