For 74 minutes on Monday, November 12, traffic to Google's data centers was hijacked -- effectively, a denial-of-service attack on the company's services.
BGPmon, a firm that tracks the paths Internet traffic takes, was the first to report the hijacking. The firm observed a Nigerian ISP -- MainOne -- announcing to its network peers that it now hosted a number of IP address blocks belonging to Google's data centers. This was, of course, not true.
Customer behind Cogent and NTT experienced the @google outages likely in 5 waves between these times (UTC) 74 minutes total:

21:13 - 21:17 4min
21:18 - 21:21 3min
21:22 - 21:28 6min
21:30 - 21:50 20min
21:51 - 22:32 41min

example ASpath: 174 2914 20485 4809 37282 15169

— BGPmon.net (@bgpmon) November 12, 2018
The routing announcement nonetheless had its effect: traffic that should have gone to Google was diverted to MainOne instead.
Since MainOne has a "peering" relationship with China Telecom, the incorrect routes were propagated from China Telecom through TransTelecom to NTT and other transit ISPs. MainOne also has a peering relationship with Google -- through a relationship with IXPN in Lagos -- and direct routes to Google, which leaked into China Telecom as well.
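The AS path in BGPmon's tweet tells the propagation story in miniature. The short script below -- an illustrative sketch, not anything the monitoring firms published -- decodes that path using the well-known names of the autonomous systems involved; traffic follows the path right to left toward the purported origin.

```python
# Decode the example AS path from BGPmon's report. Each number is an
# Autonomous System Number (ASN); the rightmost ASN is the network
# claiming to originate the prefix.
AS_NAMES = {
    174:   "Cogent",
    2914:  "NTT",
    20485: "TransTelecom",
    4809:  "China Telecom",
    37282: "MainOne",
    15169: "Google",
}

as_path = "174 2914 20485 4809 37282 15169"
hops = [int(asn) for asn in as_path.split()]

# The rightmost ASN is the purported origin of the route.
origin = hops[-1]

print(" -> ".join(AS_NAMES.get(asn, f"AS{asn}") for asn in hops))
print(f"Purported origin: AS{origin} ({AS_NAMES[origin]})")
```

Here the origin is legitimately Google (AS15169), but the route reached Cogent and NTT customers by way of MainOne, China Telecom and TransTelecom -- exactly the leak path described above.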
ThousandEyes, another traffic monitoring firm, was also observing the situation. Researchers there noted in their blog that "the outage not only affected G Suite, but also Google Search as well as Google Analytics."
The specific path the data took raised eyebrows as well. First, traffic to Google was being dropped by edge routers at China Telecom. Additionally, a Russian entity -- the TransTelecom ISP mentioned previously -- was found in the traffic path.
As ThousandEyes detailed it:
This also put valuable Google traffic in the hands of ISPs in countries with a long history of Internet surveillance. Overall ThousandEyes detected over 180 prefixes affected by this route leak, which covers a vast scope of Google services.
In the last few weeks, concern has been mounting over China Telecom's history of Border Gateway Protocol (BGP) hijacks, as delineated in a Naval War College article. The article has political overtones in its analysis, but its major findings have since been confirmed by Oracle's Internet Intelligence division.
That existing concern heightened anxiety about the routing problem, especially since it cut Google's business services off from a large chunk of the world's users.
In its own blog post, Google noted: "Throughout the duration of this issue Google services were operating as expected and we believe the root cause of the issue was external to Google."
MainOne finally confessed to the world on Twitter on November 13.
We have investigated the advertisement of @Google prefixes through one of our upstream partners. This was an error during a planned network upgrade due to a misconfiguration on our BGP filters. The error was corrected within 74mins & processes put in place to avoid reoccurrence— MainOne (@Mainoneservice) November 13, 2018
In other words: "Move along, internet. Nothing to see here but a bad filter."
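MainOne's explanation points at egress prefix filtering: before re-announcing routes to an upstream, an ISP is supposed to check each prefix against the address space it is actually authorized to originate or transit. Here is a minimal sketch of that logic -- the allow-list prefix is hypothetical, not MainOne's real address space.

```python
import ipaddress

# Hypothetical allow-list of address space this network is authorized
# to announce upstream. A real ISP would build this from its own
# registry records; the /16 here is purely illustrative.
ALLOWED = [ipaddress.ip_network("196.216.0.0/16")]

def should_announce(prefix: str) -> bool:
    """Permit a route outbound only if it falls inside authorized space."""
    net = ipaddress.ip_network(prefix)
    return any(net.subnet_of(allowed) for allowed in ALLOWED)

# A Google prefix (8.8.8.0/24 holds Google's public DNS) is outside the
# authorized space, so a working filter would refuse to announce it.
print(should_announce("8.8.8.0/24"))      # rejected
print(should_announce("196.216.5.0/24"))  # permitted
```

A filter like this is exactly what MainOne says was misconfigured during its network upgrade: with the check disabled or wrong, internal routes to Google's prefixes escaped to its peers.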
The way the Internet routes traffic is based on a protocol developed in the 1980s with no security embedded, and it is long overdue for a revamp. This latest episode shows how a single node in the network can functionally take everything down with one simple mistake.
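One proposed remedy is RPKI route-origin validation, in which a Route Origin Authorization (ROA) cryptographically binds a prefix to the ASN allowed to originate it. The sketch below shows the validation logic only, with a toy one-entry ROA table standing in for the real RPKI system.

```python
import ipaddress

# Toy ROA table: prefix -> ASN authorized to originate it. In reality
# these bindings come from the cryptographically signed RPKI.
ROAS = {
    ipaddress.ip_network("8.8.8.0/24"): 15169,  # Google
}

def validate_origin(prefix: str, origin_asn: int) -> str:
    """Classify an announcement as valid, invalid, or unknown."""
    net = ipaddress.ip_network(prefix)
    for roa_net, roa_asn in ROAS.items():
        if net.subnet_of(roa_net):
            return "valid" if origin_asn == roa_asn else "invalid"
    return "unknown"  # no covering ROA

# An announcement of Google space from MainOne (AS37282) is flagged;
# the same prefix from Google (AS15169) checks out.
print(validate_origin("8.8.8.0/24", 37282))  # invalid
print(validate_origin("8.8.8.0/24", 15169))  # valid
```

Routers that drop "invalid" routes would have contained Monday's leak at the first validating network, rather than letting it propagate for 74 minutes.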
— Larry Loeb has written for many of the last century's major "dead tree" computer magazines, having been, among other things, a consulting editor for BYTE magazine and senior editor for the launch of WebWeek.