Early this morning, we were alerted to slowness on our network. We were able to access all of the servers, but access from some of our testing sites was slow. We contacted our data center and began working with them to troubleshoot the issue. We found that the bottleneck appeared to be at a link in Seattle and contacted that provider. They looked into it and told us they would have it fixed shortly. As time wore on, we learned it was a Denial of Service attack on some of the routers our traffic travels through in Seattle.
The data center eventually routed us around the issue, later than we would have liked because of the assurances that it was going to be resolved, and Smile service returned to normal.
Later in the day, we received this notice from the upstream provider:
At 05:00 PDT a DDoS with a high volume of small UDP packets targeted at one customer's host began ramping up. By 05:30 the attack was in full force, but the impact to our network lagged behind as our normal daily traffic cycle began its increase.
The attack caused packet buffer overflows on our interfaces (router interfaces were throwing away good and bad packets). At 08:30 we applied filters on our border, which helped stabilize our core and decreased the impact of the attack, but the interfaces to our transit providers and peerings were still discarding packets. Customers would have seen latency and loss on many of our connections to transit providers and peers.
At 10:15 PDT we contacted all 5 of our providers and had them null route the target network, thus keeping the traffic from reaching our border routers. Traffic has returned to normal levels and is balanced as we'd expect.
This was not the result of a failure within our network, but rather a resource starvation issue on our interfaces due to the overwhelming number of small packets in this distributed denial of service attack.
Thank you for your patience as we worked to isolate and neutralize the impact to your service.
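For anyone wondering what "null routing" means: the attacked network's address block is pointed at a discard route, so the flood is dropped at the providers' borders instead of being carried toward the target. The provider above did this at the BGP level with all five of their transit providers; purely as an illustration of the idea, here is a minimal Python sketch of the equivalent on a single Linux router, using an assumed documentation prefix rather than the real target.

# Minimal sketch, assuming a Linux router with iproute2 and root privileges.
# The prefix is a documentation address standing in for the attacked
# customer's network, not the real target.
import subprocess

TARGET_PREFIX = "192.0.2.0/24"  # hypothetical attacked prefix

def null_route(prefix):
    # A "blackhole" route tells the kernel to silently discard any packet
    # destined for this prefix instead of forwarding it.
    subprocess.run(["ip", "route", "add", "blackhole", prefix], check=True)

def remove_null_route(prefix):
    # Withdraw the blackhole once the attack has subsided.
    subprocess.run(["ip", "route", "del", "blackhole", prefix], check=True)

if __name__ == "__main__":
    null_route(TARGET_PREFIX)

The trade-off is the same one the notice describes: a null route protects everyone else's traffic, but it does so by dropping everything destined for the targeted network, which is why it is used as a last resort once filtering alone can't keep up.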