Distributed Denial of Service (DDoS) attacks have been a major concern for website owners for a while now. All types of sites, from small to big, have been taken down and kept offline because of them. Even over-provisioned servers can be knocked offline by relatively small DDoS attacks when hosting providers null route the targeted IP addresses, sometimes keeping them offline for days. Websites behind load balancers and cloud infrastructures are also susceptible, since very few of them are designed to handle DDoS attacks and the variety of ways they can happen.
DDoS Amplification
Every time we talk about DDoS, we have to mention the amplification effect. For a DDoS attack to be successful, the attacker has to be able to send more requests than the victim server can handle.
Most attackers leverage botnets consisting of compromised computers, allowing them to multiply their attack by the size of the botnet. One attacker can control 1,000 bots, which can then all be pointed at the victim. That's 1,000 bots vs. 1 server, making it much easier for the attacker to win.
The other aspect of amplification happens at the network layer, with spoofed requests. What if each computer in the botnet only needs to send 1 byte to get a 100-byte response? That's called a 100x amplification. When the request is spoofed (its source IP forged to be the victim's address), the reply goes back to the victim, not the attacker. The reflecting server processes 1 byte incoming plus 100 bytes outgoing, the attacker only pays for the 1 byte outgoing on their end, and the victim absorbs the full 100-byte reply for every spoofed request.
When we add it all together (thousands of bots, hundreds of requests per second each, and the network-layer amplification), you can see how hard it is for most servers to cope. A theoretically small botnet of 1,000 bots can easily generate close to 100Gbps when using the right amplification method.
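To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The bot count, request rate, request size, and amplification factor are illustrative assumptions, not measured values; with a higher-yield reflector such as NTP, the result climbs toward the 100Gbps figure above.

```python
# Back-of-the-envelope reflection/amplification math (all numbers assumed).
BOTS = 1_000            # bots in the botnet (assumption)
REQS_PER_SEC = 100      # spoofed requests each bot sends per second (assumption)
REQUEST_BYTES = 60      # size of one spoofed request in bytes (assumption)
AMPLIFICATION = 200     # response bytes per request byte (assumption)

# Traffic the victim absorbs: every spoofed request comes back amplified.
victim_bps = BOTS * REQS_PER_SEC * REQUEST_BYTES * AMPLIFICATION * 8

# Traffic the attacker actually pays for: only the small spoofed requests.
attacker_bps = BOTS * REQS_PER_SEC * REQUEST_BYTES * 8

print(f"victim absorbs:  {victim_bps / 1e9:.1f} Gbps")   # ~9.6 Gbps
print(f"attacker spends: {attacker_bps / 1e6:.1f} Mbps") # ~48.0 Mbps
```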
Amplification does not stop there. Most people tend to think of it only in terms of gigabits per second of network traffic, but there is also amplification happening at the application layer. What if, with just 1 HTTP request from the botnet (which is very cheap to send), the attacker can force a web application to do a lot of work, like an expensive search or something else that consumes lots of resources? That's the basis of many of the Layer 7 (HTTP flood) attacks that we see.
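As a hedged illustration of that pattern, here is a deliberately naive search endpoint. Flask, the `site.db` database, and the `posts` table are our own example scaffolding, not taken from any real attack. The point is that a one-line GET costs the client almost nothing but forces the server into an unindexed full-table scan on every hit.

```python
# Illustrative only: a naive endpoint where one cheap GET forces expensive work.
# The database file and table are hypothetical scaffolding for the example.
import sqlite3
from flask import Flask, request

app = Flask(__name__)

@app.route("/search")
def search():
    term = request.args.get("q", "")
    db = sqlite3.connect("site.db")  # hypothetical CMS database
    # The leading wildcard defeats any index, so every request is a full scan.
    rows = db.execute(
        "SELECT title FROM posts WHERE body LIKE ?", (f"%{term}%",)
    ).fetchall()
    db.close()
    return {"results": [row[0] for row in rows]}
```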
Network-Layer DDoS
Network-layer DDoS attacks are how we classify the multitude of attacks that try to overwhelm your network stack by sending either more packets than your server can handle or more bandwidth than your network ports can handle.
We classify SYN floods, ACK floods, and UDP-based amplification attacks (including DNS, SSDP, NTP, etc.) all as network-layer DDoS attacks. Based on our internal data, close to 50% of all attacks fit in this category; the other 50% fall into the application-layer attack category.
Network-layer attacks are the ones we most often read about in the media, since they can generate very high bandwidth and are blamed for the disruption of service on major sites. Arbor, in their 2015/Q1 Global DDoS Attack Trends Report, disclosed that 17% of all attacks they handled were bigger than 1Gbps, and the average attack was 804Mbps / 272K pps. The biggest ones peaked at 335Gbps.
Now, when you consider that most cloud VPS servers can barely handle 100,000 packets per second, you can see how even the smaller attacks can take most servers down. Hosting providers like Linode, SoftLayer, and even Amazon will null route your server's IP for hours if they detect even a small DDoS against your server.
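For readers who want to watch for this on their own servers, here is a minimal SYN-rate monitor sketch. It assumes the scapy library and root privileges for packet capture, and the alert threshold is purely illustrative; tune it to what your stack can actually absorb.

```python
# Minimal SYN-rate monitor sketch (assumes scapy is installed; needs root).
import time
from scapy.all import sniff

WINDOW = 1.0        # seconds per measurement window
ALERT_PPS = 10_000  # alert threshold in SYN packets/second (assumption)

count = 0
start = time.time()

def on_syn(pkt):
    global count, start
    count += 1
    now = time.time()
    if now - start >= WINDOW:
        pps = count / (now - start)
        if pps > ALERT_PPS:
            print(f"possible SYN flood: {pps:,.0f} SYN/s")
        count, start = 0, now

# BPF filter matches pure SYNs (SYN set, ACK clear), i.e. connection attempts.
sniff(filter="tcp[tcpflags] & tcp-syn != 0 and tcp[tcpflags] & tcp-ack == 0",
      prn=on_syn, store=False)
```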
Application-Layer DDoS (HTTP Floods)
Application-layer attacks are the category we really want to focus on in this post. They can be silent and small, especially when compared to network-layer attacks, but what many fail to realize is that they can be just as disruptive.
A small VPS on Linode, Digital Ocean or AWS (Amazon) can easily handle a 100,000 to 200,000 packets per second SYN flood. However, the same server, running a WordPress or Joomla CMS can barely break 500 HTTP requests per second without going down. See the difference?
Application-layer attacks generally require far fewer packets and much less bandwidth to achieve the same goal: taking down a site.
The reason is that these attacks focus on the web application layer, where serving a single page generally means hitting the web server, running PHP scripts, and querying the database. When you think about the amplification effect we discussed before, one HTTP request that is very cheap to execute on the client side can force the server to run a large number of internal requests and load numerous files just to build the page.
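A quick back-of-the-envelope calculation shows why even 500 requests per second is enough. The per-page query and file counts below are rough assumptions for an uncached CMS page, not measurements:

```python
# Illustrative cost of an HTTP flood against an uncached CMS (numbers assumed).
QUERIES_PER_PAGE = 30   # database queries per uncached page load (assumption)
FILES_PER_PAGE = 100    # PHP files/templates/plugins loaded per page (assumption)
ATTACK_RPS = 500        # the modest HTTP flood rate from the text

print(f"database queries/s: {ATTACK_RPS * QUERIES_PER_PAGE:,}")  # 15,000
print(f"file loads/s:       {ATTACK_RPS * FILES_PER_PAGE:,}")    # 50,000
```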
In an application-layer attack, the amplification is CPU-, memory-, or resource-based, not network-based.
Layer 7 Categorization
We categorize the HTTP Floods (Layer 7 DDoS attempts) into 4 major categories:
- Basic HTTP Floods: Common and simple attacks that try to access the same page over and over. They generally use the same range of IP addresses, user agents and referrers.
- Randomized HTTP Floods: Complex attacks that leverage a large pool of IP addresses and randomize the URLs, user agents, and referrers used.
- Cache-bypass HTTP Floods: A sub-category of the randomized HTTP Floods that also try to bypass web application caching.
- WordPress XMLRPC Floods: A sub-category that uses the WordPress pingback feature as a reflector for the attack.
We see each of these very often, with the majority being randomized HTTP floods, followed by cache-bypass floods. These two are also the most dangerous for web applications, since they force the application to do the most work per request. A simple way to spot them in your access logs is sketched below.
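Here is a minimal log-analysis sketch for flagging randomized and cache-bypass flood candidates. The log path, regex, and thresholds are assumptions for a standard combined-format Apache/Nginx log; the signal is an IP making many requests where nearly every URL is unique, which is typical of randomized query strings and unlike normal browsing.

```python
# Sketch: flag cache-bypass/randomized flood candidates in a combined log.
# Log path, regex, and thresholds are assumptions; tune them to your traffic.
import re
from collections import defaultdict

LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+)')

hits = defaultdict(list)
with open("access.log") as f:          # path is an assumption
    for line in f:
        m = LINE.match(line)
        if m:
            ip, url = m.groups()
            hits[ip].append(url)

for ip, urls in hits.items():
    # Many requests, nearly all unique URLs: likely randomized/cache-bypass.
    if len(urls) > 1000 and len(set(urls)) / len(urls) > 0.9:
        print(f"{ip}: {len(urls)} requests, {len(set(urls))} unique URLs")
```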
Application-Layer Attack Sizes
Over the last 6 months, we have been categorizing all HTTP floods that we see in the wild. Any attack that lasts more than 30 minutes and generates at least 1,000 HTTP requests per second gets cataloged. Here is what we are seeing so far this year:
I. Attack duration
On average, the HTTP floods lasted 34 hours, with the longest one running for a continuous 71 hours. That means more than a day offline for most targeted websites. We also see a lot of very short attacks, lasting 5-15 minutes, but those are not added to the stats.
II. Attack size
On average, the attacks generate 7,282 HTTP requests per second. That number is skewed by some very large attacks, so the median probably gives a better picture: 2,822 HTTP requests per second.
The largest attack peaked at 49,795 HTTP requests per second.
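To see why the median is more representative here, consider this tiny illustration using Python's statistics module. The sample values are invented for the example, loosely anchored to the median and peak reported above:

```python
# Why we report the median: a few huge attacks drag the mean upward.
from statistics import mean, median

# Hypothetical attack sizes in HTTP requests/second (illustrative data only).
attacks = [1_200, 1_800, 2_500, 2_822, 3_100, 4_000, 49_795]

print(f"mean:   {mean(attacks):,.0f} req/s")    # ~9,317, skewed by the outlier
print(f"median: {median(attacks):,.0f} req/s")  # 2,822, the "typical" attack
```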
III. Botnet Size
Most attackers leverage botnets for their attacks. One upside of Layer 7 attacks coming from botnets is that we can see the real IP addresses being used, since they can't be spoofed: completing an HTTP request requires a full TCP handshake.
On average, the attackers' botnets leveraged 11,634 different IP addresses, with a median of 2,274 IP addresses.
The largest botnet used 89,158 different IP addresses.
IV. Victim Profiles
We won't dive much into the profiles of the victims, but overall the largest share fell into four major business categories, and the rest were divided among hosting businesses, blogs, social sites, and generic business sites. Most attacks seem to be revenge- or competition-driven, though we don't have hard data on that. Very few of them came with a ransom demand to stop the attack.
Conclusion
We hope we were able to provide some insight into the size and types of the application-layer (HTTP flood) attacks we are seeing in the wild, and to bring more attention to this type of threat.
If you have any additional questions, or would like to know something specific from our data, let us know.
6 comments
“A small VPS on Linode, Digital Ocean or AWS (Amazon) can easily handle a 100,000 to 200,000 packets per second SYN flood. However, the same server, running a WordPress or Joomla CMS can barely break 500 HTTP requests per second without going down.”
How did you come to this conclusion? What metrics? What HTTP server? In my experience, WordPress behind Varnish or Nginx caching can scale to 10,000+ requests per second on a minimal cloud instance.
Based on what we see out there. The majority of sites use the LAMP stack (Linux, Apache, MySQL, and PHP), and they really suffer under these numbers.
A DDoS attack would be directed at a sensitive page, like a login, so it would cost more resources. It's not just any request.
I'm also managing WP sites, one of which averages 5-7K visitors per second, with just a Linode 4GB droplet using Nginx + Redis cache.
He's referring to a SYN flood of 200K packets per second. His point is that an attack will cripple your systems, not whether those services perform well under normal load.
Yep, exactly. That was just an example 🙂