As website attacks continue to evolve, we see growing sophistication in how attackers expand the economics of their industry. The monetization potential of attacking websites continues to grow as more websites come online (currently estimated at over a billion live sites). With this evolution come new tactics that we’ve already seen in the wild, including a shift in focus from endpoints (i.e., website visitors and their devices) to the website owners themselves.
Attacks are shifting the target from the website as a distribution mechanism to the website owners themselves. We’ve seen it with the growth in ransomware targeting website owners, and now with Distributed Denial of Service (DDoS) attacks designed to disrupt a website’s availability.
Website Availability – The Growing Attack Vector
Availability is a critical part of any website’s existence. One could argue that if you’re unable to keep your web property up and running, why have one at all?
The severity of this notion varies greatly depending on how dependent your business is on having a website. To the business owner who depends on their online property – to represent their brand, connect with their customers, and possibly perform some form of online commerce – the idea of their website being inaccessible is unthinkable. The impact is obviously smaller for those who have a website simply because the market demands it, where uptime has no direct effect on the operations of the business. It all comes down to perspective.
In security, one of the core tenets is availability. It’s something that all website owners should be thinking about. For most, though, availability is treated as guaranteed, something as sure as the air we breathe – but it’s not. We pay for our hosting, we pay our developers, we employ the best security controls to ensure we don’t get hacked, and we invest great sums in our marketing. Everything is as it should be. Or is it?
Attackers are growing wise. As technologies continue to make advancements, platforms become more secure, website owners become more educated, and attack surfaces become more complicated – like water flowing downhill – attackers will find the path of least resistance. At some point, availability becomes the most obvious target!
The concept of attacking a website’s availability is not new. It has been around for a long time, and many organizations have teams implementing processes and controls to thwart these types of attacks. It is not, however, something many website owners have given much thought to. Now, instead of only worrying about an attacker successfully penetrating a website’s defenses, website owners must also worry about their websites being taken offline.
A perfect example of this evolution can be seen in the cybercriminal group DD4BC (DDoS for Bitcoin). While this group initially targeted the online gambling and financial industries, that was only the beginning of these types of attacks and extortion attempts. Most organizations are not at the scale of those industries, but there is still money to be made by attacking even the smallest website owners who depend solely on their online presence to sell their goods and services. There has already been a slow trickle of extortion campaigns in which website owners are threatened with disrupted service unless a ransom is paid.
Now put yourself in this position. Think of your website and the value it offers your organization. What would it mean to you, your business, if it was offline for a day, a week, a month? What if someone were able to hold its availability hostage?
Denial of Service Attacks
As we move through the latter part of 2016 and into 2017, availability will come to the forefront of discussions with website owners. In the various groups I engage with, I see it being discussed more and more, yet there is a fundamental lack of understanding about what they’re dealing with and how to address it.
It’s not something that many website owners, or their hosts, are prepared to address. Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks are threats that website owners must familiarize themselves with as they will become a critical piece of the security landscape in the months and years to come:
In a denial-of-service (DoS) attack, an attacker attempts to prevent legitimate users from accessing information or services. – US-CERT
These attacks have become easy to launch with the proliferation of the DDoS-for-hire market (a.k.a. booter services). They have often been used in targeted attacks, with an emphasis on large organizations. You might also be thinking of the massive 200+ Gigabits per second (Gbps) attacks from 2014, but that’s not what I’m most concerned about in this post. What was once an anomaly is quickly becoming the norm.
My real focus is on attacks designed specifically to target the web applications themselves. The attacks you’ve heard of in the past have predominantly been those that target the network, or Layers 3 and 4 of the Open Systems Interconnection (OSI) model. This is something you would expect your hosting provider to address via their infrastructure and network-based firewalls. Unfortunately, they too struggle with these attacks. Volumetric attacks are a fight for pipe, or bandwidth, and most hosts are not designed or configured to mitigate them.
Side Note: The OSI model defines 7 conceptual layers. More on that another day.
The real problem for website owners, however, is the growth in application-focused attacks, or Layer 7 HTTP floods. These attacks are measured in Requests Per Second (RPS) and focus on resource exhaustion at the web server, while network attacks are measured in Bits Per Second (bps) or Packets Per Second (PPS). With application-focused attacks, the volume is significantly lower: instead of an attack measured at 100 Gbps, you might see a website crash after 1,000 RPS. Worse yet, with most of the popular hosts out there you’ll find that if you threaten the availability of a server you’re sharing, you’re likely to get null-routed (disabled) by the host before the server itself fails.
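To make the math concrete, here is a rough back-of-the-envelope sketch in Python. The worker count and per-request time are purely illustrative assumptions, not measurements from any particular host, but they show why a flood of only a few hundred or thousand RPS can exhaust a server long before bandwidth becomes the bottleneck:

```python
# Back-of-the-envelope sketch: why a modest Layer 7 flood exhausts a server.
# WORKERS and AVG_REQUEST_TIME are illustrative assumptions only.

WORKERS = 25              # e.g., PHP/Apache worker processes on a small host (assumed)
AVG_REQUEST_TIME = 0.5    # seconds of server work per dynamic request (assumed)

# Maximum requests per second the server can actually finish
capacity_rps = WORKERS / AVG_REQUEST_TIME   # 25 / 0.5 = 50 RPS

attack_rps = 1_000        # a relatively small application-layer flood

# Every second, this many requests arrive that the server cannot serve;
# they pile up until connections time out or the host null-routes the site.
excess_per_second = attack_rps - capacity_rps

print(f"Sustainable load: ~{capacity_rps:.0f} RPS")
print(f"A {attack_rps} RPS flood leaves ~{excess_per_second:.0f} requests/second unserved")
```

In other words, the attacker never needs to saturate the network pipe; they only need to keep every worker busy.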
Implications for Website Owners
Website owners will struggle to understand the intricate details of these attacks. Not that they need to, but the expectation will be that the security controls they’ve implemented will adequately address the problem. Unfortunately, they won’t.
Maybe they log in one day to find their website won’t load, or they’re greeted with an error message.
This could be disastrous to a blogger whose livelihood depends on content distribution or ad revenue. Think of yourself as a business owner who depends on online commerce (ecommerce). How would you be prepared to handle this? The answer, “My host will take care of this,” is incorrect.
Most hosts are ill-prepared to address the problem of application-based attacks. Nor is this something that will be solved at the application layer. In fact, given the resource-intensive nature of these attacks and the constraints of the typical hosting ecosystem, any application security tool that tries to thwart them will likely become part of the problem, because filtering each request still consumes local server resources.
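To illustrate why, here is a minimal sketch of a per-IP rate limiter of the kind an application-level plugin might implement; the function name and thresholds are hypothetical, not taken from any specific product. Notice that the check only runs after the request has already reached the application, so a connection, a worker, and memory have been consumed whether the request is blocked or not:

```python
import time
from collections import defaultdict, deque

# Hypothetical per-IP sliding-window rate limiter (illustrative thresholds).
WINDOW_SECONDS = 1
MAX_REQUESTS_PER_WINDOW = 10

_request_log = defaultdict(deque)  # ip -> timestamps of recent requests

def allow_request(ip: str) -> bool:
    """Return True if this IP is under the limit, False if it should be blocked.

    By the time this runs, the request has already tied up a connection, a
    worker, and memory on the server -- which is why application-layer
    filtering alone struggles against HTTP floods.
    """
    now = time.time()
    window = _request_log[ip]
    # Drop timestamps that have fallen outside the window
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_WINDOW:
        return False
    window.append(now)
    return True
```

Blocking traffic before it reaches the server, rather than inside the application, is the design problem this kind of tool cannot solve on its own.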
This will be especially challenging for shared hosting accounts, where an attack on another site on the same server forces the entire server to be disabled, inadvertently affecting other websites.
8 comments
So what’s the solution!? I feel like your article ended at the halfway point!
That’s a good point, I should go back and update it.
In my opinion, the answer will be cloud-based solutions that extend beyond the hosting environment.
Ah okay. I’d love to hear some of the reasons why cloud-based solutions could help with this problem. I look forward to an update! Thanks so much Tony.
That’s a good idea; you could start by taking a look at this article: https://blog.sucuri.net/2016/04/ask-sucuri-differentiate-security-firewalls.html
I second that Tony. Thank you!
Good article Tony, I too would like to hear more about the cloud-based solutions. Thank you
How come internet providers are not able to recognize DDoS attacks coming from their networks/customers? If we could make internet providers cut off DDoS attacks at the source, we would have fewer attacks and save more bandwidth.
Currently there is not much accountability in the DDoS attack chain. Users with zombie machines are oblivious to the unknown network traffic coming from their machines; they don’t know what happened or why, and no one even notifies them about what they caused. Internet providers only care to throttle or shut down your connection if you have torrent or some other piracy-like traffic. Anarchy.
The problem lies in how DDoS botnets work. If the attackers have 50,000 devices in their botnet, each device only needs to send 1 HTTP request per second to create a massive 50,000 RPS attack on the victim.
That’s why ISPs have a very hard time differentiating and identifying whether a device is being used to DDoS others.
On the other hand, if they blocked outbound traffic with spoofed IP addresses, it would solve a lot of issues, making it harder for attackers to hide behind spoofed addresses.