This post is about how hackers abuse popular web services, and how this helps security researchers obtain interesting statistics about malware attacks.
We at Sucuri work with infected websites every day. While we see a particular infection on one site or on many sites, we can’t accurately tell how many more sites out there are infected, or how many people have been exposed to that malware. All we can do is estimate.
Most estimates are based on data that can’t provide the whole picture: the number of detections in our SiteCheck scanner, the number of cleanup requests, or the number of posts about a particular problem in webmaster forums. This only helps tell whether a problem is “major” or “minor.”
Like any other firm out there, we can sometimes make good educated estimates. For example, we can run specific Google searches that reveal the number of sites containing some text or URL specific to a particular attack. Similarly, if an attack uses one specific URL (or a few well-known URLs), Google Safe Browsing reports also help estimate the number of infected sites. These Google-based approaches are more precise, but they don’t work for most attacks, which frequently change domains and leave no artifacts that can be found in search results.
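As a rough illustration of the second approach, the Google Safe Browsing Lookup API (v4) lets you check whether specific URLs are currently flagged. The sketch below only builds the JSON request body for the `threatMatches:find` endpoint; the client ID and the URL are hypothetical placeholders, and an actual lookup would require an API key and an HTTPS POST to `https://safebrowsing.googleapis.com/v4/threatMatches:find`.

```python
import json

def build_lookup_request(urls):
    """Build the JSON body for a Safe Browsing v4 threatMatches:find lookup.

    `urls` is a list of suspicious URLs (e.g. ones used by a known attack).
    """
    return {
        # clientId/clientVersion identify your tool; these values are made up
        "client": {"clientId": "example-research-client", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }

# Hypothetical attack URL used only for demonstration
body = build_lookup_request(["http://malicious.example/inject.js"])
print(json.dumps(body, indent=2))
```

Note that this only answers “is this URL flagged right now?” for URLs you already know; it doesn’t enumerate infected sites, which is why it works only for attacks with a small, stable set of malicious URLs.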
If security researchers are quite lucky, they might find an attacker’s unprotected (or poorly protected) control panel that contains all the statistics about infected sites, clicks, exploits, etc.
This post is about a different and quite unusual way of obtaining data about the activity of a server-level attack that is known for being hard to detect and track.