Web Server Attacks – Apache Modules, Log Management and RELM

New year, same tricks, mostly because they still work. That's how we're kicking off the new year, folks.

In September of 2012, Denis of Unmask Parasites first wrote about rogue Apache modules being injected into web servers. It has since been all the rage. It seems every week we're handling more and more cases, from private servers to large enterprises, impacted by the same issue. As for the vector, in a good number of instances it comes down to access, and in others to vulnerabilities in software such as Plesk.

What we have started to see is an evolution in these attacks. In one case we saw two modules injected into the server: one was legitimate but was referencing another, illegitimate module. Normal tactics failed to disclose its location, yet monitoring the server's traffic with tools like tcpdump showed the infection was still present. We briefly wrote about some of these evolutions in a recent post, in which we articulate some of the things we are seeing. Fortunately, a lot of this comes down to the basics: knowing what your servers are running and what they are designed to do.
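As a toy illustration of that kind of traffic check (the file name and patterns below are made up for the example), you can save a response body, e.g. from a curl request or a tcpdump -A capture, and grep it for the kind of markup these rogue modules typically append:

```shell
# Stand-in for a captured HTTP response body; a rogue module usually
# appends an iframe or obfuscated script to otherwise clean pages.
printf '<html><body>ok</body>\n<iframe src="http://bad.example/x"></iframe>\n' > /tmp/body.html

# Count lines containing common injection tells (patterns are illustrative).
grep -c -iE '<iframe|unescape\(|eval\(' /tmp/body.html
```

Comparing what the server actually sends on the wire against what sits on disk is exactly how infections like the one above get confirmed when file-level inspection comes up empty.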

It's for this reason that we're pleading with organizations to apply better practices when managing their web servers. These servers sit between you, your environment, and your followers. They are prime targets, yet less and less focus is being placed on them.

Things you need to be doing:

  • Monitor your httpd.conf file (e.g., /etc/httpd/conf/httpd.conf)
  • Check the modules being loaded in your modules directory
  • Become vigilant with your logs
  • Practice the art of isolation
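The first two items boil down to comparing the running state against a known-good baseline. A minimal sketch of the idea, where every path is a stand-in chosen for illustration (on a real RHEL-style box the config lives at /etc/httpd/conf/httpd.conf):

```shell
# Record a known-good checksum of the config once (paths are stand-ins).
mkdir -p /tmp/baseline
printf 'LoadModule ssl_module modules/mod_ssl.so\n' > /tmp/httpd.conf  # stand-in config
sha256sum /tmp/httpd.conf > /tmp/baseline/httpd.conf.sha256            # baseline taken once

# Later, e.g. from cron: compare against the baseline and alert on drift.
if sha256sum -c /tmp/baseline/httpd.conf.sha256 >/dev/null 2>&1; then
    echo "httpd.conf unchanged"
else
    echo "ALERT: httpd.conf changed"
fi
# The same diff-against-baseline trick works for the loaded-module list,
# e.g. the output of "apachectl -M" or an ls of the modules directory.
```

If you run OSSEC, its syscheck does essentially this for you once httpd.conf and the modules directory are listed in its monitored directories.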



The Importance of logging for web applications – Security talk

If you think that your logs are only useful when something crashes or when you need to troubleshoot errors on your web application, think again!

At our Sucuri Labs, we have multiple online tools, and we have good logging on all of them. We log not only errors but also successful requests. For example, on our application that retrieves the real URL behind a shortened one, this is what it looks like when someone uses it:

2010-03-04 05:56:54 [srcip] Check URL for http://bit.ly/XYZ.
2010-03-04 05:57:01 [srcip] Check URL for http://bit.ly/ABC.

Yes, that gets logged in our internal success log. When something fails, or someone gives us an invalid URL, this is what it looks like:

2010-03-04 06:45:37 [srcip] Check URL: Invalid domain name 'google'..

That gives us an overview of what our users are doing and which mistakes they make most often. In this case, the user tried the domain "google", without the .com at the end.

That's very useful from a usability standpoint, but from a security perspective, logs can be much more useful. We use those web application logs for at least three things:

  1. Detect attacks
  2. Detect application misuse
  3. Detect errors (loss of availability)

1 – Detecting attacks

Users are curious. Most of them are not malicious, but they certainly like to play around and look for vulnerabilities. We noticed one user who tried our web scanner against his own web site:

2010-02-21 06:52:14 115.49.x.y scanner: Site: www.xx.it

A few minutes later, he started to poke around looking for SQL injections:

2010-02-21 06:52:34 115.49.x.y scanner: Invalid site: www.xx.it’`([{^~'
2010-02-21 06:52:41 115.49.x.y scanner: Invalid site: www.xx.it aND 8=8'
2010-02-21 06:52:41 115.49.x.y scanner: Invalid site: www.xx.it aND 8=3'
2010-02-21 06:52:49 115.49.x.y scanner: Invalid site: www.xx.it' aND '8'='8'
2010-02-21 06:52:57 115.49.x.y scanner: Invalid site: www.xx.it' aND '8'='8'
2010-02-21 06:53:09 115.49.x.y scanner: Invalid site: www.xx.it/**/aND/**/8=8'
2010-02-21 06:53:35 115.49.x.y scanner: Invalid site: www.xx.it%' aND '8%'='8'
2010-02-21 06:53:48 115.49.x.y scanner: Invalid site: www.xx.it XoR 8=8'

He was then blocked automatically by our system. Without those logs, he could have kept trying forever and we would never have noticed. Our application was safe against these attempts, but why let an attacker play around? To block those attacks, we use OSSEC with its active response, which blocks an attacker after 10 invalid attempts.

This is what our OSSEC rules look like:


<!-- Markup reconstructed; the second rule's id and its frequency/timeframe
     attributes were lost in extraction and are illustrative. -->
<rule id="100906" level="3">
  <match>Invalid site:</match>
  <description>User provided an invalid site.</description>
</rule>

<rule id="100907" level="10" frequency="10" timeframe="120">
  <if_matched_sid>100906</if_matched_sid>
  <description>Warning: Multiple invalid sites provided.</description>
</rule>

The first rule generates a low-level event for each invalid site provided, and the second one blocks and generates an alert if it happens more than 10 times from the same source IP.
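The blocking itself is configured in the active-response section of ossec.conf. A minimal sketch, assuming the stock firewall-drop responder that ships with OSSEC (the level and timeout values here are illustrative, not taken from our production config):

```xml
<active-response>
  <!-- firewall-drop ships with OSSEC and drops the offending IP at the
       firewall; timeout is in seconds and is an illustrative value -->
  <command>firewall-drop</command>
  <location>local</location>
  <level>10</level>
  <timeout>600</timeout>
</active-response>
```

With this in place, any level-10 alert, including the "Multiple invalid sites" rule above, triggers the block without anyone having to watch the logs in real time.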

2 - Detecting application misuse

Detecting application misuse is very similar to detecting attacks, except that in this case the user is not trying to hack us, but to use our application in ways we don't allow. For example:

2010-02-24 07:22:50 129.21.a.b scanner: Invalid site: 'site:22'..
2010-02-24 07:30:47 129.21.a.b scanner: Invalid site: 'site:25'..

Instead of giving us a valid domain, the user was trying to give us a port to scan (in this case 22 for ssh and 25 for smtp). We were safe against this, but it raises our awareness of possible ways to misuse our application.

So, why not add some good logs to your application? We didn't go over detecting errors, because everyone does that already, but you need to log successful requests and invalid user input too! A simple write_log function called before you print any error back to the user should do it. For example:


function write_log($msg)
{
    // Log file lives outside the web root so it can't be fetched over HTTP.
    $INT_LOGFILE = "/var/logs/myapp/myapp.log";
    if ($handle = fopen($INT_LOGFILE, 'a'))
    {
        // Timestamp, client IP, then the message itself.
        fwrite($handle, date('Y-m-d h:i:s ') . $_SERVER['REMOTE_ADDR'] . " " . $msg . ".\n");
        fclose($handle);
    }
}

Just remember to log outside your web directory! You don't want anyone else accessing your logs!
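One way to enforce that on the filesystem side (the directory below is a stand-in matching the path in the snippet above): keep the log directory outside the document root and strip world access so only the web server's user and group can read it.

```shell
# Stand-in for /var/logs/myapp; in production, chown this to the web
# server's user as well.
mkdir -p /tmp/myapp-logs
chmod 750 /tmp/myapp-logs            # owner rwx, group rx, others: nothing

touch /tmp/myapp-logs/myapp.log
chmod 640 /tmp/myapp-logs/myapp.log  # others cannot read the log

stat -c '%a' /tmp/myapp-logs/myapp.log
```

Even if an attacker finds the path, a log outside the web directory with no world-read bit cannot be pulled down over HTTP or read by other local users.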

VMware insecure file creation

If you are using the free VMware Server on Linux, beware that the installer creates files with insecure permissions, allowing any user to modify them.

I downloaded the latest VMware server (VMware-server-2.0.2-203138.i386) and followed the step-by-step installation script. After it was completed, OSSEC (always to the rescue) sent me a bunch of alerts about new insecure files:

File '/usr/lib/vmware/hostd/docroot/print.css' is owned by root and has written permissions to anyone.
File '/usr/lib/vmware/hostd/docroot/client/clients.xml' is owned by root and has written permissions to anyone.
File '/usr/lib/vmware/hostd/docroot/sdk/vim.wsdl' is owned by root and has written permissions to anyone.
File '/usr/lib/vmware/hostd/docroot/sdk/vimService.wsdl' is owned by root and has written permissions to anyone.
File '/usr/lib/vmware/hostd/docroot/sdk/vimServiceVersions.xml' is owned by root and has written permissions to anyone.
File '/usr/lib/vmware/hostd/docroot/error-32x32.png' is owned by root and has written permissions to anyone.

And these are just some of them. Everything under /usr/lib/vmware was created with 777 permissions (open for anyone to read and modify), including the vmware-server-distrib and other directories.

So, if you run VMware on a system where someone else has normal user access, you might want to run "chmod -R o-rwx /usr/lib/vmware" to avoid problems.
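A quick way to audit before and after the fix, demonstrated here on a stand-in directory (on a real system, point find at /usr/lib/vmware itself):

```shell
# Reproduce the bad install state on a throwaway directory.
mkdir -p /tmp/vmware-demo
touch /tmp/vmware-demo/print.css
chmod 777 /tmp/vmware-demo/print.css       # world-writable, like the installer left it

# Audit: list everything world-writable under the tree.
find /tmp/vmware-demo -perm -o+w

# The fix from above, then re-audit; the count should drop to zero.
chmod -R o-rwx /tmp/vmware-demo
find /tmp/vmware-demo -perm -o+w | wc -l
```

Running the audit half from cron (or letting OSSEC's syscheck flag it, as it did here) catches the next package that ships with the same mistake.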

*Just verified on another system, with the same effect. Tried on Ubuntu 9.04 and CentOS 5.3.
*My umask is set properly to 0022.

Process monitoring with OSSEC

OSSEC v2.3 was just released, and one feature that really interested me was process monitoring. Here is what the OSSEC team says about it:

“We love logs. Inside OSSEC we treat everything as if it was a log and parse it appropriately with our rules. However, some information is not available in log files but we still want to monitor them. To solve that gap, we added the ability to monitor the output of commands via OSSEC and treat those just like they were log files.”

Basically, it allows you to monitor the output of any command and generate alerts/active responses from them.

Cool, let's try it out. First, let's monitor the output of "/etc/init.d/httpd status" to receive alerts if Apache ever goes down. I added the following command entry to my ossec.conf and the following rule to my local_rules.xml:


<!-- ossec.conf entry (markup reconstructed; the tags were lost in extraction) -->
<localfile>
  <log_format>command</log_format>
  <command>/etc/init.d/httpd status</command>
</localfile>

<!-- local_rules.xml entry -->
<rule id="100200" level="10">
  <if_sid>530</if_sid>
  <match>ossec: output: '/etc/init.d/httpd status': httpd is stopped</match>
  <description>Apache STOPPED.</description>
</rule>

Now, if I manually stop Apache to test it, within a few seconds I get this via email:

2009 Dec 08 10:45:04 (sucuri) xx->/etc/init.d/httpd status
Rule: 100200 (level 10) -> 'Apache STOPPED.'
Src IP: (none)
User: (none)
ossec: output: '/etc/init.d/httpd status': httpd is stopped

Perfect! Now I can have all my monitoring in just one tool… The next step is to create an active response to restart the service on failure.
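A sketch of what that active response could look like in ossec.conf. The command name "restart-httpd" and the restart-httpd.sh script (which would live in OSSEC's active-response/bin directory and simply run "/etc/init.d/httpd start") are hypothetical; rules_id ties the response to the rule that fired above:

```xml
<!-- "restart-httpd" and restart-httpd.sh are hypothetical names -->
<command>
  <name>restart-httpd</name>
  <executable>restart-httpd.sh</executable>
  <expect></expect>
</command>

<active-response>
  <command>restart-httpd</command>
  <location>local</location>
  <rules_id>100200</rules_id>
</active-response>
```

With that in place, the same check that emails the alert would also bring the service back up automatically.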

OSSEC online tests

We just added two tools to generate OSSEC rules online.