Archives for August 2009

Apache.org hacked?

Apache.org was down for a while this morning, and shortly after they released a note about the compromise:

This is a short overview of what happened on Friday, August 28 2009 to the Apache.org services. A more detailed post will come at a later time, after we complete the audit of all machines involved.

On August 27th, starting at about 18:00 UTC, an account used for automated backups for the ApacheCon website (hosted on a third-party hosting provider) was used to upload files to the main Apache server. The account was accessed using SSH key authentication from this host.

To the best of our knowledge at this time, no end users were affected by this incident, and the attackers were not able to escalate their privileges on any machines.
While we have no evidence that downloads were affected, users are always advised to check digital signatures where provided. The compromised host runs FreeBSD 7-STABLE and is more widely known as Minotaur; it serves as the seed host for most Apache websites, in addition to providing shell accounts for all Apache committers.

The attackers created several files in the directory containing the website files, including several CGI scripts. These files were then rsynced to our production webservers by automated processes. At about 07:00 on August 28 2009, the attackers accessed these CGI scripts over HTTP, which spawned processes on our production web servers.

At about 07:45 UTC we noticed these rogue processes on the Solaris 10 machine that normally serves our websites.

Within the next 10 minutes we decided to shut down all machines involved as a precaution.

After an initial investigation, we changed DNS for most services to point to an unaffected machine and provided a basic downtime message.

After investigation, we determined that our European failover and backup machine was not affected. While some files had been copied to the machine by automated rsync processes, none of them were executed on that host, and we restored all our websites from a ZFS snapshot taken before any accounts were compromised.

At this time several machines remain offline, but most user facing websites and services are now available.

We will provide more information as we can.

This proves what we have been saying for a long time:

  • No one is safe
  • Most compromises use simple methods, like password guessing
  • Monitoring is very important!

Mass defaced again

It seems that they have been mass defaced again. It is not the first time I have heard about them, but it seems they get hacked far too often.

My suggestion: Host your pages on private/dedicated servers (some are as cheap as 29,99 per month). I mean, if you care about your site/blog…

WordPress <= 2.8.3 Remote admin reset password

How do you annoy a WordPress admin? By changing his password without confirmation…

WordPress <= 2.8.3 Remote admin reset password

The way WordPress handles a password reset looks like this:
You submit your email address or username via this form:
/wp-login.php?action=lostpassword ;
WordPress then sends you a reset confirmation email like this:

Someone has asked to reset the password for the following site and username.
Username: admin
To reset your password visit the following address, otherwise just ignore
this email and nothing will happen


You click on the link, and then WordPress resets your admin password and sends you another email with your new credentials.
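Done right, that flow amounts to issuing an unguessable, single-use key and only resetting the password for a request that presents it. A minimal Python sketch of the intended flow (hypothetical names and store, not WordPress code):

```python
import secrets

def issue_reset_key(users, username):
    """Store an unguessable, single-use key and build the reset link."""
    key = secrets.token_urlsafe(32)              # ~256 bits of randomness
    users[username]["activation_key"] = key
    return f"https://example.com/wp-login.php?action=rp&key={key}"

def reset_password(users, username, key):
    """Only reset if the presented key matches the stored one."""
    stored = users[username].get("activation_key")
    if not stored or not secrets.compare_digest(stored, key):
        raise ValueError("invalid key")
    users[username]["activation_key"] = None     # single use
    users[username]["password"] = secrets.token_urlsafe(12)
    return users[username]["password"]
```

The key point is that the reset only happens when a matching, previously issued key is presented; a second attempt with the same key fails.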

Let’s see how it works:

line 186:

function reset_password($key) {
	global $wpdb;

	$key = preg_replace('/[^a-z0-9]/i', '', $key);

	if ( empty( $key ) )
		return new WP_Error('invalid_key', __('Invalid key'));

	$user = $wpdb->get_row($wpdb->prepare("SELECT * FROM $wpdb->users WHERE user_activation_key = %s", $key));
	if ( empty( $user ) )
		return new WP_Error('invalid_key', __('Invalid key'));
line 276:

$action = isset($_REQUEST['action']) ? $_REQUEST['action'] : 'login';
$errors = new WP_Error();

if ( isset($_GET['key']) )
	$action = 'resetpass';

// validate action so as to default to the login screen
if ( !in_array($action, array('logout', 'lostpassword', 'retrievepassword', 'resetpass', 'rp', 'register', 'login')) && false === has_filter('login_form_' . $action) )
	$action = 'login';

line 370:

case 'resetpass' :
case 'rp' :
	$errors = reset_password($_GET['key']);

	if ( ! is_wp_error($errors) ) {

…[snip]…

You can abuse the password reset function, bypass the first step, and then reset the admin password by submitting an array as the $key variable.

A web browser is sufficient to reproduce this proof of concept: the password will be reset without any confirmation.

An attacker could exploit this vulnerability to compromise the admin account of any WordPress/WordPress MU <= 2.8.3 installation.

The patch? Just a one-liner fix… The problem? They are still using blacklists instead of a whitelist of what should be accepted…
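For comparison, here is what a strict, type-aware whitelist check looks like, as a Python sketch (illustrative, not the actual WordPress patch). Anything that is not a non-empty alphanumeric string is rejected before the database is ever touched, so an array or any other unexpected type never gets past the first check:

```python
def validate_reset_key(key):
    """Whitelist validation: only a non-empty alphanumeric string passes.

    The vulnerable code took the reverse approach: a regex *stripped*
    disallowed characters out of $key, so a non-string value (an array
    from key[]= in the query string) could slip through and end up
    matching users with a blank activation key.
    """
    if not isinstance(key, str):          # arrays/lists fail here
        raise ValueError("invalid key")
    if not key or not key.isalnum():      # empty or non-alphanumeric fails
        raise ValueError("invalid key")
    return key
```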

Curiosity killed the cat

Over the last few months we have been releasing numerous free online tools at Sucuri: one to scan web sites for security issues, another to text-browse sites, one to check if a Twitter account is a spammer, and a few more.

Some have been more popular than others, but one thing I noticed in all of them: users were always testing our security. Always.

I am not saying that users were in fact trying to hack us, but that they were just curious to see how we responded to “different” (malicious) inputs. Plus, being a site focused on web security probably increased their interest.

One thing to note is that we developed our application from scratch and took the following precautions:

  1. We logged everything. You have no idea what kind of errors you will run into, so we made sure to log every request and every error. Many of the attacks shown below would never have been noticed if we hadn't.
  2. We were very restrictive about what we accepted. We took the approach of running htmlspecialchars() on every single GET/POST variable even before processing it. After that, when validating the inputs, we whitelisted what we accepted instead of using less safe blacklists.
  3. We used OSSEC to parse our custom logs, generate alerts, and block the offending IP addresses automatically.
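Precautions 1 and 2 can be sketched together: escape first, then whitelist-validate, and log every rejection with the client IP so a log monitor such as OSSEC can act on it. A Python stand-in for what we did in PHP (the regex and names are illustrative, not our production code):

```python
import html
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("webtool")

# Whitelist what a "site" may look like, instead of trying to
# blacklist every dangerous pattern.
SITE_RE = re.compile(r"^https?://[A-Za-z0-9.-]+(/[\w./-]*)?$")

def check_site(raw, client_ip):
    # Escape the value even before validating it (precaution 2).
    site = html.escape(raw)
    if not SITE_RE.match(site):
        # Log every rejected request (precaution 1); OSSEC can then
        # alert on these lines and block repeat offenders (precaution 3).
        log.error("Check URL: Invalid user provided site: '%s' from %s",
                  site, client_ip)
        return None
    return site
```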

In another post I will cover these precautions in more detail.

Users and their curiosity

As soon as I released our first tool, to check small URLs, I noticed this in the logs:

2009-07-02 08:15:38 ERROR: Check URL: Invalid user provided site: ''.
2009-06-16 08:15:39 a.b.c.12 ERROR: Check URL: Invalid user provided site: '
2009-06-16 08:15:42 a.b.c.202 ERROR: Check URL: Invalid user provided site: ''.
2009-06-16 08:15:43 a.b.c.12 ERROR: Check URL: Invalid user provided site: '
2009-06-16 08:15:48 a.b.c.202 ERROR: Check URL: Invalid user provided site: 'http://
2009-06-16 08:15:49 a.b.c.12 ERROR: Check URL: Invalid user provided site: 'http://

Yes, a valid user, coming from my Twitter post, decided to check if we allowed raw HTML to be displayed. You can see that they started with a simple payload, followed by multiple combinations, to test how we were parsing the data.

A little later, a user checks their URL properly, visits some of our documents, and then tries:

2009-06-16 11:17:25 ERROR: Check URL: Invalid user provided site: '><">
2009-06-16 11:17:52 ERROR: Check URL: Invalid user provided site: 'bar'.
2009-06-16 11:18:04 ERROR: Check URL: Invalid user provided site: 'bar">,asdf'.
2009-06-16 11:18:54 ERROR: Check URL: Invalid user provided site: '">
2009-06-16 11:18:58 ERROR: Check URL: Invalid user provided site: '">
2009-06-16 11:19:14 ERROR: Check URL: Invalid user provided site: '">
2009-06-16 11:20:24 ERROR: Check URL: Invalid user provided site: '">
2009-06-16 11:21:26 ERROR: Check URL: Invalid user provided site: '">
2009-06-16 11:22:35 ERROR: Check URL: Invalid user provided site: '">

You can see, from the gaps in time, that he was thinking of ways to bypass our filters…

Another very interesting attack was against our text-browser tool. We allowed the user to choose Lynx, Links, or W3m to test whether their web site showed up properly in those tools. A few days after the release, one user tried to bypass our filtering to read our own PHP files:

2009-06-01 11:50:18 ERROR: Text-browser: Invalid user provided site: 'file://index.php'.
2009-06-01 11:51:03 ERROR: Text-browser: Invalid user provided site: 'file://../index.php'.

A very simple, targeted attack from someone who knows how Lynx works and knows it will follow file:// URLs to read local files. That's why, in our code, we checked for http:// and, if it wasn't present, we added it.
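That check can be as small as this, sketched in Python (our tool was PHP; the function name is illustrative): anything that does not already carry an http or https scheme gets http:// prepended, so a file:// URL turns into a harmless, unfetchable hostname instead of a local file read.

```python
from urllib.parse import urlparse

ALLOWED_SCHEMES = ("http", "https")

def normalize_site(raw):
    """Force user input into an http(s) URL before handing it to the browser.

    'file://index.php' becomes 'http://file://index.php' -- a dead
    lookup at worst, never a local file.
    """
    raw = raw.strip()
    if urlparse(raw).scheme not in ALLOWED_SCHEMES:
        return "http://" + raw
    return raw
```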

The last one for today was against our OSSEC log testing tool, where some people tried to trick bash into executing other commands on the system. These are some of the attacks they tried:

From: a.b.c.26
; ls -lh

From: x.174.169.117
ss; d
ss; d’^M
-v^M && ls ./

Nothing really big, you might say. That's because we blocked the attempts and returned an error to the user. If we had been vulnerable, I am 100% sure that they would have tried to hack us in some way. Not for malicious reasons, but for fun (I hope).
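The standard way to make input like "; ls -lh" harmless is to never hand user input to a shell at all: whitelist the tool name and pass arguments as a list, so the operating system receives the input as data rather than shell syntax. A Python sketch of the idea (our tool was not written in Python; names are illustrative):

```python
# Whitelist of the tools the text-browser service may run,
# each with its fixed arguments.
ALLOWED_TOOLS = {
    "lynx":  ["lynx", "-dump"],
    "links": ["links", "-dump"],
    "w3m":   ["w3m", "-dump"],
}

def build_command(tool, url):
    """Return an argv list safe to pass to subprocess.run(..., shell=False)."""
    if tool not in ALLOWED_TOOLS:
        raise ValueError("unknown tool")
    # url becomes a single argv element: "; ls -lh" arrives as literal
    # text in one argument, never interpreted by a shell.
    return ALLOWED_TOOLS[tool] + [url]

# Usage: subprocess.run(build_command("lynx", user_url), shell=False, timeout=10)
```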

To conclude: make sure you are watching your logs and monitoring what your users are doing! If your application doesn't have proper logging, make sure to add it as soon as possible.

Twitter is down, productivity is up

Twitter has been down for more than an hour today, and I suddenly noticed increased productivity among my peers… any correlation?

Maybe that’s related to the latest “worm”, where thousands of users were posting “Today was so exciting! Made $124 in 20 minutes! if ur interested, ..”? Maybe not?

What we know is that Twitter is very unstable and has serious gaps in its security… Maybe they just got hacked?

When it is back, you can search for that phrase to see the effect of this latest attack.

EDIT: I was partially right. Twitter is suffering from a DoS attack.

Cisco leaking private IP addresses via DNS

One of the first things I learned while setting up my DNS servers was to never leak internal IP addresses to the outside world. Well, it seems that Cisco hasn't learned that yet…

[host command output elided: several cisco.com hostnames resolve to internal RFC 1918 addresses, and one name is an alias for another internal host]

And there are more… How did I find this out? Using our very own Sucuri information gathering tool.

If you are ever setting up your own DNS server, remember to use at least two servers: one for internal queries and one for external ones. Don't make the same mistake Cisco is making…
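A quick way to audit your own zone for this kind of leak is to resolve every name you publish and flag any private (RFC 1918 or loopback) addresses the public DNS hands out. A small Python sketch (the hostnames in the usage note are placeholders for whatever your zone publishes):

```python
import ipaddress
import socket

def private_addresses(hostname):
    """Return any private/internal IPv4 addresses a lookup of hostname yields."""
    try:
        _, _, addrs = socket.gethostbyname_ex(hostname)
    except socket.gaierror:
        return []
    # ipaddress flags RFC 1918 ranges (10/8, 172.16/12, 192.168/16)
    # and loopback as private.
    return [a for a in addrs if ipaddress.ip_address(a).is_private]

# Usage:
# for name in ("www.example.com", "vpn.example.com"):
#     leaked = private_addresses(name)
#     if leaked:
#         print(name, "leaks", leaked)
```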