Seeing Red

Last week, for the second week in a row, IIS administrators had to face Code Red. More than a simple virus, Code Red could represent a new acceleration in the online virus war, and it shows that we may not be ready, as an industry, for the era of web services.

A Rapid Epidemic

Now that I’ve got your attention, let’s take a quick look at how Code Red spread. First of all, there was a simple buffer overflow problem in Microsoft Index Server, for which the company produced a patch. A month later, Code Red started showing up. However, its rate of growth was relatively slow at the beginning. The true epidemic did not start until July 19th, when Code Red exploded onto the scene, growing from around 300 infected servers at 00:15 to 2,994 by 7:30, over 30,000 by 14:40 and over 300,000 in the six hours after that. In other words, in less than a day, Code Red went from a relatively small annoyance to a full-blown attack on the net infrastructure. Had no one sounded the alarm, it would have taken only a couple of days for it to infest every vulnerable installation of Microsoft IIS (or about a quarter of all web sites on the net).

Who’s responsible?

While the hunt is on for the person who devised this worm, the list of those who bear some responsibility for its spread is a very scary one. Microsoft comes first, for putting out a faulty product: a web server with a big security hole. To its credit, though, Microsoft put out a patch before it even knew of the worm’s existence.

Another group that deserves some blame is the IIS system administrators of the infected systems. Let’s face it, we all know that Microsoft software is riddled with holes. We also know that Microsoft puts out patches on a regular basis, and that those patches solve most of the problems before they occur. Well, the people who were infected by Code Red did not follow the basic rule of patching systems early and often.

Many Unix administrators are laughing right now at IIS users: they shouldn’t!

The security on some Linux systems is so dismal that a worm similar to Code Red but aimed at Apache servers could have done much more damage much more quickly.

What have we learned?

The first thing we have learned from the Code Red offensive is that we can’t rely on people to update their systems properly. When system administrators fail at that task, they contribute to a lower level of security for the net as a whole. But SAs are human, and to err is human. The truly scary thing is that Code Red is only the first of a series of worms that will gain prominence on the net.

Scarier still is the prospect of an application used by general consumers getting attacked by a similar worm.

Earlier this year, I covered a set of security problems in AOL’s Instant Messenger. One of the ways hackers take over AIM is through buffer overflows, throwing long strings of apparently nonsensical characters at the client in order to take it over. Unfortunately, Code Red worked the same way against IIS servers. What will happen when someone uses the Code Red approach to create an attack on AIM? The thought sends chills down my spine as I can see hundreds of millions of computers acting as zombies in a possible net-wide denial of service attack.


During the August 1st attack, webmasters noticed not one but two (and possibly three) different types of Code Red attacks. The first was the same as the July 20th worm, but the second was much more nefarious: a worm that did not announce itself (it did not deface web sites) but instead installed a backdoor giving hackers access to the compromised servers. When work done on one virus reappears in another, we call it a mutation. What is truly worrisome is that Code Red was not a very sophisticated worm. Others, like Hybris, can update themselves.

What happens when Code Red is merged with one of those other viruses? This is yet another scary question that I send out for discussion.


For the past year, I’ve been paying more attention to the security space. I wasn’t sure why, but I felt this was an area I needed to watch. More and more of our technological infrastructure is moving to the Internet: these days, telephone companies, cable companies and many others are tying into the grid. However, it seems to me that little attention is being paid to security. As we move into the era of web services, our industry needs to have a serious dialogue about how to increase the security and reliability of the Internet.

Furthermore, as new operating systems come out, they should be thoroughly vetted for security holes. Apple recently released OS X, and already a number of holes are being noticed. Microsoft is still set on releasing Windows XP within the next few months, but few in the security community have had a chance to test its security. With all that said, I would also like to encourage all of you to question whether any of your connections are secure. For example, if you are running a DSL or cable line at home, have you firewalled your environment? These days, it’s little things like that that make a big difference.
