One of the aphorisms used in security is, “We have to be right all of the time, and the bad guys only have to be right once.”  While this may or may not be 100 percent accurate, it is a pretty good policy to go by when you’re building a security program.

With over 325,000 Internet routable IP addresses, Columbia has a very large security footprint.  We see many millions of attacks per day, and under this onslaught, our security system must weed out the really compromised systems from the ones that are just a little broken.

We are doing this with a system written in-house that uses Bayesian analysis of network behavior (netflow only, not content).  Going back to being right all of the time: I think everyone knows that this is neither possible nor practical.  So the question comes down to how you are going to err, and what your mitigating controls are.
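The column does not describe the system's internals, but the general idea of Bayesian analysis over netflow behavior can be sketched with a tiny naive-Bayes classifier. Everything below — the feature names, the likelihoods, and the prior — is made up for illustration; it is not PaIRS, just the shape of the technique:

```python
import math

# Hypothetical per-flow features and the likelihoods a classifier might
# learn from labeled history: P(feature | compromised) vs. P(feature | clean).
# All numbers are invented for illustration.
LIKELIHOODS = {
    # feature: (P(feature | compromised), P(feature | clean))
    "talks_to_known_c2": (0.60, 0.001),
    "smtp_burst":        (0.40, 0.01),
    "odd_hour_traffic":  (0.70, 0.20),
}
PRIOR_COMPROMISED = 0.001  # assumed base rate of compromised hosts

def posterior_compromised(observed):
    """Naive-Bayes posterior P(compromised | observed features).

    Accumulates in log space to avoid underflow, then normalizes the
    two hypotheses (compromised vs. clean) back to a probability."""
    log_c = math.log(PRIOR_COMPROMISED)
    log_k = math.log(1.0 - PRIOR_COMPROMISED)
    for feat, seen in observed.items():
        p_c, p_k = LIKELIHOODS[feat]
        if seen:
            log_c += math.log(p_c)
            log_k += math.log(p_k)
        else:
            log_c += math.log(1.0 - p_c)
            log_k += math.log(1.0 - p_k)
    m = max(log_c, log_k)
    ec, ek = math.exp(log_c - m), math.exp(log_k - m)
    return ec / (ec + ek)

host = {"talks_to_known_c2": True, "smtp_burst": True, "odd_hour_traffic": False}
print(round(posterior_compromised(host), 4))
```

The point of the Bayesian framing is that no single signal is damning; it is the combination of unlikely behaviors that drives the posterior up, which is what lets the system separate "really compromised" from "just a little broken."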

When we started building our system, the design called for a fully automated process – find the compromised system and take it off the network.  While this would work for client systems, servers required human intervention (the first time you shut down a piece of medical equipment that is sending out spam, you are going to have a bad day :-) ).

Our PaIRS (Point of Contact and Incident Response System) removes client systems from the network, and informs the support teams for managed systems.  We use a logarithmic function to determine when a system is compromised – the knee of the function is set to 10.
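The column gives only the threshold (a knee set to 10), not the function itself, so the following is one plausible reading, with the accumulator entirely my assumption: each suspicious event contributes on a log scale, so repeated low-grade noise saturates instead of growing linearly, and a host is flagged when the total crosses the knee.

```python
import math

KNEE = 10  # the flag threshold from the article; the function below is assumed

def suspicion_score(event_weights):
    """Hypothetical log-scale accumulator: each suspicious netflow event
    contributes log(1 + weight), so many weak signals saturate while a
    few strong ones still dominate."""
    return sum(math.log1p(w) for w in event_weights)

def is_compromised(event_weights, knee=KNEE):
    return suspicion_score(event_weights) >= knee

# A handful of weak anomalies stays below the knee...
print(is_compromised([1.0, 2.0, 0.5]))   # → False
# ...while sustained high-weight behavior crosses it.
print(is_compromised([20.0] * 5))        # → True
```

The appeal of a knee like this is that it is a single tunable: raise it and false positives drop while false negatives rise, which is exactly the trade-off discussed next.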

From many years of experience, we have found that when using this algorithm, the number of false positives (FPs) is very low.  It is not zero, and most of the FPs come from machines being used for experimentation (we have PlanetLab machines on our campus, which use network protocols in very nonstandard ways).  I am OK with this – if we were to try to drive our FP count to zero, we would start to see false negatives, which in my opinion are much worse.

I have found that security is 10 percent technology and 90 percent people management, and the longer I am in this business, the more convinced I am that this is accurate.

While I completely agree that a false positive is really annoying to the owner of the accused machine, a false negative can adversely affect your entire business – just ask the people at Target.  As the owner of this process, I have become very good at “making nice” and explaining why we shut down a perfectly good system that was doing squirrely things.

False positives can require eating a little crow; false negatives can result in a résumé-generating event.  I like my crow with a little hot sauce.
