It’s now commonplace to read that security means more than checking off boxes on a compliance checklist.  A robust approach to security includes trying to fill the gaps between the boxes.  I would argue that argument has mostly been won.

In No Book to Be By, I argued that we should extend that argument to rejecting the creation of a custom “compliance” checklist out of audit findings.  I disparaged handing that remediation to a project manager as a series of tasks and calling it a “security program.”  A robust approach to security includes being flexible and able to re-prioritize as threats emerge.

Compliance is something security has to live with.  Be it requirements in regulations, contracts, or policies, or any combination of the three, the Enterprise has an interest in being compliant.   For the security professional, being compliant should always be an objective, but never the objective.  It should be an enabler.  For more on that, see In Defense of Compliance.  Even the discovery of non-compliance can be enabling (which is not to say it is ever something to strive for).

Non-compliance is discovered in two ways: incidents and audits/assessments.

No one involved in information security or compliance for the Enterprise looks forward to the discovery of non-compliance, regardless of how it comes about.   And even though there are fundamental differences between a security incident and the findings of an audit or risk assessment, they both lead to the same regret: the wish that we had done better.

Let’s look at the differences between the two before looking at how the similarities can teach us something very useful about compliance.

Incidents come in all levels of severity.  This is a point that the information security professional needs to spend a good deal of time stressing.  It’s understandable that members of the workforce consider anything short of a breach of thousands of credit card numbers to be not worth reporting.

But for the security professional who is looking for patterns and weaknesses, a “harmless” incident can be very valuable.  So facilitating incident reporting becomes an objective in and of itself.

Incidents are not like audit findings: they come as surprises, and the appropriate response must be determined quickly.  If they do involve the breach of sensitive information, then the surprise is jarring and the response must be significant.

In contrast, learning about findings that result from an audit or a risk assessment cannot be a complete surprise.  The auditors/assessors are looking for areas of non-compliance.

The presentation of those findings is also usually a controlled event: a letter or report and a scheduled meeting to walk through them.   Responses may need to be immediate, but often they require retracing some of the same steps the auditor took and then designing an appropriate remediation strategy.

It is in the space of time between the Enterprise’s belief in its own compliance and the discovery that it is not compliant that we find a gray area.   There’s an opportunity to address not just the letter of the compliance requirement, but its intent.

Intention may seem like an odd characteristic to ascribe to compliance requirements.  But it’s not.  Consider that when a regulation requires technical, physical and administrative safeguards to protect the confidentiality, integrity and availability of sensitive data, the objective is effective protection, not passing audits.

When you take the first step beyond the compliance checklist, you are trying to define the controls the checklist doesn’t specify (the example I point out regularly is that HIPAA doesn’t say you need to patch servers, but of course you should).

The second step, however, is not to find the controls missing from your checklist, but to ensure the effectiveness of all the controls in your environment.  This is the gray area, and it is the hardest sell in a regulated environment, because measuring effectiveness means going beyond the usual tautologies of creating evidence.

For example, producing a report showing that all the machines that have anti-virus are running the latest version will check a box.   You can usually satisfy an audit requirement with that report.  But determining if that report shows all the machines on your network can be a great deal more work.
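As a rough illustration, that reconciliation can be as simple as comparing the AV console’s export against an independent asset inventory and seeing what the console never mentions.  The file names, column names, and CSV format in this sketch are assumptions for illustration only; substitute whatever your tools actually export.

```python
# A minimal sketch of reconciling an anti-virus console report against an
# independent asset inventory (e.g. a network scan or CMDB export).
# File names and the "hostname" column are hypothetical.
import csv

def hostnames(path, column):
    """Read one column of hostnames from a CSV export, normalized to lowercase."""
    with open(path, newline="") as f:
        return {row[column].strip().lower()
                for row in csv.DictReader(f) if row.get(column)}

# Hosts the AV console says it manages -- the report that "checks the box."
av_hosts = hostnames("av_console_report.csv", "hostname")

# Hosts known to an independent source of truth about what is on the network.
inventory_hosts = hostnames("asset_inventory.csv", "hostname")

# The interesting set: machines on the network with no AV coverage at all.
unprotected = sorted(inventory_hosts - av_hosts)

print(f"{len(unprotected)} host(s) in inventory but missing from the AV report:")
for host in unprotected:
    print(f"  {host}")
```

The value is entirely in the second data source: if the only inventory you have is the AV console itself, the comparison proves nothing.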

Getting your logs fed into a SIEM can demonstrate you are monitoring your environment.  But dumping raw logs from a server and reconciling them with the logs in your SIEM is the best way to know you don’t have blind spots in that monitoring.  Again, that’s a lot of work.
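A crude version of that reconciliation is sketched below, under the assumption that you can pull both a raw log dump from the host and an export of what the SIEM ingested for the same host and time window (the file names are hypothetical).  Hashing each line lets the two sets be compared without holding the full text of both in memory.

```python
# A minimal sketch of checking for SIEM blind spots by comparing a raw log
# dump from a server against a SIEM export covering the same host and window.
# Real exports usually need normalization (timestamps, field order) first.
import hashlib

def line_digests(path):
    """Hash each non-empty log line so large dumps can be compared cheaply."""
    digests = set()
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            line = line.strip()
            if line:
                digests.add(hashlib.sha256(line.encode("utf-8")).hexdigest())
    return digests

raw = line_digests("server_raw_dump.log")    # pulled directly from the host
ingested = line_digests("siem_export.log")   # exported from the SIEM

missing = raw - ingested
coverage = 100.0 * (len(raw) - len(missing)) / len(raw) if raw else 0.0

print(f"Raw events: {len(raw)}; missing from the SIEM: {len(missing)} "
      f"({coverage:.1f}% coverage)")
```

Anything left in the “missing” set is a candidate blind spot: events the server generated that the SIEM never saw.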

Passing audits is a good thing.  For the security professional, being compliant should always be an objective.   However, it is important not to let demonstrating compliance become the final objective of being compliant.

One way to do that is to ensure that essential controls that may not be on the “compliance checklist” are implemented.  The other is to ensure that the controls that are on the list are not just producing evidence for auditors, but are doing the job they were implemented to do.
