We don’t really think about it this way most of the time, but disruption can happen in security just as it can in any other part of the enterprise.  By this I mean that changes in the way business areas use technology, changes in the “technology substrate” itself (i.e., those technologies, like shared services and infrastructure, that enable business areas indirectly), and new technical advances can all impact the security controls we have in place.

From a pragmatic point of view, what this means is that existing security controls can decrease in utility over time as the context within which they operate changes.

Don’t believe me?  Consider virtualization and the impact it had on IDS systems – at least during the initial phases of adoption.  As network traffic shifted to “backplane communication” instead of physical network connectivity, traditional IDS systems could see less of the traffic.  Meaning, although the IDS was unchanged as a countermeasure, its utility declined because of the shift to virtualization.

There are other, more recent examples too: containerization technologies like Docker and Rocket can complicate system monitoring controls, while the proliferation of mobile technologies can complicate endpoint management controls.

The point is, very often new technologies or use cases will come along that can undermine countermeasures we have in place. Some will erode the value of what we have now, some will require new countermeasures, and others will require adjustments in how they operate to stay relevant.

All this creates a very real challenge for security pros. Specifically, how do we ensure that the tools and technologies we’re using continue to provide as much value to the organization as possible when changes occur?

This isn’t easy – after all, there’s no way for us to know ahead of time what specific new technologies will arise or what impacts they’ll have, and very often changes can happen seemingly “overnight.”  Meaning, we have very little time to respond and fully evaluate the impact when these changes come down the road.

Fortunately though, we can get a leg up in managing this.  By employing disciplined governance strategies (such as those advocated by structured frameworks like COBIT) and applying them with focus, we can put ourselves in the best position to respond to these changes and make the optimal decisions for our organizations: when to invest in new controls, and when to start divesting from others.

Specifically, by understanding the risk mitigation function of the countermeasures we have in place – and the efficiency of their operation – we can put together the “raw materials” that will let us react and adapt quickly when disruptive technologies arise.

What function do existing controls perform?

To prepare for disruptive technologies that might impact us, the first thing we need to know is what role the countermeasures we currently have play in our ecosystem.  Note that I don’t mean how they operate technically; instead, I’m referring to the role they have in mitigating specific risk(s) – in other words, why we deployed them in the first place.

Now, it’d be great if every organization had a fully documented and detailed risk analysis supporting every security control it has fielded – but as we all know, this is very seldom the case.  If your organization does utilize a formalized risk management methodology, bully for you: that data is one half of what you’d ultimately need in order to systematically understand disruptive events.

If you don’t, now might be the time to consider implementing a more structured program.  If you can’t, you can still get much of the value by putting together a simple controls inventory: list out what countermeasures you have in place and the corresponding issues they mitigate.

Likewise, as you put new controls in place, record them in that inventory.  Pay attention to “special cases” and “one off” usage as you do this – these are the items most likely to fall through the cracks when evaluating the impact of a new or disruptive technology.
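To make that concrete, here’s a minimal sketch of what such an inventory might look like if you kept it in a simple Python script.  The control names, risk descriptions, and field names are purely illustrative assumptions on my part – a spreadsheet or GRC tool serves the purpose just as well.

```python
# A bare-bones controls inventory: what's in place and why it was deployed.
# Control names and risk descriptions are illustrative placeholders only.
controls_inventory = {
    "Network IDS": {
        "risks_mitigated": ["Unauthorized lateral movement", "Known-exploit traffic"],
        "special_cases": [],
    },
    "Endpoint management agent": {
        "risks_mitigated": ["Unpatched endpoints", "Lost or stolen devices"],
        "special_cases": ["Lab machines enrolled manually"],  # one-off usage to watch
    },
}

# Record new controls as they're fielded so nothing falls through the cracks.
controls_inventory["Container image scanning"] = {
    "risks_mitigated": ["Vulnerable base images in Docker builds"],
    "special_cases": [],
}
```

The important part isn’t the format; it’s that every control carries an answer to “why is this here?” and that the one-off usage is written down somewhere.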

How are they performing?

The other piece of information you need is the cost (relative to performance) of the controls you have.  A hallmark of structured governance is a focus on ensuring that the business as a whole – and individual stakeholders – are getting as much value as possible for their investments.

Having the ability to measure and track this is obviously paramount from a governance standpoint.  Specifically, how efficient are those controls? What do you spend on them relative to their performance?  How much staff time goes into maintaining them, and what other administrative overhead do they carry? What other costs do you incur as a result of operating them?

Note that putting this information together is a more comprehensive exercise than just looking at the line-item costs of what you pay to particular vendors.  You want to account for the soft costs outlined above, but you also want to keep in mind the depreciation of investments you’ve made already.  Meaning, if you just bought a shiny new gadget, replacing it six months after you bought it is probably not a strategic use of resources.
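If it helps, here’s one way you might rough out those numbers – a minimal sketch, assuming straight-line depreciation and purely illustrative figures, not a prescribed costing method.

```python
def annual_control_cost(vendor_spend, staff_hours, hourly_rate, other_operating):
    """Rough annualized cost of a control: vendor line items plus soft costs."""
    return vendor_spend + staff_hours * hourly_rate + other_operating

def remaining_book_value(purchase_price, years_in_service, depreciation_years=3):
    """What you'd walk away from by replacing a purchase today (straight-line)."""
    remaining_years = max(depreciation_years - years_in_service, 0)
    return purchase_price * remaining_years / depreciation_years

# Illustrative figures only: an IDS bought six months ago still carries most of
# its book value, which argues against replacing it purely on cost grounds.
print(annual_control_cost(20000, 300, 75, 5000))   # 47500
print(remaining_book_value(60000, 0.5))            # 50000.0
```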

Keep in mind that you may need to adjust budgetary priorities when new technologies do come down the pike.  The information about what risks individual controls mitigate will influence this: as certain controls become less useful, the cost/value equation will shift, which can alert you to areas where you might want to decrease spending. The reverse could also be true – you may want to increase investment in certain controls if they provide greater value in light of a change in the technology landscape.

These two data points – the effectiveness of controls at reducing risk, and the efficiency of controls from a financial and resource utilization standpoint – together will give you the information you need to evaluate the impact of potentially disruptive technology as you encounter it.

From the inventory of controls (i.e., the risks they offset) you can evaluate, with the addition of some human analysis, which countermeasures are impacted, while the information about the resources invested to keep them operational will help you understand where you may be able to scale back, ramp up, or reinvest in alternative controls.
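Pulling the two threads together might look something like the sketch below.  Again, the matching logic, names, and figures are illustrative assumptions; the real judgment call – whether a control is still worth what it costs – remains a human one.

```python
# Pair the two data points: the risks each control offsets and its rough annual
# cost.  All entries and figures here are illustrative, not recommendations.
costed_inventory = [
    {"name": "Network IDS", "risks": {"Unauthorized lateral movement"}, "annual_cost": 47500},
    {"name": "Endpoint management agent", "risks": {"Unpatched endpoints"}, "annual_cost": 30000},
    {"name": "Container image scanning", "risks": {"Vulnerable base images in Docker builds"},
     "annual_cost": 12000},
]

def impacted_controls(inventory, shifted_risks):
    """Controls whose mitigated risks overlap the risks a disruptive change alters."""
    return [c for c in inventory if c["risks"] & set(shifted_risks)]

# e.g. a shift to virtualized networking changes how lateral movement can be observed
candidates = impacted_controls(costed_inventory, {"Unauthorized lateral movement"})

# The costliest impacted controls are the first places to ask: scale back, ramp up,
# or reinvest in an alternative?
for c in sorted(candidates, key=lambda c: c["annual_cost"], reverse=True):
    print(c["name"], c["annual_cost"])
```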

Ed Moyle is Director of Emerging Business and Technology for ISACA.  Prior to joining ISACA, Ed was a founding partner of the analyst firm Security Curve.  In his more than 15 years in information security, Ed has held numerous practitioner and analyst positions including senior manager with CTG’s global security practice, vice president and information security officer for Merrill Lynch Investment Managers, and senior security analyst with Trintech.  Ed is co-author of Cryptographic Libraries for Developers and a frequent contributor to the Information Security industry as author, public speaker, and analyst.  
