It’s the holiday season, which means that many of us are already in the process (or soon will be) of putting up holiday decorations. Ordinarily that wouldn’t be particularly noteworthy — or applicable to InfoSec for that matter — but this season there’s a bit of a sea change underway that carries with it an interesting metaphor for something going on in security.
Specifically, have you noticed the popularity of LED holiday lights? Ask around: LED is all the rage. The interesting thing is that LED lights tend to have a higher (sometimes significantly higher) purchase price than traditional incandescent lights, yet aesthetically they look and operate almost identically. So incandescent lights cost less to buy, but because LEDs use so much less energy, the more economical decision over the long term is probably LEDs. Knowing the “right” decision economically means accounting for total cost of ownership rather than just the cost to acquire.
What does that have to do with security, you ask? Well, consider for a moment the increasing popularity of Docker. Much like LED lights, Docker is on fire right now from an adoption standpoint: because of its portability and efficiency advantages, folks in the datacenter and developer communities are gravitating to it in droves.
But how do the economics of containerization compare to traditional server virtualization? Is it like incandescent lights, where up-front costs are low but long-term costs (for example, extra security or maintenance overhead) are higher? Or is it more like LED lights, giving the best return over the long haul? Answering that question is important because: a) like it or not, Docker is something security practitioners are going to need to consider in the not-too-distant future, and b) the answer is more complicated than it initially seems.
What is Docker?
Docker is open source software that allows the deployment of “software containers.” A software container is a portable and (semi-)isolated environment within which applications can run. It includes the application and software/resources required for that application to run. Release 1.0 of Docker happened back in June, which allowed folks on Linux to make use of this functionality. However, given the rapid upswing in popularity, similar Windows functionality has already been announced.
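To make that concrete, here’s a minimal sketch of launching a containerized application, assuming the Docker Python SDK (the `docker` package) and a local Docker daemon are available; the image name and command are purely illustrative.

```python
# Minimal sketch: launching a containerized app with the Docker Python SDK
# (assumes `pip install docker` and a running Docker daemon on the host).
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a throwaway container: the image bundles the app and its dependencies,
# but not a full guest OS; the container shares the host's kernel.
output = client.containers.run("alpine", ["echo", "hello from a container"], remove=True)
print(output.decode().strip())
```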
Like traditional OS virtualization, software containers allow administrators to create packages that include the applications they want to deploy without dedicating physical hardware (or, in the containers’ case, even virtual hardware) to each one.
Unlike OS virtualization, though, containerization leverages the segmentation features of the underlying OS (namespaces and cgroups, in the Linux case) to create that environment. So instead of every application requiring a whole OS “stack” to run, many containers can share the same underlying OS instance. This makes containers lighter-weight than a full OS install.
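To illustrate that shared-OS point, here’s a hedged sketch (again assuming the Docker Python SDK and a Linux host running the Docker daemon): a container reports the same kernel version as the host, because it isn’t booting an operating system of its own.

```python
# Hedged sketch (Docker Python SDK, Linux host assumed): a container reports
# the same kernel as the host, because containers share the underlying OS
# instance instead of booting their own, unlike a VM with its own kernel.
import platform
import docker

client = docker.from_env()

host_kernel = platform.release()  # the host's kernel version (on Linux)
container_kernel = (
    client.containers.run("alpine", ["uname", "-r"], remove=True).decode().strip()
)

print(f"host kernel:      {host_kernel}")
print(f"container kernel: {container_kernel}")  # same value: one shared OS instance
```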
The advantages are many: enhanced portability (since the whole underlying OS isn’t required to accompany the app) and efficiency (since image sizes are significantly smaller without multiple redundant copies of the OS and core services).
From a security and management standpoint, this introduces some additional complexity. First and foremost, the Docker software itself (like all software) can have vulnerabilities. Additionally, when OS virtualization was new, many organizations were challenged with management of the virtual environment: issues like VM sprawl, backplane communication (e.g. its impact on tools like IDS), out-of-date or “stale” virtual images, discovery of new images, keeping a robust inventory, and so on.
In fact, many organizations have implemented automated processes and tools to help with management of the virtual datacenter: for example, to detect out-of-date or stale images, to help mitigate issues like sprawl, or to keep track of images should they be transferred to another hypervisor.
Containers can pose similar challenges at scale. However, unlike OS virtualization, where organizations have had years to get tools and processes in place to mitigate these issues, software containers can allow these same things to happen again, but “under the radar.” Plus, bear in mind that container isolation may be somewhat more porous than OS virtualization, so care should be put into where, how, and for what they’re leveraged.
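As a starting point for keeping containers on the radar, the sketch below (using the Docker Python SDK; the 90-day staleness threshold is purely illustrative) inventories containers and flags images that haven’t been rebuilt recently, analogous to the stale-image checks many shops already automate for VM templates.

```python
# Hedged sketch: inventory containers and flag "stale" images, analogous to
# the out-of-date VM template checks many shops already automate.
# Assumes the Docker Python SDK; the 90-day threshold is purely illustrative.
from datetime import datetime, timedelta, timezone
import docker

STALE_AFTER = timedelta(days=90)  # illustrative policy, not a recommendation
client = docker.from_env()

# Inventory: every container (running or not), its image, and its state.
for c in client.containers.list(all=True):
    print(f"{c.short_id}  {c.name:<24}  {c.image.tags}  {c.status}")

# Flag images that haven't been rebuilt in a while ("stale" images).
now = datetime.now(timezone.utc)
for img in client.images.list():
    raw = img.attrs.get("Created")
    if not isinstance(raw, str):
        continue  # skip if the daemon reports an unexpected format
    # Docker typically reports creation time as an RFC3339 string such as
    # "2014-12-01T10:30:00.000000000Z"; parse just the date/time portion.
    created = datetime.strptime(raw[:19], "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
    if now - created > STALE_AFTER:
        print(f"STALE: {img.tags or img.short_id} (created {created:%Y-%m-%d})")
```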
Making the Analysis
As you might imagine, the specifics of these things are going to vary depending on your organization. That probably seems like a bit of a cop out, but it’s the truth. For example, does your organization have the sprawl situation contained in your virtual datacenter? Or is yours the kind of shop where new instances seem to suddenly and inexplicably appear on a near-constant basis?
If you’re in the former camp, chances are good that keeping track of containers won’t be that much of a stretch. Sure, you’ll probably need to adapt processes (or tools, if you have them) to account for containers as well as virtual images, but making that leap probably won’t be rocket science.
However, if you’re in the latter camp, maybe you want to consider how likely those folks are to be successful when you add containers to the mix — and also what the impact of the additional overhead might be to them if they’re already underwater.
Likewise, do you have folks who understand the Docker software, stay alert for security issues, and know what action to take (and how) if there’s a vulnerability? Or is your admin team a “fire and forget” kind of crowd that is slow to take action and might leave something unpatched until you nag?
If you’ve got a “crack team” of folks who’re “on it,” maybe the additional overhead of keeping track of one new piece of software is a cakewalk. However, if your team is slow to patch (or misses key fixes), you may want to account for this as you analyze the impact of Docker from a security point of view.
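One small, concrete place to start is simply knowing what version of the Docker engine is running on each host, so it can be checked against vendor security advisories. Here’s an illustrative sketch, again assuming the Docker Python SDK.

```python
# Illustrative sketch (Docker Python SDK assumed): report the Docker engine
# version on a host so it can be tracked against vendor security advisories.
import docker

client = docker.from_env()
info = client.version()  # dict describing the daemon and its environment

print(f"Docker engine version: {info.get('Version')}")
print(f"API version:           {info.get('ApiVersion')}")
print(f"Host kernel:           {info.get('KernelVersion')}")
```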
The point is, the question I asked you at the beginning (about whether Docker was more like the LED or incandescent lights) was really a trick question. For some, it’ll be like the LEDs: adding portability, increasing efficiency, and letting you allocate more applications per physical device in the datacenter. For others, it’ll be like the incandescent lights: adding management overhead, increasing technical risk, and opening new avenues for old problems.
Either way, security practitioners should start thinking through these questions now, because they’ll want to evaluate the impact to their own organizations in order to take the right steps.
Ed Moyle is Director of Emerging Business and Technology for ISACA. Prior to joining ISACA, Ed was a founding partner of the analyst firm Security Curve. In his more than 15 years in information security, Ed has held numerous practitioner and analyst positions including senior manager with CTG’s global security practice, vice president and information security officer for Merrill Lynch Investment Managers, and senior security analyst with Trintech. Ed is co-author of Cryptographic Libraries for Developers and a frequent contributor to the Information Security industry as author, public speaker, and analyst.