(This is the sixth installment in an ongoing examination of the first principles of data privacy and security. The first installment can be read here. The second installment can be read here. The third installment can be read here. The fourth installment can be read here. The fifth installment can be read here. These principles, often represented in regulations and privacy practices, form the foundation for how an organization should treat the customer data it collects.)
It is strange, until you think about it, that confidentiality is not the same as privacy. Also, when it comes to data, integrity is more akin to the structural kind that an engineer measures than to being morally upstanding.
Security is the fourth of the fair data practice principles. It generally includes the concepts of confidentiality, integrity and availability. In many ways, it is distinct from the other principles. Consider that “notice,” the first principle, is a discrete transaction. Even if it is provided annually, notice is simply that: telling someone what you do with their data. How often you do so is a technicality.
The second principle, “consent” or “choice,” is also a discrete transaction. The subject of the data has consented or they have not. The individual can change their mind, but then recording that is a second, discrete transaction.
The third principle, “participation,” just like the first two, represents transactions between the data collector/user and the subject of the data. There may be a number of transactions that take place as part of a subject’s participation, but they are almost always structured communications: access and correction.
And all these transactions – notice, choice, and participation – are generally documented for the sake of demonstrating compliance.
The adequacy of notice, choice and participation is relatively easy to evaluate. You provided it or you didn’t. You routinely document it to prove you provided it or you don’t. The notice might have been provided in type that is too small or in a language that the subject did not understand. Consent may have been solicited via a signed form or may be just verbal consent when the consumer replies to the question “yalrightwithat?” But the criteria for evaluating the adequacy of these transactions still focus on the transactions themselves.
I am over-simplifying a little and I do not mean to imply that these transactions are less important than security. But security is different.
Security is not a single transaction. In fact, when it is done successfully, the subjects and users of the data should not even be aware of most security controls. Providing for the security of the data is a program or a framework or a great big collage of controls. Regardless, the last thing security is, is a single thing that can be accounted for by checking a box or filling out a form.
Security means that the data are protected. They cannot be accessed, changed or deleted except by those authorized to do so. The authorization process itself must meet criteria to provide assurance that the data have not been compromised in any way. And, except in rare circumstances, the data are available to be accessed, changed or deleted as appropriate. Finally, security means that there are controls in place to ensure these things. This is meant to be a data-centric view of security not a definition of all security efforts or frameworks. There are many definitions of security.
Security is a fair data practice principle because the subjects of the data, those who collect, disclose, and use the data, and those who govern the use of the data all have expectations about the effort to ensure the confidentiality, integrity and availability of the data. In addition, they all expect that the rights and responsibilities described in the notice, agreed to in the consent, and represented by participation, are kept intact by controls that more or less “guarantee” the data flow is what they think it is.
As a fair data practice principle, security raises a question: “what is good/reasonable/enough security?” Whenever data are breached, the first question asked is “how did this happen?” But the second question, following fast on the first, is “did the entity that was breached provide enough security?”
As I will discuss below, it is a very difficult question to answer. One side of the answer is “whatever those who govern security say is enough, is enough” and that answer will be discussed when we turn to “enforcement.” In this article, however, I want to focus on the non-regulatory ways to evaluate the adequacy of security. And in the next, I will focus on the characteristics of a security program itself. A discussion of enforcement will follow.
To evaluate whether or not an entity provides enough security, we have to start by looking at what motivates them to set up a security program in the first place. What drives an organization to develop their data security program in a certain way depends on a number of overlapping factors. The most common are these:
- Regulation— if failure to implement certain security controls risks sanctions, then not having those controls is almost always a business risk an organization should not be taking. If the regulations are simply guidelines without enforcement, then the factors below will gain in importance
- Economics—security is a form of loss control. Not just loss of data, but financial and reputational loss as well. The Ponemon Institute’s annual study spells this out in great detail. From the 2013 study:
“German and US companies had the most costly data breaches ($199 and $188 per record, respectively). These countries also experienced the highest total cost (US at $5.4 million and Germany at $4.8 million). The least costly breaches occurred in Brazil and India ($58 and $42, respectively). In Brazil total cost was $1.3 million and in India it was $1.1 million.”
- Customer expectation—more and more entities are realizing how important this is. Customers care. Target estimated that the breach it experienced in the middle of the holiday shopping season cost it between 2% and 6% of sales. It matters in the B2B space as well. Service providers in many industries report that contract negotiations with their corporate customers increasingly make security a contractual obligation and/or spell out shared liability for data breaches. This is sometimes tied back to a regulatory driver, such as HIPAA’s requirements around sharing data with Business Associates, but the foundation of shared responsibility for providing security originates in the customer’s expectation: once I give you my data, I hold you accountable for protecting it
- Culture—corporate culture matters in security. If the CEO and the Board demand a mature security program, then the likelihood of there being one is far greater than if security initiatives are the first things chopped in cost-cutting exercises. Is security a full-time job of one or more individuals in the company, or is it a part-time responsibility of busy IT staff? Are there security reviews of new products and vendors? There is no one right answer, but these and other procedural and structural characteristics influence how a security program operates in an organization
- Perceived threat/Risk tolerance—this can be subsumed into the ones above, but it really is a unique factor. Some organizations perceive themselves to be more threatened than others. This may be easily justified, as in the case of certain critical infrastructure, or it may be less commonly understood. I remember going to a conference and hearing one particularly articulate and successful Chief Information Security Officer describe his role at his firm: “you may care about patient information or credit card numbers, but in my world it is all about guarding the negotiated price for bauxite.” You can correlate the strength of an organization’s security program with the organization’s perceived vulnerability to existing threats and its risk tolerance. This is often most evident in the physical security it has implemented. Risk tolerance is a tricky thing to evaluate. Of course, no one will say they are keen to take risks with data, so you need to determine risk tolerance through indirect means, e.g. size of security budget, strength of controls, openness to audit, etc.
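The per-record figures in the Ponemon quote above lend themselves to a rough back-of-the-envelope check. This is a minimal sketch built on a deliberately naive model (total cost ≈ records exposed × cost per record); the actual Ponemon methodology weighs many distinct cost components, and the record count below is invented for illustration:

```python
# Naive breach-cost estimate from per-record figures. This flat-rate model
# is a simplification; real cost studies break losses into many components.
def estimated_breach_cost(records_exposed: int, cost_per_record: float) -> float:
    """Estimate total breach cost as records exposed times cost per record."""
    return records_exposed * cost_per_record

# Using the 2013 US figure of $188 per record and a hypothetical breach
# of 28,000 records:
us_estimate = estimated_breach_cost(28_000, 188.0)
print(f"${us_estimate:,.0f}")  # $5,264,000
```

A number in this range is consistent with the reported US average total cost of $5.4 million, which is the kind of sanity check the economics driver invites.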
With these five drivers in mind, let’s briefly look at the ways we can evaluate a security program. In general, there are four criteria:
- Certification
- Adherence to published standards
- Risk-based
- Cost-benefit
When does an organization have “enough” security? The answer might depend on how you weigh each of these four criteria and how well the organization stacks up against them. By themselves, none of these criteria are sufficient to definitively evaluate a security program.
- Certification—The PCI DSS is the “Payment Card Industry Data Security Standard.” Its most stringent level requires scores of controls to be in place and evaluated. Organizations that handle and store credit card data become PCI certified by being evaluated by qualified assessors who are themselves certified to perform these evaluations. It sounds like it must result in the best security. Yet both Heartland Payment Systems and Target (together responsible for over 100 million credit card numbers being breached) were “PCI Certified.” (I am picking on PCI, but there are others, for example HITRUST in the healthcare industry.)
Given the large credit card data breaches, you have to ask yourself if certification is worth much. Increasingly, the answer is that just like having a license to drive does not make you a good driver, so having a certification alone does not make you truly secure. But just like the driver’s license, the fact that you have to renew your certification periodically helps ensure that some level of security is maintained by the organization. In other words, certification, when applicable, might be necessary and desirable but it is by no means sufficient.
- Adherence to published standards—what if I want to design my security according to standards like those whose number/names are prefaced by NIST (the National Institute of Standards and Technology) or ISO (the International Organization for Standardization)? If I adhere to them, does that mean I have “enough” security?
Unless required by contract or law, standards are most useful in giving a security program credibility. They do this because they ensure that there is an externally validated specification for designing a given control or program. That might not define “enough” security, but, like certification, it shows more than an ad-hoc effort at building a security program.
The biggest challenge with standards is determining whether keeping up with them as they are revised is essential. For example, NIST’s ideas about risk management (the 800-30 publication) have evolved over time. Do I need to revise my risk management methodology accordingly? That depends on whether you are using the standards or the body that defines them as the source of the authority to define your security objectives. If the latter, then you have to plan to keep up with the definitions as they change. If the former, you need a good reason why a static standard is good enough.
- Risk based—unlike the externally driven criteria above, the risk-based approach to evaluating a security program requires a lot of introspection. Since we’re talking about security as it relates to data, then incorporating a risk based approach into the design of the security program is essential. You need data security where you have the data. You need security controls where there is a risk that the data can be compromised. You may not need as much security in places from which the data cannot be reached.
A risk-based approach might have helped prevent the Target breach. If it is true that an HVAC vendor’s credentials to Target’s network were used to access the credit card data that was stolen, the question is why the network was so flat that access to heating and air conditioning equipment allowed access to credit card data storage. A more aggressive evaluation of risk and control design would have led to segmenting the network such that getting into the HVAC system did not expose the credit card data.
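The segmentation argument can be made concrete with a toy reachability check. This is a minimal sketch: the zone names and access rules below are invented for illustration and are not Target’s actual architecture; the point is only that “can traffic from here reach the card data?” is a question you can test.

```python
from collections import deque

# Hypothetical network zones and the connections each is allowed to make.
# (Illustrative only -- invented zones, not any real retailer's topology.)
allowed = {
    "vendor-portal": {"hvac"},   # where HVAC vendor credentials land
    "hvac": {"vendor-portal"},
    "pos": {"cardholder-data"},  # only point-of-sale may reach card data
    "cardholder-data": set(),
}

def reachable(graph, start, target):
    """Breadth-first search: can traffic starting in `start` reach `target`?"""
    seen, queue = {start}, deque([start])
    while queue:
        zone = queue.popleft()
        if zone == target:
            return True
        for nxt in graph.get(zone, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# A segmented network keeps the HVAC zone away from cardholder data:
print(reachable(allowed, "vendor-portal", "cardholder-data"))  # False
# A "flat" network adds a path -- and with it, the exposure:
allowed["hvac"].add("pos")
print(reachable(allowed, "vendor-portal", "cardholder-data"))  # True
```

Real segmentation reviews work over firewall rules and VLANs rather than a dictionary, but the underlying question is the same graph-reachability check.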
Network segmentation aside, risk-based criteria for designing security programs offer the advantage that they focus on your risk, your data and your controls. The drawback to this approach is that knowing about a risk and doing something about it are two different things. It is important not to fall into the trap of reporting risks to senior management, being studiously accurate, and ending up having that report be a de facto acceptance of risk. If risks are identified using a risk-based approach, and if they are not addressed, then accepting those risks needs to be explicit.
- Cost-benefit—security professionals don’t always like this approach. But it is a legitimate business strategy to say “we have all the security we can afford and we have optimized it for maximum protection without spending more than we wanted to.” In reality, this is implicitly true at every organization. They spend what they spend (that’s the cost) and they have the security program they have (that’s the benefit). So looking at the security program this way just makes that logic explicit rather than implicit.
The challenge with this approach is that while the costs of a security program are relatively easy to quantify, the benefits are somewhat harder. This is true with any loss control program in an organization. Security professionals don’t always like this approach because they feel that the more time you spend demonstrating ROI, the less time you spend actually protecting the data.
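One conventional way to make the cost-benefit logic explicit (not something the fair data practice principles themselves prescribe) is annualized loss expectancy: expected loss per incident times expected incidents per year, compared against the cost of a control. The figures below are invented for illustration:

```python
# Annualized loss expectancy (ALE): a standard way to put a number on the
# "benefit" side of a security control. All figures here are hypothetical.
def ale(loss_per_incident: float, incidents_per_year: float) -> float:
    """Expected annual loss = loss per incident times incidents per year."""
    return loss_per_incident * incidents_per_year

ale_without_control = ale(1_000_000, 0.30)  # 30% annual chance of a $1M breach
ale_with_control = ale(1_000_000, 0.05)     # control cuts the likelihood to 5%
control_cost = 150_000

# Net benefit: the loss the control avoids each year, minus what it costs.
net_benefit = (ale_without_control - ale_with_control) - control_cost
print(round(net_benefit))  # 100000 -> in this model, the control pays for itself
```

The hard part, as noted above, is not the arithmetic but estimating the loss and likelihood inputs credibly, which is exactly where the "benefits are somewhat harder to quantify" complaint bites.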
So how do you determine when an organization has “enough” security?
You don’t. If you look at the motivations and criteria above, you realize that the question “do they have enough security?” cannot be answered. The descriptions and measurements of security programs cannot provide 100% guarantees against breaches.
You can test and measure the strength of a given control. You can evaluate the program against standards, apply for certifications and evaluate risks till you have a comprehensive list of everything that could go wrong. But enough is something that can only be evaluated in hindsight. We did not have a breach yesterday, so yesterday we had enough security.
There is a question that can be answered, however: “what has motivated you to design the program the way you have and what criteria have you used to design it?” Because while an organization cannot always answer the question of whether or not they have enough security, they can and should always take a stand on whether or not they’ve done enough to secure the data.