Some data have always been big. Credit bureaus have been amassing large databases on consumers for decades. The federal agency that runs Medicare has massive databases of claims data on the medical care delivered to the elderly. Until recently, these large collections of data were generally limited to large, well-known entities such as these. But more and more, large datasets are being accumulated by many types of organizations and for purposes that range from detecting terrorists to zeroing in on consumers who routinely buy clothes, wear them once, and return them.
When asked about retailers who keep “secret databases” of consumer behavior so they can detect return fraud, Ed Mierzwinski, consumer program director at the U.S. Public Interest Research Group (USPIRG) said: “There should be no secret databases. That’s a basic rule of privacy practices. Consumers should know that information is being collected about them.”
What struck me about Mierzwinski’s reaction to the secret databases was not that the USPIRG would have strong opinions about privacy, but rather his assumption that people are aware that there are “basic rules” of privacy practices.
In fact, for both security and privacy there are basic rules, first principles, that I argue go beyond the more general category of “industry best practices.” These first principles, I maintain, are the foundation of any security framework and the means by which we, as IT executives and practitioners, should judge any privacy program.
Let me lay them out and note a few of their more illustrative mentions, if not their origins.
The U.S. Federal Trade Commission (FTC) published a useful listing of these principles in a short document entitled “Fair Information Practice Principles.” The U.S. Department of Health & Human Services Office of the National Coordinator for Health Information Technology (those of us in the health care industry simply call it the “ONC”) published a remarkably similar document. In March 2012, the ONC published the “Privacy and Security Framework Requirements and Guidance for the State Health Information Exchange Cooperative Agreement Program” (ONC-HIE-PIN-003). Where the FTC published five principles, the ONC published eight, but they are nonetheless aligned.
In the next series of articles, I will look in more depth at each of the principles. They are:
- Notice/Awareness (FTC); Openness and Transparency (ONC); Collection, Use, and Disclosure Limitation (ONC)
  - This contains Mierzwinski’s basic rule above: “Consumers should know that information is being collected about them.”
- Choice/Consent (FTC); Individual Choice (ONC)
  - Somebody should be asking somebody’s permission, right?
- Access/Participation (FTC); Individual Access (ONC); Correction (ONC)
  - This one goes all the way back to the 1970s and the Fair Credit Reporting Act. Regardless of the permissions you give, you should be able to know what is “on file” about you and have a means to correct mistakes.
- Integrity/Security (FTC); Data Quality and Integrity (ONC); Safeguards (ONC)
  - Outside the world of security professionals, cyber/information security is part of establishing trust between data collectors and data subjects. Consider what the Australian Government wrote about its own data protection regulations: “These legislative instruments are designed to maintain public confidence in the government as an effective and secure repository and steward of citizen information.” (Big Data Strategy—Issues Paper. Australian Government, Department of Finance and Deregulation). This principle will take more than one article to flesh out, since it has its own first principles (hint: one of the two is C-I-A).
- Enforcement/Redress: Self-Regulation, Private Remedies, Government Enforcement (FTC); Accountability (ONC)
  - Another side of trust is the idea that there are consequences for those who betray it. It’s easy enough to make the case that thieves should be punished, but there are many more grey areas to look at here. Information security professionals working within an organization are usually focused on self-regulation, or at least on enforcing compliance internally.
There are other ways to look at the security of these large datasets, of course. There are more heavily technical views, in which the privacy side of the operation is a stakeholder but protecting the infrastructure is “the mission.” In this view, security is a purely defensive exercise: the infrastructure needs protection from threats, and the purpose for which the data are collected is unimportant. Protecting the infrastructure is necessary but not sufficient. There is also a way to look at security as loss control, and that view is valuable as well. Loss control is important for demonstrating the value of a security organization to executives and for prioritizing controls according to the risks they mitigate.
But the infrastructure has become so complex that just protecting it requires that you see it as multi-dimensional (it’s not news that the days of everyone sitting at dumb terminals within your four walls are history). And loss control as a motive for control design, without any other context, is also too limiting for any dynamic organization.
The context required to design controls effectively and to evaluate risks at the enterprise level must combine the principles of privacy and security. The principles I’ve listed above let us look not just at the “what” and “how” of protecting data and infrastructure, but at the human side: the “why.” I’ll examine each in more depth in my next installment.