by Mark Rasch
Google recently disclosed that a vulnerability in its Google Plus service could have been used by hackers to expose personal information about users of the service. (https://www.nytimes.com/2018/10/08/technology/google-plus-security-disclosure.html)
Indeed, Google announced that it was shutting down the service as a result. That’s not what outraged the Interwebs. No. What was distressing was the fact that Google discovered (and fixed) the vulnerability in March of 2018, and only disclosed it in October — seven months later. If only I had known about the vulnerability in March! I would have… well… I would have… done absolutely nothing seven months earlier.
In fact, Google’s privacy officer noted in a blog post (https://www.blog.google/technology/safety-security/project-strobe/) that “We found no evidence that any developer was aware of this bug, or abusing the API, and we found no evidence that any Profile data was misused.” So it was a vulnerability that, if exploited, might permit the disclosure of personal information, but no information was actually disclosed. Well, as far as Google knows.
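To make the class of bug concrete: the reported flaw was in a profile API that could hand third-party apps profile fields that were not public. The sketch below is entirely hypothetical (these are my names and structures, not Google’s code), but the vulnerable/patched pair shows what a missing visibility check in an API looks like:

    # Hypothetical illustration of the bug class: an API that returns
    # profile fields without checking each field's visibility setting.
    # Names and structure are invented; this is not Google's code.

    PROFILES = {
        "alice": {
            "name":     {"value": "Alice",             "visibility": "public"},
            "email":    {"value": "alice@example.com", "visibility": "private"},
            "birthday": {"value": "1990-04-01",        "visibility": "friends"},
        },
    }

    def get_profile_buggy(user_id):
        """Vulnerable version: returns every field, ignoring visibility."""
        return {k: v["value"] for k, v in PROFILES[user_id].items()}

    def get_profile_fixed(user_id, requester_scope="public"):
        """Patched version: returns only fields the requester may see."""
        return {
            k: v["value"]
            for k, v in PROFILES[user_id].items()
            if v["visibility"] in ("public", requester_scope)
        }

    print(get_profile_buggy("alice"))   # leaks email and birthday
    print(get_profile_fixed("alice"))   # returns only the public name

The point, for this article’s purposes, is that the missing check is a vulnerability, not a breach, until someone actually calls the buggy endpoint.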
And that fact points to one of the significant problems with how we structure privacy law and breach disclosure law.
Mandatory Disclosure
At this point, every state in the U.S. and many federal agencies have breach disclosure laws. For healthcare data, we have “incident” disclosure laws, and the EU’s GDPR similarly requires that the supervisory authority be notified of a personal data breach within 72 hours of the company becoming aware of it.
But why? Why do you need to know? Why should a company tell you? Or more importantly, what are you going to do about it?
I recently saw an ad for a service that would let me know if my social security number was on the Deep Dark Web. For a nominal fee I could find out if hackers had stolen my ID. O.K., so let’s say the service tells me that it couldn’t find my SSN on the small portion of the DDW that it scans. Do I change my behavior? Not really. And if they DO find my SSN on some website? Am I going to enter the privacy-related witness protection program and operate under the assumed name “Gene” at a Cinnabon in Omaha?
SB 1386 Model
Most breach disclosure laws are based on the model of California’s Senate Bill 1386, passed in 2002 (and effective the following year). The bill was originally marked up as an identity theft statute, but then there was a data breach of the California civil service retirement system (of which members of the California legislature are a part), and a non-disclosure of the breach for several months, during which legislators’ (and mortal humans’) data were being used. The exposed data included not only names, addresses and social security numbers, but also things like account numbers, passwords and PINs. So for the months between the breach and the disclosure, the hackers had the ability to take money out of accounts without the customers’ knowledge.
So the concern with SB 1386 lay not with protecting privacy, but with preventing fraud. Indeed, the goal of disclosure was to permit the data subject to take remedial efforts (e.g., examining their credit card or bank statement, closing an account, changing a password or PIN) to prevent further harm after a disclosure. If you were notified that your account might have been hacked, then you would be encouraged to look over your next monthly statement to see if there were fraudulent charges and to report/reverse them. Notification enlisted the data subject in an effort to mitigate harm resulting from a data breach.
You can see that in the way data breach laws are structured. The kinds of data that count as “personal data” requiring a breach disclosure (at least in the U.S.) are things like a name combined with an account number PLUS the password or PIN that unlocks it. Name and address alone are not enough. It’s not about privacy. It’s about fraud.
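You can even write the trigger as logic. The sketch below is a loose paraphrase of a SB 1386-style test, not the actual statutory language: the field names are mine, and real statutes add more elements (encryption safe harbors, for one). But it captures the fraud-centric shape of the rule:

    # A loose paraphrase of a SB 1386-style notification trigger.
    # Field names are illustrative; the real statute has more elements.
    # Not legal advice.

    FRAUD_ENABLING_FIELDS = {
        "ssn",
        "drivers_license",
        "account_number_plus_password",
    }

    def notification_required(exposed_fields):
        """Notice is triggered by a name plus a fraud-enabling element,
        not by merely 'private' information like an address."""
        return ("name" in exposed_fields
                and bool(exposed_fields & FRAUD_ENABLING_FIELDS))

    print(notification_required({"name", "address"}))                       # False
    print(notification_required({"name", "account_number_plus_password"}))  # True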
It Ain’t 2003 No More
Increasingly, breach disclosure laws, and the more onerous “incident” disclosure laws, require either regulators or data subjects to be notified about an unauthorized access to data within a certain — often unreasonably short — time frame after the incident or after its discovery. These laws are justified under the rationale that consumers have a “right to know” that their personal data have been accessed, used or stolen.
Why?
In the 2003 time frame, the consumer could do something — not a lot, but something — about a breach. They could cancel a credit card or examine a credit card statement. Now, most breaches are detected not through some careful forensic analysis. Rather, it is because someone sees the purloined personal data on the dark web, or someone uses the credit card information (or healthcare financial information) fraudulently, and the fraud points to a common origin — the breached entity. Retailers don’t call banks and tell them that they have been breached. Banks and credit card brands call retailers and tell them that they have been breached.

There’s not a lot a consumer can do post-breach. So banks and retailers cancel credit cards, reissue cards or PINs, put consumers on free credit monitoring, and even freeze consumers’ credit applications. But the consumer can’t do much on their own other than repeat those same steps. And for truly “personal” data — like the results of medical tests, or a porn site browser history, or the involuntary outing of one’s sexual orientation because of a breach — there’s no way to put that toothpaste back in the tube.

The most the consumer can do as a result of a breach notification is to sue the entity responsible for the breach for “privacy” damages. And courts have been cool to these causes of action in most cases, finding that breach of “privacy” in and of itself — or, more accurately, fear that personal data will be used to injure someone in the future — is not a current harm sufficient to justify damages. Because we don’t value privacy (in that we don’t put a dollar value on it), we don’t value privacy.
No, we have breach disclosure laws now to punish and deter. Not the criminals. Not the thieves. Not the revenge porn doxxers. Breach disclosure laws serve the purpose of putting a huge price (in both response costs and reputational harm) on the failure to secure, in the hope that this will encourage better security practices.
Interestingly, the cost of breach notification and response now in fact exceeds the cost (albeit narrowly measured) of the breach itself. A breach of a credit card database may result in some of the compromised cards being cloned and used for fraudulent purposes, a measurable economic loss resulting from the failure to secure the card database (and maybe a failure to comply with PCI DSS security standards). But the true cost to a retailer or processor from the breach is not the 48-inch TVs bought in Shanghai with the Visa card of some guy in Dubuque. It’s investigating the breach forensically. Notifying the quarter million people whose credit card numbers were stolen. Sending the “Dear Valued Customer…” letters. Notifying regulators. Paying for new credit cards and credit monitoring for all customers. The inevitable FTC enforcement action. The Visa/MasterCard “fines” (contractual damages, actually). The state Attorneys General investigations. The class action lawsuits. The dip in stock price. The (usually short-term) decline in sales or reputation. Not the actual fraud or loss resulting from the theft of data. The cost of breach notification and remediation may exceed the actual damages you are trying to remediate by a factor of 10 or 20. Does that make sense?
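To see how lopsided that gets, run some back-of-the-envelope numbers. Every figure below is an illustrative assumption of mine, not drawn from any actual incident, but the shape of the result is the point:

    # Back-of-the-envelope comparison: direct fraud loss vs. response cost.
    # All figures are illustrative assumptions, not data from a real breach.

    records = 250_000           # credit card numbers exposed
    fraud_rate = 0.005          # fraction of cards actually misused
    avg_fraud_loss = 500        # average fraudulent charges per misused card

    direct_fraud = records * fraud_rate * avg_fraud_loss

    notification = records * 2          # letters and call center, ~$2/person
    credit_monitoring = records * 20    # ~$20/person for a year of monitoring
    forensics_and_legal = 2_000_000     # investigation, regulators, lawsuits
    card_reissue = records * 5          # reissue costs passed back as "fines"

    response_total = (notification + credit_monitoring
                      + forensics_and_legal + card_reissue)

    print(f"direct fraud loss: ${direct_fraud:,.0f}")      # $625,000
    print(f"response cost:     ${response_total:,.0f}")    # $8,750,000
    print(f"ratio: {response_total / direct_fraud:.0f}x")  # 14x

With these made-up inputs, the response bill runs about 14 times the measurable fraud, squarely in the 10-to-20x range described above.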
Maybe. Certainly, if companies fail to account for all of the costs that result from a negligent failure to protect data entrusted to them, they have little incentive to prevent the data breach. By increasing the cost of the breach we put a dollar value on avoiding the breach, and therefore encourage either better prevention or better insurance — whichever is cheaper.
Loss of Focus
This emphasis in many ways draws attention and resources away from other security goals — supply chain security, critical infrastructure security, operational risk and the like. It forces companies to dedicate their resources, time and Board of Directors’ attention not to enterprise risk and security but to one thing: preventing a reportable breach. It focuses attention on things like name and PIN when more damage can occur from an attack on a payroll system or a transportation system. It skews the measurement of risk — which is what it is designed to do — but it also changes corporate priorities.
Back to Google — Breach vs. Vulnerability
Which brings us back to the Google Plus vulnerability. Remember when I asked what a consumer would do with the information that there had been a breach? Well, there wasn’t a breach — at least not in the traditional sense. Google (or Alphabet?) correctly noted that what it discovered was a vulnerability — something that COULD HAVE BEEN exploited, but with no evidence that it HAD been exploited. So it’s not a breach, and Google was likely within its rights (at least under a narrow interpretation of the law) not to disclose it.
In fact, imagine if every company where your data might reside were required to disclose every VULNERABILITY, which could lead to an exploit, which could lead to a breach, which could lead to an unauthorized acquisition, which could lead to harm or damage to you. Not only would your mailbox be flooded with vulnerability disclosures, but broad dissemination of the nature of these vulnerabilities would, at least in the short term, make both the entities and the net itself less secure. If the disclosures are mere pablum (“we have found a vulnerability, which if exploited might have resulted in some disclosure of certain information about you…”), they are mostly meaningless. If they are detailed (“we have found a particular SSL vulnerability on a particular port that would allow a user to obtain access to a particular IP address in the following manner… and we have not yet fixed it”), well, that’s just not good. So what you would expect is more pablum for the public, but hopefully detailed vulnerability sharing with ISACs, CERTs and other fora.
But here’s where Google may have gone wrong — and it’s hard to criticize them, because they really may have been following the law if there’s no “breach.” Remember when I asked “what would I do” if I were a Google Plus customer and was notified of the vulnerability? Well, the one thing I could do — and might do — is simply stop using Google Plus. O.K., I admit it: though I signed up for Google Plus, I never actually used it. But if I HAD used it, I could stop, and maybe delete my data to put it less at risk (assuming I could do that — ha!). If the vulnerability is significant enough, and not fully addressed with an appropriate patch or compensating control, then my option is to leave. And that’s one of the things Google and other providers worry about.
The Future
We need to do a few things. First, let’s stop using breach disclosure laws to punish, out or embarrass those who, in many cases, are themselves the victims of crimes (even as they may have failed to protect their own customers). At the same time, let’s value privacy itself rather than only valuing the economic loss directly attributable to the misuse of personal data. The real concern is not about privacy-related personal data being STOLEN. It’s about that data being collected for the wrong reason, and then used improperly. That’s the focus of the GDPR itself, NOT just “breach” prevention. It’s about privacy protection. Breach disclosure just scares companies into doing whatever they can to prevent a breach. Privacy protection goes well beyond that. That’s one reason Congress is considering more comprehensive privacy regulation (with some breach notification thrown in).
If you want to argue with me, look for me on Google Plus. Not saying you will find me, but you can look.
Mark Rasch is an attorney and author of computer security, Internet law, and electronic privacy-related articles. He created the Computer Crime Unit at the United States Department of Justice, where he led efforts aimed at investigating and prosecuting cyber, high-technology, and white-collar crime.