Inevitably, after a major data breach, when a company discloses the fact of the breach, security professionals question the timing of the disclosure. “Why did you wait so long to make a disclosure?” is the outcry! Sometimes, as in the case of Uber, which delayed notification for almost a year, the criticism is warranted. But even when the delay is relatively short – days or weeks – there is an outcry about the “delayed disclosure.”

Sometimes you shouldn’t disclose a data breach at all. Ever. In fact, sometimes, disclosing a data breach can open up the company, its directors and officers, and its CISO to liability.

Handling a data breach is fraught with peril. Indeed, there is no “right” way to do it. Most of the time you are picking among the least wrong ways. But is it even necessary or right or helpful to make a data breach disclosure at all?

Since the California legislature passed SB 1386 more than a decade ago, much of the focus on data protection (and the response to data breaches) has centered on data breach disclosures. More than any other “security event,” a disclosable data breach is likely to lead to PCI “fines” (technically contract remedies), FTC enforcement actions, class action lawsuits, damage to corporate reputation, and, most significantly, termination of the CISO or other staff. (“Gentlemen. We have to keep our phony baloney jobs!!”)

Companies are understandably reluctant to disclose a data breach. In fact, the antipathy toward disclosure is now seen as a huge driver for information security itself. We protect data not so much because it’s the right thing to do, or because our customers expect it, or even because regulations or standards demand it. We protect data because of the unseemly consequences of failing to protect it. With the adoption of data breach disclosure requirements in the upcoming EU General Data Protection Regulation (GDPR), we now treat data breach disclosure as the reason we have security laws at all.

The truth is almost the exact opposite

Data breach disclosure laws were actually intended to assist companies that suffered data breaches in assessing and mitigating the harm resulting from the data breach. This philosophy is reflected in the almost bizarrely worded definitions of “personal information” contained in the data breach disclosure laws themselves. For example, according to California’s data breach disclosure law, “Personal information” is defined to include an individual’s first name or first initial and last name in combination with any one or more of the following data elements:

  • social security number;
  • driver’s license number or California identification card number;
  • account number, credit or debit card number, in combination with any required security code, access code, or password that would permit access to an individual’s financial account;
  • medical information;
  • health insurance information; and
  • information collected through an automated license plate recognition system.

At first blush, this seems like a rational definition. But look more closely. If a hacker obtains a list of social security numbers, or a dump of medical records or health insurance information, that data is technically not reportable UNLESS it also includes the individual’s last name AND first name or initial, no matter how trivial it is to reconstruct that information. So if your breach exposes my medical records, and my home address, and my last name, and my social security number, and my driver’s license number, but NOT my first name or initial, then under the strict wording of the statute, no breach notification is required.
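To make that statutory trigger concrete, here is a minimal sketch in Python (emphatically not legal advice) of the reportability logic described above. The BreachedRecord structure, the field names, and the element labels are all hypothetical illustrations; the statute itself defines no schema:

    # A rough sketch -- not legal advice -- of the notification trigger
    # described above. Field names and schema are hypothetical; the
    # statute defines no such structure.
    from dataclasses import dataclass, field

    # Data elements enumerated in the California statute (hypothetical labels).
    LISTED_ELEMENTS = {
        "ssn",
        "drivers_license_or_ca_id",
        "financial_account_with_access_code",
        "medical_information",
        "health_insurance_information",
        "alpr_information",
    }

    @dataclass
    class BreachedRecord:
        last_name: str | None = None
        first_name_or_initial: str | None = None
        elements: set[str] = field(default_factory=set)

    def is_reportable(record: BreachedRecord) -> bool:
        # "Personal information" = (first name or initial AND last name)
        # in combination with one or more listed data elements.
        has_name = bool(record.last_name and record.first_name_or_initial)
        has_listed_element = bool(record.elements & LISTED_ELEMENTS)
        return has_name and has_listed_element

    # The anomaly from the text: SSN, medical records, driver's license,
    # even a last name -- but no first name or initial, so no notice.
    record = BreachedRecord(
        last_name="Doe",
        elements={"ssn", "medical_information", "drivers_license_or_ca_id"},
    )
    assert is_reportable(record) is False

Run against that record, the check returns False: every sensitive element is present, yet the statute’s literal wording triggers no notification.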

Bizarre

Or not so bizarre when you consider the purpose of the statute. When IDENTIFIABLE personal information is breached, someone has to do something to mitigate the harm resulting from the breach. In the movie Apollo 13, when Gene Kranz is told that the capsule is coming in shallow and is asked whether they should inform the astronauts, Kranz asks if there is anything the astronauts can do about it. When told “no,” he replies, “then they don’t need to know.”

Data breaches are a lot like that. We inform customers so they can do something about the breach—maybe cancel their credit cards, maybe look over their statements more carefully, maybe place a fraud alert or credit freeze on their credit files. We do this to keep them from being harmed by our breach, and therefore to limit our damages and to enlist the consumers’ help in mitigating them.

As the nature and character of the data subject to breach changes from just credit card numbers to intimate personal data, our desire to know about breaches may increase, but our ability to do anything to mitigate the harm resulting from the breach decreases.

That’s why the definition of “personal information” in these data breach laws is so, well, bizarre.

Assume you are managing a data breach for a company, and the breach involves the kind of data I wrote about earlier: really sensitive data like medical records or social security numbers, but NOT the data subjects’ first name (or first initial) and last name. Do you disclose the “breach” or not?

No good deed goes unpunished

Obviously, you try to do the “right” thing. That is, the right thing both for the company you work for and for its customers. You know that making a disclosure when you shouldn’t (or when you don’t have to) may cost the company tens of millions of dollars in investigative, litigation, compliance, reputational, and other costs, sometimes with little benefit to the customers.

On the other hand, failing to make a disclosure where one is required can invite FTC enforcement actions, consumer protection litigation, fines, and even more bad publicity. But what about failing to make a disclosure where the law suggests, but does not actually mandate, disclosure? What’s the right thing to do there?

So, here’s the problem. If you fail to disclose a breach when you are required to, you may suffer fines or administrative action. If you fail to disclose a breach when you are not required to, but should do so to protect your customers, you will lose consumer confidence and may actually suffer greater harm.

But if you make a disclosure when you are not required to, you may also suffer liability 

We recognize that breach disclosures are expensive. There’s the cost of the disclosure itself (mailings, email blasts, public relations, etc.). There’s the cost of the breach investigation, including lawyers, forensic teams, and investigators. There’s the cost of mitigation, such as credit monitoring. And there’s the cost of fines and reputational damage as well. The average cost of a breach disclosure runs from $4 million to $7 million, depending on which online figures you believe.

So, if you disclose a breach that you are not required to disclose, you may have just caused your company to pay $7 million unnecessarily. And that unnecessary expenditure may lead to what is called a shareholder derivative lawsuit, in which shareholders sue a company’s managers (or directors) for fraud, deceit, or, in this case, mismanagement of corporate assets; here, the assets would be both the money spent on the breach notification and the information asset itself. Disclosing a breach that you aren’t required to disclose (or disclosing prematurely) may actually expose the company to this liability.

And this is why breach management is so hard. There’s a tiny window between disclosing too soon and disclosing too late. Between disclosing too much and disclosing too little. Between doing too little to mitigate and doing too much. And every decision will be second-guessed by management, investors, analysts, regulators, plaintiffs’ lawyers, and journalists.

The goal is to find the sweet spot. You know you have done so if, after the incident, you don’t find yourself updating your resume. At least not right away.