Cybersecurity is NOT about security technology. It's not about compliance with regulations. At the end of the day, cybersecurity is about identifying and managing risks. Risks associated with the use and misuse of technology. Risks associated with failing to protect data. Risks associated with doing too little. And risks associated with doing too much. Sometimes the appropriate thing to do is to accept the risk and plow forward. The goal of cybersecurity, when done right, is not to prevent all unauthorized use of and access to data. It's not simply to prevent a data breach. It's to take reasonable steps to reduce the risk of a data breach to an acceptable level, considering the benefits (or necessity) of collecting, storing and processing the data, and any laws, policies or contractual obligations regarding that data.

Recently, Texas Lt. Governor Dan Patrick appropriately drew heat for suggesting that, in the interest of the economy, maybe we should just let old people die. He didn't say it in so many words, but that was the implication. Similarly, the President has indicated that he might go against the expert opinions of the CDC, NIH and others and relax exclusion and quarantine rules because he didn't want "the cure to be worse than the disease." Is it worth ruining the economy for 350 million Americans to prevent the deaths of maybe 2 million people? Maybe 3 million?

If the preceding paragraph seems cold-hearted, callous, uncaring and dispassionate, it's because it is. Yet we make such calculations, implicitly and explicitly, all the time. We know that there is a risk of death and injury associated with driving to work or going to the movies, but we also know that the utility of doing so far outweighs the risk. It's a calculation. We DO balance the value of human lives (often our own) against the utility or pleasure associated with some activity. We take risks. And that's fine, provided that we are taking risks with our own lives and not others', and provided that there's some meaningful way of evaluating and measuring risk. And that's something humans are really bad at doing.

How We Evaluate Risk

When humans evaluate risk, we bring to the table a host of innate biases. For example, if I told you that there was a 30% chance of rain in both Seattle and Phoenix, the odds that you would bring an umbrella are much greater for your trip to Seattle. We factor our own biases into the risk assessment. We discount risks that lie in the future. We overcount risks that we cannot see. And we discount risks we cannot measure.

Measurement is important in evaluating and incorporating risk. If I told you that some pleasurable activity doubled your risk of developing pancreatic cancer, would you continue to do it? It depends. What's the activity, and what's its utility to me? Saying the risk is "doubled" sounds terrible, but it may mean that the risk of developing the cancer went from 0.0001 to 0.0002. And the risk of developing the disease has to be measured against the risk of the consequences (including death) of developing the disease. We have to measure risk in units we can understand. We also discount future risks associated with current activities, and we overvalue risks that relate to a personal experience. When we drive past a part of the road where we once had an accident or got a speeding ticket, we instinctively slow down or grow wary. That's why tabletop exercises build institutional memory better than simple lectures or training videos.
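To make the "doubled risk" point concrete, here is a minimal sketch; the baseline and relative-risk figures are the hypothetical ones from the paragraph above, not real epidemiological data:

```python
# Hypothetical figures from the pancreatic cancer example above.
baseline_risk = 0.0001   # probability of developing the disease without the activity
relative_risk = 2.0      # the activity "doubles your risk"

absolute_risk = baseline_risk * relative_risk
added_risk = absolute_risk - baseline_risk

print(f"Risk rises from {baseline_risk:.4%} to {absolute_risk:.4%}")
print(f"Absolute increase: {added_risk:.4%} "
      f"(about 1 extra case per {round(1 / added_risk):,} people)")
```

A frightening relative number ("doubled!") corresponds here to one additional case per 10,000 people, which is exactly why the units of measurement matter.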

Suffice it to say that humans are terrible at assessing risk. Those who fear flying have no problem driving to the airport, typically the most dangerous part of flying. We have fears, emotions, experiences, biases and prejudices that skew how we perceive and react to risk. We are irrational and impulsive. In cybersecurity, this manifests itself in many ways. First, we establish a set of guidance and standards (rules and regulations) based upon a generally accepted methodology for identifying and mitigating risk. Long passwords. Changed frequently. Monitoring. Endpoint detection. Zero privileged access. "Best" practices. We measure performance based on "compliance" with these standards. Red. Yellow. Green. We substitute the risk of non-compliance for actual risk.

Instead, we need to look at risk at an enterprise and national level. What is my business? How do I use technology in that business? What are my critical data and my critical processes? What are the critical dependencies for this all to work? How would I cope or react if there were a successful attack? What is the likelihood of an attack? What is the likely impact of the attack on me? What can I reasonably do to mitigate the attack? What is the cost of such mitigation? What is the impact of the mitigation on my business? How much risk reduction do I achieve by doing this? How much risk do I accept by not doing it?

It's not a checklist. It's a process. As we change business models, add new technologies and change dependencies, we have to reevaluate risk. We are now moving toward 100% telework and remote work? Great. It changes the risk/reward proposition. It makes us more dependent on AVAILABILITY over confidentiality. Sure, confidentiality is still important, but availability is critical. HIPAA privacy compliance is still important (and we should not change the HIPAA standards), but remember that HIPAA requires reasonable steps to protect confidentiality while always prioritizing patient care and treatment. Always. It's not that the regulatory standards change during a crisis; it's that what is "reasonable" under those standards changes. In ordinary times, establishing an ICU in a Walmart parking lot is negligent. Now, NOT establishing an ICU in a Walmart parking lot is malpractice.

We need flexible standards for identifying and evaluating risk. Most companies that have a cyber "risk assessment" done really just get a weighted compliance checklist. Fix the red stuff first. Fix the yellow stuff second. Fix the green stuff when you get a chance. Oh, and then there's more red stuff. That's fine, but simplistic. Risk is not absolute. When a particular type of hacker activity targeting a particular industry, geographic area or technology increases, your risk may increase. You want to identify your probabilistic risk: risk = vulnerability × likelihood of exploit × likely impact of exploit. Then look at the reward associated with the "risky" activity, and the cost of mitigating or reducing the risk. Do what is reasonable.
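As a rough illustration of that formula, here is a minimal sketch in Python. The scenario, control, probabilities and dollar figures are all invented for the example, not drawn from any real assessment:

```python
# A minimal sketch of probabilistic risk scoring:
#   risk = vulnerability x likelihood of exploit x likely impact.
# All names and numbers below are hypothetical illustrations.

def annual_risk(vulnerability: float, likelihood: float, impact: float) -> float:
    """Expected annual loss: how exposed we are (0-1), how likely an
    exploit is this year (0-1), and the dollar impact if it happens."""
    return vulnerability * likelihood * impact

# Hypothetical scenario: an unpatched remote-access gateway.
baseline = annual_risk(vulnerability=0.8, likelihood=0.3, impact=2_000_000)

# Hypothetical mitigation: patching and monitoring cut vulnerability
# to 0.2, at a cost of $150,000 per year.
mitigated = annual_risk(vulnerability=0.2, likelihood=0.3, impact=2_000_000)
mitigation_cost = 150_000

risk_reduction = baseline - mitigated
print(f"Baseline expected loss:  ${baseline:,.0f}")
print(f"Mitigated expected loss: ${mitigated:,.0f}")
print(f"Risk reduction:          ${risk_reduction:,.0f} "
      f"for ${mitigation_cost:,} -> "
      f"{'reasonable' if risk_reduction > mitigation_cost else 'not worth it'}")
```

The point is not the precision of the numbers; it's that the reduction in expected loss can be weighed against the cost of achieving it, which is the "do what is reasonable" test in arithmetic form.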

Which brings us back to our Texas Lt. Governor. There are some legitimate risk assessment questions that can be asked here. Is the coronavirus riskier than the seasonal flu? Are the mitigation procedures more impactful than the risk we are mitigating? And even: how do we "value" each human life? As calculating as that seems, we do it every day in our own lives. We could buy a "safer" car that would marginally decrease our risk of death or injury, but we really like that 5-speed transmission, or convertible top, or big-ass SUV, or we simply can't afford the car with lane detection and automatic braking. We make trade-offs. Not always based on logic. But there's a huge difference between assessing risk and telling people that grandma's going to die so we can reopen Papa Charlies in Cypress, Texas.

At the end of the day, cybersecurity, like life, is not without risk. Do what is reasonable to mitigate that risk. For now, just sit on the couch. And try not to turn on the news.


Mark Rasch is an attorney and author of articles on computer security, Internet law and electronic privacy. He created the Computer Crime Unit at the United States Department of Justice, where he led efforts aimed at investigating and prosecuting cyber, high-technology and white-collar crime.