This week the White House felt the need to formalize statements the President has made on responsible disclosure. They did so through a blog post penned by Michael Daniel, Special Assistant to the President and Cybersecurity Coordinator.

Daniel acknowledges the issue, highlighted in part by the Electronic Frontier Foundation's insinuation that the Heartbleed bug may have been known and used by the NSA:

“While we had no prior knowledge of the existence of Heartbleed, this case has re-ignited debate about whether the federal government should ever withhold knowledge of a computer vulnerability from the public.”

He goes on to spell out the quite logical and, I would say, reasonable criteria that the White House, and presumably by extension the US Government, will use in the future to determine whether a discovered vulnerability will be disclosed. These criteria include the answers to:

  •     How much is the vulnerable system used in the core Internet infrastructure, in other critical infrastructure systems, in the U.S. economy, and/or in national security systems?
  •     Does the vulnerability, if left unpatched, impose significant risk?
  •     How much harm could an adversary nation or criminal group do with knowledge of this vulnerability?
  •     How likely is it that we would know if someone else was exploiting it?
  •     How badly do we need the intelligence we think we can get from exploiting the vulnerability?
  •     Are there other ways we can get it?
  •     Could we utilize the vulnerability for a short period of time before we disclose it?
  •     How likely is it that someone else will discover the vulnerability?
  •     Can the vulnerability be patched or otherwise mitigated?

There are several problems with a blog-post-as-policy.

The first problem is evoked by the famous quote from Friedrich Nietzsche:

“I’m not upset that you lied to me, I’m upset that from now on I can’t believe you.”

Words will not suffice to repair the loss of trust in the US government and the intelligence community caused by the vast surveillance state they have created through some rather draconian twisting of law and language. Action must be taken.

An appropriate action would be to start revealing a few zero-day vulnerabilities. As a start, they could reveal the vulnerabilities used to exploit Juniper, Cisco, and Huawei firewalls and routers with the BANANAGLEE and ZESTYLEAK rootkits, as well as the vulnerabilities that the DEITYBOUNCE exploit kit uses on Dell servers and IRONCHEF uses on HP ProLiant servers.

In an interview with Michael Hayden, I asked about the criteria the NSA used, at least in his day, for determining whether or not to disclose a vulnerability. He described NOBUS (Nobody But Us), a standard that certainly did not include the other criteria itemized by Mr. Daniel:

“In cryptology, both offense and defense revolve around the concept of vulnerability. When vulnerability is discovered, the stark choice is to exploit it (providing “security” by penetrating an otherwise inaccessible target) or to patch it (providing “security” in a more direct and traditional way). NSA is responsible for both (and operationally that is a VERY good idea). The SIGINT division plays offense; the Information Assurance division plays defense. And in making a decision which way to play, a very powerful consideration is always who else has knowledge of, or the ability to exploit, the weakness. Some vulnerabilities are such that they marginally (but importantly) weaken a system but exploitation still depended on skills, systems and technologies that few, if any, can match. If the judgment is what is called NOBUS (nobody but us could do this), the risk management decision is pretty easy. Of course, that judgment could change over time and still requires continuous due diligence.”

Which brings up the other problem with his blog post: would the Cybersecurity Coordinator even have a chance to apply his questions to a vulnerability discovered by the NSA? Such a vulnerability would be classified, and exposing it to White House review could destroy any value it has against targets if it were leaked or stolen. What procedures are being proposed for vetting vulnerabilities against these questions?

One final issue is the well-known NSA program of purchasing exploits or contracting out their development to defense contractors. If the NSA is encouraging the creation of exploits, and paying good money for them, how does that jibe with transparency and responsible disclosure? Does the NSA's Information Assurance Directorate have similar programs in place to preempt the Signals Intelligence Directorate?

Mr. Daniel has described the issues surrounding vulnerability discovery and the US Government; now it is time for the intelligence and policy community to act.
