All of our critical computing devices have a Jar Jar Binks processor. That is, they have little common sense, they’re easy to manipulate, and they’re unable to distinguish good from bad.
Just like the floppy-eared version in Star Wars, Jar Jar Binks processors are well-meaning but incredibly easy to trick. This is exactly what allows cyber attackers to manipulate software of all kinds—even security software—to wreak havoc on their targets. And unfortunately for all of us, cloners are making billions and billions more of these processors each year.
It’s obvious to almost anyone who reads the news that something needs to be done about cybersecurity. But what a lot of people forget is that security wasn’t a technological focus until very recently.
For many decades, we have been on an unbelievable tear of silicon advancement to make smaller, faster, cheaper products. And because connectedness only recently permeated our computing culture, security was largely ignored during the silicon boom.
Even when security did start to become an issue, firewalls and virus scanning were initially enough to do the trick. It wasn’t until the infamous Stuxnet attack, uncovered in 2010, a cyber offensive that destroyed 2,000 Iranian centrifuges, that we woke up to the possibility of destroying physical equipment half a world away.[1]
Just like that, we were off to the security races. But instead of stepping back and thinking about how we might best protect our technology, we panicked and ran with what we had. A bunch of people tried to write good software to fight bad software. And now, because that approach has become the status quo, we have to constantly find and patch all the bugs and vulnerabilities in our tangled webs of defense.
The reality is that we cannot eliminate bugs from software; it’s a human-driven process and, last time we checked, human perfection isn’t really attainable. Yet we keep trying to protect our systems with more and more software — all of which has bugs.
Attackers are onto this, and they often set out to attack the defense software itself! Meanwhile, the hacking business is booming. Attackers see lots of money being made through cyber attacks (especially ransomware), so more and more are getting into the game.[2]
We are losing the battle.
Luckily, it’s not all gloom and doom. As an industry, we’re slowly waking up to the fact that security at the software level is not enough. We’re recognizing that we have to do something that accounts for the fact that there will always be bugs and attacks. And we know that we must protect our systems from subversion. But…what to do?
It’s simple, really. We have to start building our security at the processor level.
First, let’s clear up an industry misnomer. When a vendor says they have a “secure processor,” what they mean today is that they have added encryption, and maybe key management, to a standard processor. That is really “communication security”: it helps ensure that any data going to and from a device is encrypted, which makes data theft or exfiltration very difficult for an attacker. Encrypting communication is good, and in some situations vital. But it doesn’t really warrant a “secure processor” label.
What we need to do is build real security at the processor level. Bona fide, comprehensive, unsubvertible, hardwired-in-the-device security.
We know attackers are finding and exploiting bugs in programs and then tricking the processor into executing injected instructions. Since the processor has no idea how to distinguish between good and bad instructions, it can’t enforce simple security rules.
So, we need a way to provide more knowledge to the processor about what is going on and what was intended by the programmer. If we can provide the right kind of information so that the processor knows the rules, then it can enforce those rules and thwart the bad guy.
Take a common example: 80% of cyber attacks involve hackers overrunning buffers to inject malicious instructions.[3] These buffer overflow attacks blindside today’s processors in two ways. One, the processor is unable to differentiate between intended and unintended inputs, and two, it is unaware of when and where a given buffer should end. This makes the processor powerless against the attack.
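To make that concrete, here is a minimal, deliberately unsafe C sketch of the classic pattern: copying attacker-controlled input into a fixed-size stack buffer with no length check. The handle_request function and the 64-byte buffer are invented for illustration; the unchecked strcpy is the point.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical request handler: the 64-byte buffer is an assumption
 * made for illustration; any fixed-size buffer behaves the same way. */
void handle_request(const char *input)
{
    char buffer[64];

    /* BUG: strcpy() copies until it finds a NUL terminator and has no
     * idea how big 'buffer' is. Input longer than 63 bytes overruns the
     * buffer and overwrites adjacent stack memory, including the saved
     * return address -- which is how attackers redirect execution. */
    strcpy(buffer, input);

    printf("handled: %s\n", buffer);
}

int main(int argc, char **argv)
{
    if (argc > 1) {
        handle_request(argv[1]);
    }
    return 0;
}
```

Compile this and feed it a string longer than 63 bytes, and it writes right past the end of the buffer into neighboring stack memory. Today’s processors execute every one of those writes without complaint, because nothing tells them where the buffer was supposed to end.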
But a truly secure processor with a built-in hardware security mechanism can block the execution of any unintended input and prohibit the overflow of buffers.
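To illustrate the general idea (and only the idea; this is a toy software model, not any vendor’s actual hardware design), the sketch below tags each word of memory with the buffer it belongs to and refuses any write that lands outside the buffer the programmer intended. All names and sizes here are invented.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy model of metadata-checked writes. Each memory word carries a tag
 * naming the buffer it belongs to; the checker refuses any write whose
 * destination tag doesn't match the buffer the instruction was meant
 * to touch -- a stand-in for "the programmer's intent". */

#define MEM_WORDS 32

static uint8_t memory[MEM_WORDS];  /* simulated data memory     */
static int     tag[MEM_WORDS];     /* metadata: owning buffer id */

/* "Allocate" a buffer: words [start, start+len) get this buffer's id. */
static void tag_buffer(int id, int start, int len)
{
    for (int i = start; i < start + len; i++)
        tag[i] = id;
}

/* Allow a write only if the destination word belongs to the declared
 * buffer; otherwise block it, as hardware would by raising a fault. */
static bool checked_write(int buffer_id, int addr, uint8_t value)
{
    if (addr < 0 || addr >= MEM_WORDS || tag[addr] != buffer_id) {
        printf("policy violation: write to word %d blocked\n", addr);
        return false;
    }
    memory[addr] = value;
    return true;
}

int main(void)
{
    tag_buffer(/*id=*/1, /*start=*/0, /*len=*/8);  /* an 8-byte buffer */

    /* Copy a 12-byte "input" into the 8-byte buffer: the first 8 writes
     * succeed, and the four overflowing writes are refused instead of
     * silently clobbering whatever lives next to the buffer. */
    for (int i = 0; i < 12; i++)
        checked_write(1, i, (uint8_t)('A' + i));

    return 0;
}
```

In real silicon the equivalent check would run alongside each instruction rather than in software, which is what keeps it out of reach of the very code it is protecting.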
Essentially, the processor can become Obi-Wan Kenobi.
“So we need to change all the world’s processors?”
Well, no. Changing mass-market processors is a non-starter. Can you imagine the herculean effort it would take to replace every single one? Impossible.
But what we can do is enable processor makers to add silicon-based security designs to their existing processors. This approach enables existing processors to know a programmer’s intent, and to stop bad instructions before they cause any damage.
The tricks that ransomware like WannaCry plays on processors would fail, disks full of precious data would not get locked up, and the attackers would not make billions of dollars.
It’s time to revamp our Jar Jar Binks processors, as well-meaning as they may be. It’s time to transform them into Jedi Knights.
Jothy Rosenberg is the Founder & CEO of Dover Microsystems and a fan of the good guys in Star Wars. After seven years of researching and developing processor-based security, he founded Dover, his ninth startup.
[1] Source: WIRED (https://www.wired.com/2011/07/how-digital-detectives-deciphered-stuxnet)
[2] Source: Fortune (http://fortune.com/2015/05/01/how-cyber-attacks-became-more-profitable-than-the-drug-trade/)
[3] Source: MITRE (https://cwe.mitre.org/)