A new vulnerability in the Android OS points to problems not only with hardware and software generally, but with our bug bounty and patch management systems for mass-market consumer products – you know, exactly the kinds of products that most need a patch management program.
The fact that the critical vulnerability was discovered and reported in April (though no resulting exploit has yet been publicly acknowledged) while patches and publicity are arriving in late July demonstrates how the process of vulnerability discovery, validation, remediation and awareness needs fixing. To a great extent, we have too many players, too much data and not enough knowledge.
In the example of the new “Stagefright” vulnerability, in addition to the security researchers who discovered the vulnerability, the players include the OS developer (Google), those who have modified the OS (typically the handset manufacturers), those who have sold the devices to the consumer or enterprise (retail outlets, handset manufacturers or wireless providers), those who “own” the handsets (corporate enterprises, individual consumers, wireless providers or device manufacturers for subsidized, leased or installment purchases), those who own the network (wireless providers for the wireless devices, corporate enterprises for data networks), and of course, the individual consumer, whether they use the device for personal purposes, for business purposes (e.g., BYOD) or on a company-provided device.
Couple this diffused responsibility with the fact that (A) no developer wants to admit that the vulnerability exists (not that they won’t do it, they just don’t want to); (B) there is at least as much harm caused by overestimating the severity of a vulnerability as underestimating it; (C) installing untested and unvalidated patches can cause more harm than the vulnerability they seek to repair; and (D) while no party wants the RESPONSIBILITY to patch, each party wants CONTROL over the patch process.
Providers don’t want to take responsibility for pushing untested and unvalidated patches. Taking all these factors into consideration, maybe it isn’t so surprising that the vulnerability was discovered in April and the patch is only coming out now.
Enterprises, whose employees may be impacted by mobile vulnerabilities, need to work with developers and mobile providers not only to be advised about new vulnerabilities and patches, but also to push these out over the air to their employees, and most importantly, to validate that the patches have been installed.
The Stagefright Vulnerability
If you haven’t been paying attention, the new Stagefright vulnerability exploits features in the Android OS that allow video files to be pre-processed on delivery to an MMS user of certain texting applications and, to a lesser extent, the native Android MMS application. MMS – or Multimedia Messaging Service – is what allows users to text each other pictures, videos, files, driving directions, contacts and stuff other than plain text.
Without going into too much detail, the vulnerability takes advantage of the fact that portions of the MMS message are processed natively by the Android OS before the user even clicks on or executes anything. Now this isn’t really a “bug” or an inherent defect, but it’s a feature that can be used for good or for evil. Any time code can run without user input, there’s a potential for abuse. Any time code can be run WITH user input, there’s a potential for abuse. Let’s face it, there’s a potential for abuse.
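The underlying bug class is a familiar one for media parsers: trusting a size field that the attacker controls. The real libstagefright flaws were integer overflows in C++ code parsing MP4 files; the toy Python parser below is only a hedged illustration of the pattern (the chunk format and function names are invented for this sketch), contrasting a parser that trusts the attacker’s length field with one that validates it against the bytes actually present.

```python
import struct

def parse_chunks_naive(data: bytes):
    """Toy media parser that trusts the attacker-controlled size field.
    In C, `offset + 4 + size` can wrap around 2**32, so a huge `size`
    slips past a naive bounds check and the copy runs out of bounds."""
    offset, chunks = 0, []
    while offset + 4 <= len(data):
        (size,) = struct.unpack_from(">I", data, offset)  # 4-byte big-endian length
        chunks.append(data[offset + 4 : offset + 4 + size])
        offset += 4 + size
    return chunks

def parse_chunks_checked(data: bytes):
    """Same format, but the size field is validated before use."""
    offset, chunks = 0, []
    while offset + 4 <= len(data):
        (size,) = struct.unpack_from(">I", data, offset)
        remaining = len(data) - offset - 4
        if size > remaining:
            raise ValueError(f"chunk claims {size} bytes, only {remaining} remain")
        chunks.append(data[offset + 4 : offset + 4 + size])
        offset += 4 + size
    return chunks

# A crafted file lies about its own length: the checked parser rejects it,
# while the naive one silently swallows it (and in C would corrupt memory).
crafted = struct.pack(">I", 0xFFFFFFFF) + b"xx"
print(parse_chunks_naive(crafted))  # → [b'xx']
```

The point of the sketch is that the dangerous work happens before any user decision: the parser runs the moment the bytes arrive, which is exactly why pre-processing of untrusted MMS content is so attractive to attackers.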
A weaponized version of Stagefright could then launch malware onto the phone and, well, do just about anything, because that’s what malware does. It could capture most email (not sure if it could jump into a virtualized session like Good or other apps, but it might be able to do screen captures), could take over the microphone and camera, transmit GPS or other data, and otherwise be a big mess.
It’s exactly the kind of vulnerability that hackers are looking for. Easy to execute, ubiquitous, undiscovered and gives the hacker total control.
Fortunately, it’s exactly the kind of vulnerability that security researchers are looking for too. In fact, while we know of no examples of Stagefright in the wild (not that we necessarily WOULD know about it), security researchers discovered the vulnerability in April, developed an exploit, developed a patch, and sent the whole kit and caboodle to Google, the developer of the Android OS.
And that’s where things get hairy.
Mistrust in the Research Community
Security researchers have a testy relationship with app developers, OS developers, product manufacturers, software developers and others. First, there is the whole “who died and made you God” problem. What gives the researcher the right to try to hack my product/service/network?
If someone came to your house and said, “oh, while trying to break into your house, I discovered your third floor attic window had a deficient lock,” would you pin a medal on their chest or call the police? While we have white hat hackers (hired by the company to test the product) and black hat hackers (breaking in to exploit and steal) the vast majority fit within 50 shades of grey.
Legitimate security researchers conduct legitimate research without the knowledge or consent of the software developer, but with the intent and desire to make the world more secure. Oh, and to get credit for it. Oh, and to maybe possibly make a few bucks off it. And possibly even to get a job, or a reference. And some resume fodder. It’s a continuum.
So the developer naturally sees the security researcher as an enemy. Misguided at best, venal at worst. How DARE you hack MY code. Then there are the Kübler-Ross five stages of hack response (https://www.youtube.com/watch?v=0D6_msdM8rU): Denial (my code is bulletproof), Anger (how dare you hack my code), Bargaining (please, please don’t tell anyone about this), Depression (if people find out about this, they’ll stop buying the product and, more importantly, I’ll lose my job) and Acceptance (hmm… looks like this IS a vulnerability). The final stages after acceptance are to claim that you knew about the vulnerability all along, and then take credit for the fix.
In the case of Stagefright, this didn’t happen. Google validated the vulnerability and the patch, gave the researchers credit, and pushed out the patch.
And that’s the other problem with vulnerability and patch management. It’s Google’s OS on a Samsung device over a wireless provider’s network in an end user’s pocket. Who has the ability and obligation to fix, and who pays?
The same is true for all kinds of other patches. In a retail environment, the POS terminal is developed by company x with software by company y, delivered by company z maintained by vendor a, with the help of consultant b, and used by franchisor c, a franchisee of company d.
The POS terminal feeds into a CRM database developed by company e but maintained by company f using the same passwords as the POS terminal at the advice of consultant g.
Complexity is the enemy of security. And “shared responsibility” all too often means no accountability.
So with the Stagefright vulnerability, Google could validate the vulnerability and validate (or develop) a patch, and push it out to the hardware manufacturers. Now while there are many “flavors” of Android from Cupcake and Donut to KitKat and Lollipop, every handset manufacturer (frequently in cooperation with the wireless providers) modifies the OS to suit its needs. That’s what makes a Droid phone different from an LG G4 or Samsung Galaxy. These variations themselves may expose new vulnerabilities, too.
So even “ownership” of the OS is distributed, with various developers owning various versions, flavors or modifications.
So who has the (1) ability, (2) responsibility, (3) liability, and (4) cost assumption for actually patching your Android phone? Could be Google. They can validate the patch, but they don’t have access to your phone. They would have to push out the patch to your phone as an Over The Air (OTA) update.
This means that the user would have to either accept automatic updates or agree to install the patch. That’s the easiest way to manage patches for the unmodified versions of Jellybean or KitKat or Lollipop, but for the modified versions, the hardware developers might have to both validate and push out the patches to individual users.
And that’s another problem. Imagine an OTA update on 1,675,450,000 phones with an additional 355,000 being activated every day. The wireless providers would bear the bulk of the costs associated with pushing out these updates (assuming the users didn’t always update on WiFi) and for a critical and highly public vulnerability like Stagefright, it may involve hundreds of millions of simultaneous OTA updates.
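Even a back-of-the-envelope calculation shows the scale of the problem. Using the device counts above and an assumed (purely illustrative) 20 MB delta patch – real OTA update sizes vary widely – a full-fleet push moves tens of petabytes:

```python
# Rough cost of a global OTA push, using the article's device counts
# and an assumed 20 MB delta patch (illustrative, not a real figure).
PHONES = 1_675_450_000        # installed base from the article
DAILY_ACTIVATIONS = 355_000   # new devices per day from the article
PATCH_MB = 20                 # assumed average OTA delta size

total_pb = PHONES * PATCH_MB / 1_000_000_000       # MB -> PB (decimal)
daily_tb = DAILY_ACTIVATIONS * PATCH_MB / 1_000_000  # MB -> TB (decimal)

print(f"full fleet: {total_pb:.1f} PB of patch traffic")   # ~33.5 PB
print(f"ongoing: {daily_tb:.1f} TB/day for new activations")  # ~7.1 TB/day
```

Even if the per-patch assumption is off by a factor of two in either direction, the answer stays in the petabyte range – which is why who pays for delivery is not an academic question.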
So who pays for that? And who pays if the update “bricks” the phone, or interferes with functionality, destroys a critical app, releases other data, or creates some other hitherto undiscovered vulnerability or data leakage?
For enterprise operators – companies that deploy thousands of mobile devices for their employees – there’s a shared responsibility between the OS developer, the wireless provider, the device manufacturer, the enterprise and the employee.
The enterprise operator wants not only to push out a TRUSTED patch OTA via the wireless provider, but also to validate that the patch has been installed; and it wants to do this in a way that does not encourage users to willy-nilly install untested or unvalidated things onto mobile devices, and in a way that doesn’t turn the patch management process itself into a vector for social engineering or remote malware distribution.
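The validation half of that job can start as a simple fleet audit. A minimal sketch, assuming a hypothetical MDM inventory that records the patch-level date each device reports (newer Android releases expose this as the `ro.build.version.security_patch` property; older devices may report nothing at all, which should itself be treated as a finding):

```python
from datetime import date

# Hypothetical MDM inventory: device id -> reported security patch level
# (YYYY-MM-DD), or None if the device never reported one.
fleet = {
    "dev-001": "2015-09-01",
    "dev-002": "2015-08-01",
    "dev-003": None,
}

def unpatched(fleet, minimum: str):
    """Return device ids whose patch level is missing or older than `minimum`."""
    floor = date.fromisoformat(minimum)
    return [
        dev for dev, level in fleet.items()
        if level is None or date.fromisoformat(level) < floor
    ]

print(unpatched(fleet, "2015-09-01"))  # → ['dev-002', 'dev-003']
```

The design choice worth noting is that a missing patch level is flagged rather than ignored – for validation purposes, a device that can’t prove it’s patched isn’t patched.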
In general, the party most responsible for the “defect” (and I use the term defect in quotes because not every exploit results from a defect) bears the cost of repairing that defect, and the party most able to prevent the harm bears the obligation to do so. So the party most responsible for the vulnerability is probably Google, but the party best able to fix it (or install the fix) is the user. And the party that bears the cost to the network is the wireless provider.
The easiest and best solution is to not have buggy code. Plan A. But that’s easier said than done. Because bugs are in the eye of the beholder. There’s a certain functionality to allowing pre-processing of MMS messages.
So plan B is to test code. And then retest it. And then again before it is launched. After it is launched, and even after obsolescence. Remember, the cost of a patch is infinitesimal compared to the cost of an exploit, and the cost of an exploit is infinitesimal compared to the cost of a breach. Especially for a vulnerability like Stagefright, which can take over the device’s entire functionality.
Plan C – after you test it, have others test it — independent researchers, third parties, and hackers. Your second cousin, twice removed. Have a robust bug bounty program that provides proper incentives for hackers to tell YOU about the bugs, rather than just exploiting or selling them.
Plan D – monitor hacker boards to find out if you have been PWNED. Just in case your bug bounty program isn’t lucrative enough.
Plan E – Do something. When you find a vulnerability, um… FIX it. Well, where appropriate. Validate it, evaluate it, and fix it. Sooner is better than later.
And finally, help your customers help you. Even with all of the publicity and FUD surrounding the Stagefright vulnerability, it is still estimated that only about 50% of users (and even that estimate is likely high) will actually apply the patches. But don’t panic. Not yet. Verizon’s 2015 Data Breach Investigations Report (DBIR) showed that only a tiny fraction of one percent of all hacks involved mobile devices (including mobile phones and tablets). Prioritize risk and vulnerability. Don’t react out of fear, but don’t fear to react.
Stagefright is real. And it’s frightening. But don’t let it keep you from the stage. Just step up and do the right thing. And work with partners to figure out what that is and how to get these patches out… and validated and installed.
Mark Rasch is chief security evangelist for Verizon Enterprise Solutions where he is responsible for advancing the company’s global security solutions position. Mark is an attorney and author of computer security, Internet law, and electronic privacy-related articles. Formerly, Mark developed the Computer Crime Unit at the United States Department of Justice, where he led efforts aimed at investigating and prosecuting cyber, high-technology, and white-collar crime.