I recently read an article in Wired about problems with a particular airport security body scanner.
Apparently, semi-clever attackers can conceal guns, knives and explosives in a way that makes them virtually undetectable by the much-discussed Rapiscan detector previously deployed at thousands of airports, and currently deployed at courthouses and other places near you.
That a security technology might not work is hardly surprising – after all, it is the goal of the attacker to find a vulnerability, and virtually any technology can be defeated in the end. No, the problem here is hubris.
My colleague, Bruce Schneier, described much of what we do to prevent attacks – whether physical, virtual or logical – as “security theater.” That is, the goal of the security is NOT ultimately to prevent or even deter attacks.
Rather, the goal of much of security is to give the consuming public a false sense of security so that they continue to have (false) confidence in the infrastructures that underpin society. If people did not believe banks were secure, we would see a massive increase in mattress sales. If people thought that airplanes were not safe, they wouldn’t fly. It is the perception of security – not its reality – that governs.
Years ago (before 9/11) I was flying through Houston, Texas. I had bought my ticket that day, so my boarding pass bore the dreaded SS notation. Not the German Schutzstaffel, but almost as bad, “Special Screening.” After passing through the metal detector and handing my boarding pass to the (pre) TSA agent, the agent then proceeded to “wand” me – to scan my body and semi-private parts with a wand designed to look for metal – presumably weapons. Remember, I had just walked through the metal detector.
I asked the earnest agent whether it was possible for me to have had a weapon that could be detected by the wand but not by the metal detector, and he assured me that it could. In fact, he said that the metal detector was hardly reliable at all. Its main purpose was to assure the travelling public that SOMETHING – ANYTHING – was being done. Hardly inspiring confidence.
On another pre-9/11 trip, I was flying from DC’s National Airport to New York’s LaGuardia airport on one of the shuttles. At the time, the two shuttles took off from gates across the hall from each other, one on the hour, the other on the half hour, and they each honored the others’ tickets and boarding passes.
So if you got to the gate at 9:15, you would fly on the 9:30 shuttle; at 9:40, take the 10:00 AM. Very civilized, actually. My trip to NY was only a few hours, so I had no luggage. I mean NO luggage. No briefcase, no laptop, nothing. Just me. I step to the counter and get a boarding pass, and walk three feet to the screening.
The earnest (pre) TSA agent then proceeds to ask me, “Did you pack your bags yourself?” To which I truthfully reply, “I have no bags.” The agent was unprepared for this bit of truth. She then said that I had to answer the question, to which I again replied, “I just did.” Working off a script and applying no “common” sense, the agent then told me that I was required to answer the question “Yes or No.”
I politely asked which answer would get me on the plane. (Yes, I am a bit of a wise guy, but very polite.) She told me that she was not permitted to tell me that. So I said, “I have no bags. But… if I DID have bags, I WOULD have packed them myself, so the answer is YES.”
You know what happens next. She then earnestly and without a trace of irony asks, “and have they been with you the whole time?” I asked, “have WHAT been with me the whole time?” She said, “your bags.” I again said, “I have no bags.” Or in the words of Abbott and Costello, “third base!”
Anticipating her next move, I quickly added, “but, if I DID have bags, and if I DID bring them to the airport, and if I DID take them on the plane, I WOULD HAVE had them with me the whole time, so…yes.”
She seemed satisfied with this answer until the next question. “Has anybody unknown to you given you ANYTHING to take on this plane?” Looking down and seeing nothing in my hands but a boarding pass, I felt compelled to say “Yes. The only thing I am taking on this plane is this boarding pass which was just handed to me by that woman over there, a person [previously] unknown to me.” I thoroughly expected a visit to Dr. Happy Fingers, but the security agent let me pass.
Security is NOT a laughing matter. Bad security is worse. Bad and stupid security is worse still.
The Rapiscan case presents the trifecta of bad security. It’s expensive (money, training, time, maintenance, etc.), invasive (of privacy, time and data), and ineffective. Sounds perfect.
What’s worse, as with most security technologies, the owners and operators of the technology follow a predictable track in responding to revelations of vulnerabilities:
·       Deny
o   We don’t have a problem. The technology is fully tested, safe and secure. You’re lying.
· Deny Again
o Our technology doesn’t collect that kind of data. We respect privacy, and could NEVER collect that kind of data.
· Deny a third time
o Sure we collect that data. But everyone KNOWS it. And they shouldn’t have an expectation of privacy in that data.
· Chicken Little
o If we didn’t do this, the sky would fall. We HAVE to do what we are doing the way we are doing it to prevent [indicate disaster here]
o The vulnerability you described could NEVER be exploited.
o   You can’t tell anyone about the vulnerability or the exploit. This would cause severe harm. (Sometimes accompanied by a letter or court order mandating secrecy.)
· Lawyer Up
o   In order for you to have found the vulnerability, you had to (A) violate our copyright; (B) violate our terms of service; (C) lie, cheat or steal; (D) do some other horrible thing. So you can’t tell people our technology doesn’t work, or we will sue (or already have).
o Of course we knew about this vulnerability. Why, we knew it all the time. In fact, we were planning on fixing it in the next release.
· Take Credit
o Issue a press release about the new “more secure” release!
This happens in government, industry and everywhere. And that’s a real problem for security technologies. In the Rapiscan case, the company denied that the technology was privacy invading, and repeatedly (and falsely) assured the public that the images couldn’t and wouldn’t be saved (all the while publishing, on its own website, images it said couldn’t be saved!).
Rapiscan next tried to keep the researchers from publishing their results, saying that doing so would empower terrorists, defeat the point of the scanners, and harm national security. Um, that’s because terrorists could NEVER in a million years figure out that using materials that mimic the human body might defeat a scanner.
Whew. I feel safe.
This reminds me of DHS hiring Hollywood screenwriters to come up with attack scenarios because they lacked the creativity to think of what a terrorist might do.
We need to stop. Right away.
No security technology is foolproof, and we should EMBRACE truth, knowledge and improvement. If a technology has flaws – fix them, or accept them and find a workaround. If a technology invades privacy – tell the public and ensure that there’s a dialogue. One example: the FBI is planning to spend more than a Billion (yup, with a “B”) dollars on some unspecified facial recognition technology.
Again, it’s expensive. It doesn’t work. It invades privacy. The trifecta.
It’s one thing to have security theater. It’s another thing altogether to have bad theater. Time for a rewrite.