One of the most useful things to me in trying to secure an enterprise like Columbia University is information, and the more information, the better. This means that most of the time I am not in meetings, I sit and read.
Most of my input overload comes in the form of email, approximately 800 to 1,000 messages per day. I don’t claim to read all of them, and a good number are vendor solicitations. (As an aside, what is up with vendors calling before sending an email? I’m up to almost 10 calls a day asking if it would be OK to send me a white paper on how their product will solve my problems. I guess they don’t want me reporting their email as spam.) Another non-trivial chunk is online magazines; these I almost always read, not cover to cover, but for any articles that catch my eye.
Recently, I saw an article from Government Technology called “How Terrorists’ Use of Social Media Points to the Future.” I read it a few times to make sure I understood the subtle message the author was trying to convey.
Now, maybe I’m missing the point (I often do; just ask my wife), but I sure did get the impression that the author was hinting at the idea that there should be some kind of system in place to censor what is being said on social media when the end result is terrorism.
To be perfectly clear, I do not support terrorism in any way, shape, or form, and I do not believe that our media, social or otherwise, should be co-opted by terrorists for their own purposes. However, I do have a problem with censorship.
Social media is a very fast-moving stream of consciousness, so the only way to effectively monitor and restrict its content would be to develop an automated system that blocks content deemed “socially unacceptable” in the eyes of (and this is where I have a problem) the programmer.
I may be going off the deep end again, but I do know a little about programming. The best natural language recognition systems are far from perfect; once you start using them to shape the flow of information, I believe you will be going down a road that not many of us will want to travel.
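To make that concrete, here is a deliberately naive sketch of the kind of automated filter I am worried about. Everything in it is invented for illustration: the blocklist, the example posts, the matching rule. Real systems use statistical classifiers rather than word lists, but someone still chooses the training data and the thresholds, and that choice becomes the system’s definition of “unacceptable.”

    # A deliberately naive content filter. The blocklist below is invented
    # for illustration; in practice, someone has to choose these words,
    # and their judgment becomes the system's definition of "unacceptable."
    BLOCKLIST = {"attack", "bomb", "recruit"}

    def is_blocked(post: str) -> bool:
        """Flag a post if it contains any blocklisted word."""
        words = {w.strip(".,!?").lower() for w in post.split()}
        return not BLOCKLIST.isdisjoint(words)

    posts = [
        "Join us and help plan the attack.",              # what the filter hopes to catch
        "Reporters recruit sources near the bomb site.",  # legitimate journalism, blocked anyway
        "We will strike at dawn.",                        # hostile intent, sails right through
    ]

    for post in posts:
        print(f"{'BLOCKED' if is_blocked(post) else 'allowed'}: {post}")

Notice that the filter blocks the journalist and misses the actual threat. Swapping the word list for a machine learning model changes the mechanics, not the underlying problem: the designers’ judgment is baked in, and the mistakes happen silently, at scale.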
I personally like my information in as raw a form as possible. I get my security email unfiltered, which is another reason there is so much of it. One of my pet peeves is sending a report to a security or abuse address somewhere about a compromised account that is sending us spam, only to get it bounced back as undeliverable because it was classified as spam … duh.
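For what it’s worth, that bounce loop has a simple mechanical explanation. Here is a contrived sketch (the scoring scheme, word list, and threshold are all made up): a content-based filter cannot tell a quoted spam sample from the spam itself, so the abuse report inherits the score of the very message it is reporting.

    # A contrived sketch of the bounce loop described above. The scoring
    # scheme and threshold are invented; real filters are statistical,
    # but the failure mode is the same: the report quotes the spam,
    # so the report inherits the spam's score.
    SPAM_WORDS = {"viagra", "lottery", "winner", "click"}
    THRESHOLD = 2

    def spam_score(message: str) -> int:
        """Count how many known spam words appear in the message."""
        return sum(1 for w in message.lower().split() if w.strip(".,!:>") in SPAM_WORDS)

    spam_sample = "Click here, lottery winner! Click now!"
    abuse_report = (
        "One of your accounts is compromised and sending us this:\n"
        "> " + spam_sample
    )

    # The filter cannot tell a quoted sample from the real thing.
    print(spam_score(spam_sample) >= THRESHOLD)   # True: the spam is caught
    print(spam_score(abuse_report) >= THRESHOLD)  # True: so is the report about it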
I really don’t have any problem with a post being taken down, by a human, when it is violent or an advertisement to “join our cause and help us destroy society,” but having an automated filtering system do this on the fly is a step I would not want to take.
The problem with slippery slopes is that once you start sliding, it is very hard, if not impossible, to stop.