Both conservatives and liberals are convinced that “mainstream” social media “censors” their views and opinions. Liberals point out that so-called conspiracy theories like those peddled by QAnon, white supremacist, and white nationalist organizations get traction on social media and are amplified by its algorithms, and that as a result more conservative viewpoints are expressed online than progressive ones. Conservatives point to actions by Facebook and others to restrict the dissemination of what Facebook deems to be “false” information, actions they believe serve to disenfranchise conservative views. While Mark Zuckerberg initially indicated that he would not restrict any viewpoints on Facebook, saying he wanted it to be a forum for all ideas rather than a publisher of them, over time the social media site has taken on more “publisher”-like features.

As a result, in the wake of the 2020 Presidential election, a large number of mostly conservative social media users indicated that they were moving to a less restrictive and more open (and more openly conservative) social media site: Parler. That site, unlike “mainstream” sites such as Facebook and Twitter, does not censor viewpoints.

Or does it?

If you look at Parler’s Guidelines, they are all about content restriction. The first principle states:

Parler will not knowingly allow itself to be used as a tool for crime, civil torts, or other unlawful acts. We will remove reported member content that a reasonable and objective observer would believe constitutes or evidences such activity. We may also remove the accounts of members who use our platform in this way.

Just so we are clear, the policy as stated notes that Parler will remove not only things that facilitate crimes or torts, but also “evidence” of such activity. So people can’t advertise unlawful gun sales or the sale or sharing of drugs in violation of federal or state law, encourage people not to pay lawful taxes, promote unlicensed sports betting, solicit participants in unlawful assemblies, or do any of hundreds of other things, because these are “unlawful” and their posts and shares will be taken down. They also cannot post information that would intrude into the seclusion of others, put others in a “false light,” defame or libel them, cause them severe emotional distress, or otherwise constitute a civil tort.

The policy is not unusual, and reflects Parler’s desire both not to facilitate crimes or torts and not to be potentially liable for such facilitation. It’s a reasonable policy depending on how it is applied. But it’s not the absolutist free speech position that advocates for the alternative social media site claim it to be.

Parler also indicates that it will restrict “content posted by or on behalf of terrorist organizations,” giving itself the unilateral ability to decide whether the Proud Boys, the Knights of the Ku Klux Klan, or the Atomwaffen Division are “terrorist” organizations. If Parler determines that you are a member of such an organization, or that anything you post is on behalf of one, well, no soup for you! The same is true for prohibited content like child pornography (now generally called CSAM, for child sexual abuse material). And those memes and photos you post and repost? Sorry, dude. Copyright violation. You’re out. Parler also restricts materials that are “not safe for work” behind an age validation and verification system.

There’s nothing strange or unusual about any of these restrictions. They are not “censorship” in the governmental sense, because Parler, like Facebook, Instagram, and Twitter, is not a government agency and has no legal duty to “carry” anyone’s message. Blocking spam and bots by ISPs, providers, and social media is not “censorship.” It’s responsible. The World Wide Web is not, and should not be, a total free-for-all. It’s a community. With rules, and with people who violate those rules.

The Role of Providers

In The Wolf of Wall Street, the protagonists work for a company called Stratton Oakmont, a brokerage firm that was also a massive fraud scheme. Back in the mid-1990s, shortly after the Internet was commercialized, services like America Online, Prodigy, and CompuServe provided dual functionality: they hosted and moderated user-created content through various forums, message boards, and the like, and they acted as a dial-up gateway to the web itself. Prodigy hosted a finance message board (“Money Talk”) on which people posted information about Stratton Oakmont, and that information was not particularly favorable to the investment company. Stratton Oakmont sued the forum’s host, Prodigy, for defamation, asserting that Prodigy, like the New York Times or the Wall Street Journal, “published” the defamatory materials and was liable for the libel. Prodigy had the ability to read and cull the content (just like letters to the editor) and to moderate the forums, and in fact did moderate content, using what it called a message “board leader.” As a publisher, the argument went, Prodigy should be responsible for the content it publishes, irrespective of who wrote it. The New York State Supreme Court agreed, and found that Prodigy was, in fact, a publisher and could be held liable to Stratton Oakmont for the allegedly defamatory postings made by users of the forum.

So, apply that ruling to Facebook or Parler. Every time someone doxxes someone else online, or posts mean messages, insults, or lies, Parler would be liable for the tortious conduct of its users. It would either have to read every message and determine its truth and character beforehand, or respond to demands to take down content. It would become a publisher of its members’ content in the true sense, and would take on not only the editorial function but liability for breaching that function.

In response to the Stratton Oakmont case, Congress passed Section 230 of the Communications Decency Act (CDA), which generally gives Internet service providers and online content providers immunity from suit (not just immunity from a judgment) for claims asserting that they are liable as the publisher of offending content. Without going into a deep dive into the scope of Section 230, it is this provision that has permitted the growth of social media and user-provided content, for good and ill. The benefit of Section 230 is that it permits and encourages forums like Parler and Facebook. The problem is that it disincentivizes content moderation, or indeed moderation in any sense of that word. Whether and how to moderate content is left to other laws (e.g., the child pornography laws) or, more frequently, to the marketplace itself. It also means that, if someone else posts content that is offensive, improper, injurious, or harmful (and sometimes deadly), it is exceedingly difficult to have that content removed or to hold anyone responsible for it. It also permits and encourages the coarsening of political discourse (on all sides), and irresponsible but protected speech.

There also may be a distinction between a forum’s liability as a “publisher” of a third party’s content and its liability as a distributor of that content, or between a forum’s liability for truly third-party content and for content that it creates itself. If a provider were liable whenever it exercised an editorial function (e.g., blocking some content but allowing other content), then it would be encouraged to block nothing except that which it is legally mandated to block. In for a penny, in for a pound. That’s not the approach taken by Facebook. Or Parler.

Since Section 230’s inception, there have been efforts to soften, weaken, or modify it, or to exempt content from its almost blanket immunity. On October 13, Justice Clarence Thomas issued a non-binding opinion questioning whether companies that provide a forum for content should be legally entitled to Section 230 immunity, noting that “a company can solicit thousands of potentially defamatory statements, “selec[t] and edi[t] . . . for publication” several of those statements, add commentary, and then feature the final product prominently over other submissions—all while enjoying immunity.” Justice Thomas went on to note that “by construing §230(c)(1) to protect any decision to edit or remove content, courts have curtailed the limits Congress placed on decisions to remove content. … With no limits on an Internet company’s discretion to take down material, §230 now apparently protects companies who racially discriminate in removing content.” Justice Thomas criticized decisions that, for example, immunized the content forum Backpage for the escort ads posted on its site, or that immunized Facebook against allegations that the Palestinian organization Hamas used the platform to post content encouraging terrorist attacks in Israel. Justice Thomas suggests, like many conservative commentators and President Trump, that Section 230 immunity be pared back, permitting forums like Facebook and other “Big Tech” companies to be sued for the content posted by others, and for their own actions in filtering (or not filtering), promoting, or excluding content. When an algorithm causes a user to see content based on their “interests,” and that content radicalizes them in one direction or another, does the creator of that algorithm bear responsibility for the consequences of that radicalization? Does it matter if that radicalization leads to a school shooting, a terrorist attack, or someone bringing a gun to a pizzeria? Should social media be liable for NOT detecting and reporting content relating to mentally disturbed individuals who threaten to kill or harm others? Should they take such content down? Should they have liability if they do? Should they have liability if they don’t?

Again, everyone is convinced that Big Tech is prejudiced against them and filters THEIR content while permitting that of their adversaries. There are some tweaks to Section 230 I would like to see, such as bringing the law on posting malicious and harmful content more in line with the law on posting infringing content. But for now, if everyone is unhappy, maybe the problem is not the forum but the people. And that’s kind of what free speech is all about.


Mark Rasch is an attorney and author of articles on computer security, Internet law, and electronic privacy. He created the Computer Crime Unit at the United States Department of Justice, where he led efforts to investigate and prosecute cyber, high-technology, and white-collar crime.