On June 17, the Department of Justice released its proposed guidelines for making social media entities and carriers liable for the content that third parties post on their sites. The proposal would make these providers liable if they fail to censor content, and liable if they do.
The proposal would make social media sites like Facebook, Twitter, TikTok, Instagram, YouTube, and others liable if they “facilitate or solicit content that violates federal criminal law or are willfully blind to criminal content on their own services.” The term “facilitate” could simply mean that they allow others to post that content. The term “willfully blind” could mean that they simply don’t monitor what people post.
So the DOJ proposal would make social media liable for third-party content if that content “violates federal criminal law.” That’s a lot of ground to cover. It covers content which is threatening or harassing, which is intimidating, which solicits any crime, which furthers a mail or wire fraud, which contains a material false representation within the scope of any agency or department, which can be used to file a false claim against a government agency, which constitutes an unregistered securities offering, which constitutes a violation of the deemed export rules for controlled commodities, which improperly discloses credit reporting data, which violates copyright law, which infringes on a trademark, and which constitutes an overt act in furtherance of any conspiracy to violate any law of the United States. So TikTok, in the name of free speech and open media, would have to read every posting by every person and make a determination whether the posting violates any federal criminal law. State law, of course, be damned, right?
Helpfully, the DOJ proposal recommends “a case-specific carve out where a platform has actual knowledge that content violated federal criminal law and does not act on it within a reasonable time, or where a platform was provided with a court judgment that the content is unlawful, and does not take appropriate action.” Actual knowledge? What does THAT mean? If I write to complain to Facebook about someone else’s posting, and cite some federal criminal statute (you know, all those people who cite the Berne Convention and the fact that it requires Facebook to take down their postings), does that constitute “actual knowledge”? What if the FBI tells Facebook that the New York Times article about the John Bolton book violated the Espionage Act? Is that “actual” knowledge, sufficient to mandate that Facebook take down the link someone else posted to the New York Times? What if Facebook or Twitter doesn’t believe that the thing posted violates federal law? The proposal also forces social media sites to determine the intent of the poster — did the poster act with intent to harm? Did they have knowledge? Nice way to both deputize social media as part of law enforcement and to make them the thought police. If the FBI wants something on Facebook taken down, it can go to the Court and get an injunction. But the Court won’t grant it in most cases.
Which brings up the next point in the DOJ “guidance.” DOJ wants to impose criminal liability on these providers “where a platform was provided with a court judgment that the content is unlawful, and does not take appropriate action.” Note, it’s not that DOJ wants to permit the Court that issued the judgment to issue a show cause order or a finding of contempt. No, that would be too easy. The DOJ proposal would make the platform criminally liable for the underlying crime about which the poster is communicating. So if the underlying poster is posting information in furtherance of a billion-dollar drug deal, and YouTube doesn’t take it down, YouTube would be liable (as a principal? for aiding and abetting?) for being a drug dealer. Moreover, the DOJ proposal does not require that the platform have been a party to the case in which the order was obtained, or that it have had notice or an opportunity to be heard, or even that the Court that issued the order have jurisdiction over the platform. Nope. Anyone (a litigant, a divorce lawyer, the FBI, some aggrieved politician) can get an order from any Court to have a platform remove any content that the Court considers “unlawful” (and here it could be federal or state law, or some municipal ordinance), and the platform has to remove the content or face criminal prosecution. Of course, Courts right now can issue orders to remove content, and can enforce them with contempt sanctions — if certain criteria are met. This proposal would remove those criteria and enhance the sanctions.
The DOJ proposal would next require platforms to make their decisions to remove or not remove content, and to block or not block participants, based on a “good faith” adherence to some published content moderation guidelines. If the platform blocks content for reasons other than those published, it would face civil and criminal liability. If the platform fails to block content or persons that violate the published guidelines, it would also face civil and criminal liability. Thus, if a platform did not state that it would remove content that constitutes a threat to people’s life and safety, and then removed such content, it would be liable for removing the content in a way that was not consistent with its guidelines. If, on the other hand, it agreed to remove materials which are racist, anti-Semitic, or similarly offensive, and it failed to remove a posting (or moderate a forum, or deplatform a person), then the platform would be deemed to be a “publisher” of the racist, anti-Semitic, or otherwise offending content, and would be liable for the civil and criminal consequences. You know, damned if you do, and damned if you don’t. Oh, and even if the platform acts in good faith in its removal/non-removal decision, the government can still go after the platform directly as a publisher, since the DOJ guidance would remove the immunity afforded by Section 230 of the CDA in any “civil enforcement actions brought by the federal government.” Thus, for the purposes of civil enforcement actions brought by any agency or department of the federal government, the platform would be considered to be the publisher of the contents posted by any third party or subscriber.
Isn’t that special?
Finally, it is noteworthy that the announcement comes from the Antitrust Division of the Department of Justice — and constitutes an implicit threat by the DOJ to investigate platforms like Facebook, Google, Twitter, and others if they don’t hew to the government line — which is currently that these platforms are in some way “unfair” to conservative, right-wing, Q-Anon, neo-Nazi and similar voices, in violation of the Clayton and Sherman Acts. The DOJ press release notes that “the avenues for engaging in both online commerce and speech have concentrated in the hands of a few key players. It makes little sense to enable large online platforms (particularly dominant ones) to invoke Section 230 immunity in antitrust cases, where liability is based on harm to competition, not on third-party speech.” This is actually intended to get at the idea that big platforms (think Facebook and Twitter) engage in “anti-competitive” actions when they “silence” not the voices of their competitors, but those of conservatives, and that they should be liable for engaging in the kind of blocking that the government will now actually require them to do.
All told, the DOJ release is a mess. The good news is, for now, it’s a mess without any force of law.
Mark Rasch is an attorney and author of articles on computer security, Internet law, and electronic privacy. He created the Computer Crime Unit at the United States Department of Justice, where he led efforts aimed at investigating and prosecuting cyber, high-technology, and white-collar crime.