Glaring Problems with Latest Right-Wing Attack on Section 230
A recent opinion editorial in Newsweek is the latest salvo from the political right against Section 230 of the Communications Decency Act, which grants liability protection to online platforms for speech provided by users. Couched in criticisms of tech platforms’ responses to the COVID-19 crisis, the article succinctly lays out most of the main arguments emanating from the right against Section 230. That makes it an ideal target for rebuttal.
For those unfamiliar, here’s a quick primer on Section 230.
Section 230 is the law essentially responsible for the Internet as we know it today. It largely protects online services from liability over what third-party users post. For example, if a person posts a libelous Facebook status about you, Facebook is not legally responsible for the content, only the person who posted it. Without it, Internet services from online marketplaces to social media to review sites would either be unmoderated cesspools or simply wouldn’t exist as largely free and open to users.
Prior to Section 230, any attempt to moderate content meant platforms could be held liable for all user-generated content. Section 230 removes that impractical and excessive risk, allowing online services to moderate content as they see fit without assuming liability for the countless pieces of content their users upload.
Back to the Newsweek piece.
The first major problem with the article is this claim:
In April, Facebook began removing content promoting anti-lockdown events. From ABC News to Politico, it was reported that this was being done at the behest of state governments. By the evening, Facebook clarified that it would only be removing from its platform content pertaining to groups whose activities violated governments’ social distancing guidance.
In other words, Facebook is not removing protest content that is unlawful—but rather, content that goes against state government “advisories.” That is, guidance without the force of law.
The claim here conveniently leaves out half of what Facebook actually said. Here’s what a Facebook spokeswoman told The Wall Street Journal:
“Unless government prohibits the event during this time, we allow it to be organized on Facebook,” the spokeswoman said in a statement. “For this same reason, events that defy government’s guidance on social distancing aren’t allowed on Facebook.”
Protests that do not violate social distancing guidelines meant to protect public health are therefore unaffected by Facebook’s policy. In addition, Facebook’s preexisting community standards clearly state that harmful activity, even if lawful, will be moderated on the site:
In an effort to prevent and disrupt offline harm and copycat behavior, we prohibit people from facilitating, organizing, promoting, or admitting to certain criminal or harmful activities targeted at people, businesses, property or animals. [Emphasis added]
There are plenty of completely lawful activities that otherwise cause harm which Facebook has previously moderated. For example, Facebook’s guidelines state that content such as “Encouraging participation in a high risk viral challenge” will be taken down. One such challenge that went around the Internet was the infamous “Tide Pod” challenge. To my knowledge, there is nothing illegal about ingesting a capsule of laundry detergent, but it is quite obviously a dangerously stupid idea. Government officials even issued guidance warning against it.
Were there any serious objections to Facebook and other tech services moderating such content? Is it really the role of social media companies to pick and choose which public health advisories are relevant? If Facebook selectively enforced its rules and ignored public health guidance, the reaction would be predictable.
Government officials and commentators across the political spectrum would condemn the company for encouraging dangerous behavior and target Section 230 as a result. Tech companies already face serious threats to Section 230 from the left for controversial content they’ve chosen to allow. On the right, it is easy to imagine what the author of the Newsweek piece and her sympathizers would say if websites did not moderate content in a manner consistent with other public health guidance and government safety initiatives. If all companies suddenly allowed all lawful content on their websites, from pornography and violence to things like the Tide Pod challenge, there would be complaints that not enough was being done to protect children and provide safe spaces for families online.
For example, Section 230 is currently under threat from a bipartisan bill called the Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act, whose sponsors include Republican Senators Lindsey Graham (SC) and Josh Hawley (MO). Senator Hawley in particular is an outspoken critic of big tech and Section 230. The EARN IT Act makes Section 230 protections contingent on yet-to-be-determined regulations regarding online child sexual abuse material (CSAM). Yet, as Professor Eric Goldman of Santa Clara University School of Law, a Section 230 expert, explains:
Section 230 does not apply to federal criminal prosecutions, so Internet services have never had Section 230 immunity for federal CSAM crimes. … Internet services must report known CSAM items to [the National Center for Missing and Exploited Children] (18 U.S.C. § 2258A).
If Section 230 is being threatened, from both the right and the left, over mandated content moderation and reporting, it is not difficult to envision the response if companies gave up voluntary moderation of other harmful or sexual content. What’s more, if companies selectively moderated content deemed harmful by health, safety, and other public officials, they would without question be accused of failing to enforce their own policies uniformly.
In short, Facebook and other services face a no-win situation, and thus far they have chosen the path that presumably draws the least partisan pushback and the fewest accusations of hypocrisy. They shouldn’t be punished for mitigating risk and staying consistent with preexisting internal policies.
The next glaring problem with the Newsweek column is this passage:
But with the help of the courts, that targeted immunity privilege has been stretched to include, among other things, the enforcement of government-determined narratives against speech that would otherwise be constitutionally protected if it occurred outside these platforms and away from Sec. 230.
What the author insinuates to the average reader is that speech made on Facebook, YouTube, or any other online service would be protected but for courts’ various applications of Section 230. In fact, the author clearly knows better, as indicated by her own qualifier: “would otherwise be constitutionally protected if it occurred outside these platforms[.]”
The fact is that speech on these services is not constitutionally protected from the services’ moderation decisions.
In the same way you are not entitled to hold a political rally in your neighbor’s yard without her permission, the First Amendment does not entitle you to the property or resources of other private individuals or entities, including a website, to facilitate your speech. The First Amendment constrains only the government’s ability to restrict or compel speech; it is decidedly not an entitlement at the expense of others’ speech, association, and property rights.
A recent unanimous decision by the Ninth Circuit Court of Appeals, with Republican appointees comprising two of the three judges, affirmed this exact point with regard to YouTube’s content moderation practices. The court held the following:
Addressing the First Amendment claims, the panel held that despite YouTube’s ubiquity and its role as a public facing platform, it remains a private forum, not a public forum subject to judicial scrutiny under the First Amendment. The panel noted that just last year, the Supreme Court held that “merely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints.” Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1930 (2019). The panel held that the Internet does not alter this state action requirement of the First Amendment. The panel therefore rejected plaintiff’s assertion that YouTube is a state actor because it performs a public function.
The counterargument here would be that because YouTube and other services moderate based on government guidance, they are state actors. Yet, by this logic, merely agreeing with the government would mean forfeiting your First Amendment rights. Does any business that takes advice on best practices from a government source, such as workplace safety precautions, forfeit its rights? Certainly not.
On to the next problematic point:
“For a private company to simply delete the promotion of protests it deems unacceptable is a remarkable expansion of its power over what was once a sacrosanct and constitutionally protected freedom,” wrote one commentator recently. “Through these private companies … government officials can in effect restrict speech they are obligated to protect.”
The issue with this argument, beyond the fact that merely agreeing with the government does not constitute state action, is that it neglects the two sides of free speech. The First Amendment limits the government’s ability both to restrict speech and to compel it.
Again, private entities, from your neighbor’s yard to online services, are under no obligation to host the speech of others. Just as the government actually forcing private companies to moderate certain speech may constitute state action in violation of the First Amendment, forcing companies to host certain speech carries its own constitutional problems.
This brings us to the next problem with the article:
These are private corporations, so they can exercise their free speech rights while trampling on yours.
At least here the author acknowledges private corporations have free speech rights, but she incorrectly frames speech as a zero-sum game absent government intervention. Again, the First Amendment is a restriction on government action, not some pot of speech rights and resources that must be equally doled out as an entitlement. A private individual or entity enjoying constitutional protections does not inherently limit your own.
There is no finite amount of the government not doing something.
The rebuttal to this point would undoubtedly be some form of the next problem with the column:
These private corporations only exist because of special government protections—protections that, given the growing power of Big Tech, need greater oversight and perhaps revision.
The framing of Section 230 as a “special government protection” is beyond misleading.
The idea of limiting third-party liability is not a concept foreign to American law, particularly from a conservative perspective. As the U.S. Chamber of Commerce’s Institute for Legal Reform documents, there have been numerous state and federal tort reforms of varying size and scope, including limits on the ability to seek and recoup damages from third parties. Limiting the ability to file frivolous lawsuits is not a unique or anti-conservative concept.
Section 230 is not unique when it comes to the specific issues of speech it deals with either. Brent Skorup and Jennifer Huddleston of the Mercatus Center conclude the following in their thorough examination of decades of case law:
The Section 230 reform movement is growing, and many of the reform arguments complain that online intermediaries receive a special dispensation regarding publisher liability. The truth is more complicated. Starting in 1931 and for six subsequent decades, courts gradually chipped away the regime of strict liability for publishers and content distributors owing to the practical difficulties of screening all tortious content and to the potential for restricting First Amendment rights. Those courts found that mass media distributors warranted extensive liability protections, including an important protection for conduit liability.
Finally, Section 230 is mischaracterized as a special privilege that only a few large tech companies enjoy. Even if the kind of liability protection found in Section 230 were unique to Internet services, which, as noted above, it is not, the full scope of the Internet services and users protected by Section 230 must be appreciated. Per the Electronic Frontier Foundation:
Section 230, by its language, provides immunity to any “provider or user of an interactive computer service” when that “provider or user” republishes content created by someone or something else, protecting both decisions to moderate it and those to transmit it without moderation. “User,” in particular, has been interpreted broadly to apply “simply to anyone using an interactive computer service.” This includes anyone who maintains a website, posts to message boards or newsgroups, or anyone who forwards email. A user can be an individual, a nonprofit organization, a university, a small brick-and-mortar business, or, yes, a “tech company.”
Section 230 protects you as a user as much as it protects big tech services. Do we want to live in a world where multibillion-dollar tech companies swat away lawsuits while individual users and small businesses are crushed for accidentally sharing libelous or otherwise harmful content?
As the author acknowledges, the tech giants of today were started in dorms and garages. This was possible because Section 230 provided a predictable and reliable standard of protection against frivolous lawsuits.
It also helps startups preserve capital for building out the core functions of their services without immediately needing to invest the significant resources necessary to engage in any kind of content moderation at scale. These same opportunities should be afforded to the services that we will all come to rely on someday, which are right now being coded in a dorm or garage somewhere.
The Newsweek piece argues, “[T]he antidote to bad or misleading speech is actually more speech[.]” On this we agree, but Section 230’s protections for big and small services and users alike are what actually facilitate more speech. To eliminate or amend the law at this time would undoubtedly only cement the dominance of the incumbents the author finds problematic, effectively pulling up the ladder from competing services that offer different terms of service, including different approaches to moderation.
The author proceeds to double down on the mistaken claim that Section 230 is a special privilege and couples it with a patently absurd comparison:
Sec. 230—the sweetheart deal that allows tech to censor content without the same consequences as, say, the newspaper industry—is, at its root, a congressionally authorized tech industry subsidy.
The comparison to newspapers or any other form of traditional media strains credulity. At the core of Section 230 is the moderator’s dilemma. Online platforms deal with billions of uploads and other posts daily, and, as noted at the beginning, any attempt to moderate that content before Section 230 meant platforms could be held liable for all of it. Perfectly moderating such a volume of content is impossible, but pre-Section 230 law made the perfect the enemy of the good, penalizing companies for anything that slipped through the cracks.
Newspapers and other traditional media do not face this problem to any relevant extent, other than in their online comments sections, where they are protected by Section 230. You cannot simply publish whatever you want in a newspaper. You cannot simply demand and receive airtime on a radio or television show. Editors and producers control the process before the content reaches an audience. To compare newspapers and other media, where content is prescreened, to open-access platforms like Facebook and YouTube is thus ridiculous.
Further, if limiting people’s ability to post whatever they like on a website counts as “censorship,” then by definition newspapers and broadcasters have been engaging in censorship since the invention of the printing press. That is nonsense, of course, in part because censorship is an act of government and shouldn’t be conflated with the editorial discretion of private entities.
The fact that not everyone can simply publish whatever they want in a newspaper or appear whenever they want on a radio or television show undercuts another curious argument. Toward the end of her column, the author says of Section 230:
It is a privilege that, like any other industry subsidy, should be reviewed and assessed not merely for its practical relevance and effectiveness for an industry that continues to accumulate unprecedented power, but also for the future of bedrock values like free speech in America.
The idea that social media companies have accumulated “unprecedented” power flies in the face of not just the facts but a long record of mostly legitimate complaints by conservatives regarding bias in the traditional forms of media.
Prior to the dawn of the online services we know today, enabled largely by Section 230, the flow of news was tightly controlled by just a handful of national news and broadcast companies, with perhaps one or two stations and newspapers per locality scattered in between. Today, because of online services and Section 230, it is hard to concisely describe the enormous amount of information available at one’s fingertips.
Anyone with a smartphone is now a reporter and cameraman. People can access literally countless primary sources they trust without the filter of professional journalists and editors.
This has been a particular boon for conservative groups and advocates who can now directly reach millions of followers without begging an editorial board for permission. The author of the Newsweek piece, for example, has over 22,000 Twitter followers. In what pre-Section 230 world did that kind of power exist?
President Trump himself is on the record saying, “I doubt I would be here if it weren’t for social media, to be honest with you.” He’s right. Trump first ran for president in the 2000 election cycle, ending that campaign in February 2000. Four years later to the month, Facebook was founded.
The fact of the matter is that Section 230 has broken the information bottleneck, not tightened it—which brings us to the final egregious oversight of the author’s case against the law:
As these platforms become more and more brazen in their removal of speech, often at the behest of overzealous politicians, it is clear that Congress must revisit Sec. 230.
Does the author not realize who sits in Congress? Reopening Section 230 carries just as much risk of letting politicians directly constrain online speech as of expanding it for the average user.
Even in the latter case, proposals to curb Section 230 in order to expand the kinds of content online platforms are forced to host inherently limit overall speech, association, and property rights protections. Speech isn’t a zero-sum game until the government gets involved and decides whose speech is more important. This shouldn’t be Animal Farm, where all speech is free, but some speech is freer than others.
If conservatives are truly concerned about government leaning on online platforms and inducing censorship, they’d be wise to stop stretching the truth in order to summon the fox into the henhouse.