Regulating Social Media Content Moderation Will Backfire And Make Big Tech More Powerful
As repeatedly noted by defenders of free speech, expressing popular opinions never needs protection. Rather, it is the commitment to protecting dissident expression that is the mark of an open society.
On the other hand, no one has the right to force people to agree with one's ideas, much less to transmit them. However, the flouting of both these principles is now commonplace across the political spectrum.
Concern over social media content and expression has escalated among politicians, pundits and activists of both left and right. In March 2019, for example, President Trump issued an executive order directing that colleges receiving federal research or education grants promote free inquiry. And in May 2020 he issued an Executive Order on Preventing Online Censorship addressing alleged bias by allegedly monopolistic social media companies. Conversely, the left is pushing boycotts against social media, Facebook in particular, for condoning what the left itself, whatever others may think, deems harmful speech.
In this political environment, policy makers, pressure groups, and even some technology sector leaders, whose enterprises have benefited greatly from free expression, are noodling over the idea of online content and speech standards, along with other policies that, if made compulsory, would seriously burden their emerging competitors.
The current social media debate centers on competing interventionist agendas. Conservatives want social media titans regulated to remain "neutral," while liberals tend to want them to eradicate harmful content and address other alleged societal ills. Looming in the background, some maintain that big tech Internet services should be regulated as public utilities.
Blocking or compelling speech in reaction to governmental pressure would not only violate the Constitution's First Amendment; it would also require an immense expansion of constitutionally dubious administrative agencies. Those agencies would either enforce government-approved deplatforming by social media and service providers (the denial to certain speakers of the means to communicate their ideas to the public) or, by policing that very practice, coerce platforms into carrying messages against their will.
When it comes to protecting free speech, the brouhaha over social media power and bias boils down to one thing: The Internet—and any future communications platforms—needs protection from both the bans on speech sought by the left and the forced conservative ride-along speech sought by the right.
In the social media online speech debate, the problem is not that big tech’s power is unchecked and its content moderation inherently biased. Rather, the problem is that social media regulation—by either the left or right—would make it that way.
Like the banks, social media giants are not too big to fail, but close regulation would make them so.
American values strongly favor a marketplace of ideas where debate and civil controversy can thrive. Therefore, the creation of new regulatory oversight bodies and filing requirements to exile politically disfavored opinions on the one hand, and efforts to force the inclusion of conservative content on the other, should both be rejected.
Much of the Internet’s spectacular growth can be attributed to the immunity from liability for user-generated content afforded to social media platforms—and other Internet-enabled services such as discussion boards, review and auction sites, and commentary sections—by Section 230 of the 1996 Communications Decency Act.
Host takedown or retention of undesirable or controversial content by “interactive computer services,” in the Act’s words, can be contentious, biased, or mistaken. But Section 230 does not require neutrality in the treatment of user-generated content in exchange for immunity. In fact, Sec. 230 explicitly protects non-neutrality, albeit exercised in “good faith.”
Section 230’s broad liability protection represented an acceleration of a decades-long trend in courts narrowing liability for publishers, republishers, and distributors, as Jennifer Huddleston and Brent Skorup have pointed out.
Section 230 has been amended before, most notably with respect to sex trafficking, but deeper, riskier change is in the air today, advocated by both Republicans and Democrats.
Some content removals may indeed happen in bad faith, and companies may violate their own terms of service, but addressing such cases individually would be a more fruitful approach than wholesale regulation.
Section 230 notwithstanding, laws addressing misrepresentation or deceptive business practices already impose legal discipline on companies. Regime-changing regulation of dominant tech firms—whether via imposing online sales taxes, privacy mandates, or speech codes—is likely not to discipline them, but to make them stronger and more impervious to displacement by emerging competitors.
The vast energy expended on accusing purveyors of information, either on mainstream or social media, of bias or of inadequate removal of harmful content should be redirected toward the development of tools that empower users to better customize the content they choose to access.
Existing social media firms want rules they can live with—which can too easily translate into rules that future social networks cannot live with. Government cannot create new competitors, but it can easily prevent their emergence by imposing barriers to market entry.
Also at risk in the regulatory fervor is the right to political, as opposed to commercial, anonymity online. Government has a duty to protect dissent, not regulate it; yet among the casualties of too-tight regulation would be the dissident platforms of the future.
The Section 230 special immunity must remain intact for others beyond today’s slate of big tech players, lest Congress turn social media’s economic power into genuine coercive political power.
Competing biases are preferable to pretended objectivity. They are in fact necessary. Given that reality, Congress should acknowledge the inevitable presence of bias, protect competition in speech, and defend the conditions that would allow future platforms and protocols to emerge in service of the public.
The priority is not that Facebook or Google or any other platform should remain politically neutral, but that citizens remain free to choose alternatives that might emerge and grow with the same Section 230 exemptions from which the modern online giants have long benefited.
We are at a stage where too many are prepared to regulate online speech in ways they believe will suit them, a symptom of the typical "do something" disease that afflicts Washington. Policy makers must avoid creating an environment in which Internet giants benefit from protective regulation that prevents the emergence of new competitors in the decentralized infrastructure of the marketplace of ideas.
(This column is in part based on “The Case against Social Media Content Regulation: Reaffirming Congress’ Duty to Protect Online Bias, “Harmful Content,” and Dissident Speech from the Administrative State,” available on SSRN.)