Free to Prosper: Online Speech and Section 230
Section 230 of the Communications Decency Act of 1996 governs liability for speech online. It clarifies that the creator of speech, not its carrier, bears legal responsibility for it. The statute was intended to allow websites to set their own content moderation and curation rules. The statute’s coauthor, former Rep. Chris Cox (R-Calif.), has described Section 230’s approach as “allowing a thousand flowers to bloom” because it removed the incentive for information service providers—anyone who hosts third-party speech digitally—to take a hands-off approach to curating their forums.
This online liability regime has allowed the Internet to develop as a thriving marketplace of ideas, but it now faces criticism from both sides of the political aisle. Claims from the right of anti-conservative political bias by tech companies have led to calls to declare social media platforms and other online services “common carriers” in order to curtail their ability to remove content. From the left, claims of “harmful misinformation” have led to calls to curtail or repeal Section 230. Any of those policies would have harmful, unintended consequences and would likely short-circuit market responses to content moderation problems.
Allow for Strong Encryption
The next generation of social media will likely include decentralized platforms. That is a fundamentally different structure from that of today’s dominant social media platforms. It may hold the key to new entrants displacing current market leaders and moving content moderation decisions away from the centralized control of platform owners and into the hands of users.
Blockchain-based decentralized social media provides end-to-end encryption, so regulatory action that discourages encryption technology could be detrimental to the emergence of the next wave of platforms.
An example of this risk is the Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act (H.R. 6544, S. 3538, 117th Congress). Its noble goal is to fight online child sexual exploitation, but it would inadvertently make it dangerous for platforms to offer encryption to users by making encryption a trigger for liability and prosecution. By creating legal disincentives to offering strong end-to-end encryption, lawmakers may short-circuit the evolution and development of market solutions to current content moderation problems. The more consumers seek new content moderation options, the greater the incentive and reward for entrepreneurs to provide alternatives.
Keep Section 230 in Place
There are several bills aimed at repealing or limiting Section 230, sponsored by members of Congress from both parties. Their goals fall into four broad categories:
- Repeal Section 230 outright
- Limit the scope of Section 230’s liability shield
- Impose new obligations on hosts in order to earn Section 230’s liability protection
- Alter the “Good Samaritan” portion of the statute in order to address political bias complaints
The approaches and details of these bills differ, but they all carry potential perils. Asking politicians to define what does and does not constitute unacceptable speech invites abuse and politicizes a fundamental right.
Repealing Section 230 would cripple the existing legal ecosystem that protects carriers of third-party online speech from liability for that speech. Today’s market leaders may be able to absorb the legal costs of defending against liability claims over some user-generated content, but smaller companies that cannot afford that burden will simply stop hosting third-party speech. That would lock in the dominance of today’s large incumbents and reduce the total amount of user speech online.
The only time Congress limited the scope of Section 230’s liability shield, the result was unintended and harmful consequences. The House bill, the Fight Online Sex Trafficking Act (FOSTA), and its Senate companion, the Stop Enabling Sex Traffickers Act (SESTA), were well-intentioned attempts to curb sex-trafficking activity online. But in practice, removing the liability shield from hosts when posters were found guilty of posting ads for prostitution led to the wholesale removal of content that did not involve prostitution but posed too great a legal risk if it was not policed perfectly. The carve-out swept in too much content and resulted in less total speech being hosted online. In addition, advocates for sex workers claim that the law made conditions less safe because transactions that once could be arranged online now take place in person, introducing physical risk earlier in the process. Good intentions aside, curtailing Section 230 has likely done more harm than good both online and offline.
Bills that create tests or government oversight that hosts must satisfy in order to earn Section 230’s protections are also problematic. The idea of turning unelected bureaucrats at the Federal Trade Commission into speech police should worry Americans of every political inclination.
The dangers are similar with proposals to alter the “Good Samaritan” portion of Section 230, which provides protection for moderating content in addition to that afforded by the First Amendment.
All of the above bills carry the inherent costs and risks of changing the rules midstream. All of the proposed changes would disrupt innovation and lead to harmful unintended consequences. Furthermore, even the most carefully crafted surgical strike at Section 230 is unlikely to emerge cleanly from the legislative process. It would be better to leave the status quo in place, however imperfect, and let the market innovate to address its shortcomings.
Do Not Regulate Social Media Platforms or Other Online Services as Common Carriers
Some on the political right worry about conservative content being removed and have suggested regulating large social media platforms as common carriers—similar to how public utilities are regulated—in order to diminish their First Amendment right not to carry speech they do not wish to host. That is a misguided idea that tramples private property rights and could result in a great deal of lawful, but nonetheless awful, content remaining online.
The practical consequences of thwarting social media platforms’ ability to remove unwanted content are not ones that many conservatives will like. Large platforms constantly remove enormous amounts of spam, pornography, and racist, anti-Semitic, and violent third-party content from their sites. Creating difficulties or disincentives for them to do so would degrade the user experience online, especially for children and other vulnerable populations.
Social media platforms are not like the common carriers of the past. They do not hold themselves out to serve everyone, and they have explicit terms of service that disqualify some from using their services. Content moderation decisions constitute a form of speech in that they create an online environment that distinguishes one platform from another. That raises the bar for overcoming First Amendment protections compared with regulating trains or delivery services.
Read the full chapter on online speech and Section 230 here.