PRO-SPEECH Act Seeks to Reintroduce Problem Solved 25 Years Ago


For those generally skeptical of Congress’ ability to make good policy, the existence of Section 230 of the 1996 Communications Decency Act evokes a sense of wonder. The bipartisan and prescient law is known as “the twenty-six words that created the Internet.” But lest anyone’s cynicism about Congress begin to waver, there is now legislation in the Senate that would reintroduce the very problem Section 230 solved 25 years ago.

In 1996, Section 230 solved a problem created by a pair of court decisions on content moderation and liability, Cubby v. CompuServe and Stratton Oakmont v. Prodigy. The result of those rulings was called “the moderator’s dilemma.” If a company curated its platform to weed out unseemly third-party content, it faced greater litigation risk for the material it left up; a “hands-off” approach limited the platform’s liability but meant leaving up violence, pornography, and spam.

But now the Promoting Rights and Online Speech Protections to Ensure Every Consumer is Heard (PRO-SPEECH) Act, sponsored by Sen. Roger F. Wicker (R-MS), seeks to reintroduce the moderator’s dilemma by forcing platforms to choose either to be dumb husks—hosting pornography, violence and spam—or to assume the liability risks of a traditional publisher if they remove any third-party content from their own platforms.

Section 230 acts as a shield that spares platforms from excess legal liability even when they take down posts that don’t adhere to their terms of service. This “procedural fast lane” allows for online customer reviews, comments sections on websites, and the proliferation of social media in general. It empowers platforms to differentiate themselves by allowing—or not allowing—certain types of third-party content. Section 230 doesn’t mean a restaurant can’t sue the author of a false review, but it does mean the restaurant can’t sue the platform that hosts the review for leaving it up. That means that instead of being a free-for-all, online spaces can be transformed into virtual places safe for children, safe for critical reviews, and free from unwanted content.

Certainly online content moderation is more of an art than a science. Those on both sides of the political aisle have been understandably frustrated with specific decisions by tech platforms to take down or leave up content. But in terms of allowing for a flourishing ecosystem of third-party speech, Congress nailed it with Section 230.

It held off both the plaintiffs’ bar and the morality police, letting companies set their own content moderation policies and letting consumers vote with their clicks. Under 230’s shield, online speech bloomed, billions of dollars of value were created for investors, and a lot of eyeballs were spared a lot of unpleasant (but legal) content online.

For all the same reasons Section 230’s protections were a good idea 25 years ago, this bill is a bad idea today. Increasing platforms’ liability risks will lead them to take down more content, not less. And preventing them from taking down any content will result in platforms most people would not want to visit. Consumers will be left with either disinfected, bland platforms or platforms full of pornography, violence, and spam, with not much of use in the middle.