Why Critics Should Leave Section 230’s Liability Protections Alone


Attacks on Section 230, the liability-limiting shield for online platforms, are coming from President Trump and from Republicans and Democrats on Capitol Hill. But the debate around curtailing or repealing the law is full of misunderstandings about how it functions and the consequences of changing it. To understand Section 230 better, it helps to think of its liability protections, which encourage speech online, as analogous to the liability protections of incorporation, which encourage entrepreneurship.

The liability-limiting benefits of incorporation are widely understood and accepted. When entrepreneurs incorporate their business, they protect their personal property from potential litigation and risk. This doesn’t mean the corporation faces no liability, but it does cut off a great deal of excessive litigation. That limiting of liability tips the scales in favor of continued entrepreneurship and increased commerce.

Section 230 is similar in that it encourages maximum speech online by ensuring that platforms won’t be held liable for what others post on their sites. It acts as a filter that keeps platforms from being hauled into court every time someone objects to a post by a third party.

For example, let’s say ‘Every1saCritic’ posts a negative review on Yelp complaining that the food at his local pizza parlor was served cold. But the owner of the pizza place begs to differ; he contacts Yelp and threatens to sue the platform if it doesn’t take down the review. Obviously, Yelp does not want to spend the time or money litigating the validity of the coldness claim, or of any claim like it among its 214 million online reviews as of Q2 2020.

It’s Section 230’s liability protections that keep the floodgates closed on such cases, which could cripple bigger platforms and prove lethal for nascent up-and-comers.

It accomplishes this by placing the liability on the speaker of the post, not the host. Section 230 reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” That liability arrangement holds even if the platform curates third-party content by removing or flagging it. In fact, the law was created to incentivize online platforms to be proactive in setting their own standards for content and enforcing those standards without fear that moderating would trigger the liability obligations traditionally applied to publishers or distributors.

Section 230 was passed as part of the 1996 Communications Decency Act in response to a New York state court ruling from a year earlier, Stratton Oakmont v. Prodigy. That decision held that a service provider could be held liable for false information its users posted when the provider utilized content-moderation tools. (As an aside, the victorious plaintiffs in the case, the principals of the Stratton Oakmont brokerage firm who sued the Prodigy platform over a third-party post accusing them of fraud, would indeed be convicted of securities fraud and later achieve infamy in the film “The Wolf of Wall Street.”) Section 230 was passed so that information about issues of vital public interest, such as securities fraud, could flow freely without platforms fearing they would be held liable for the content of third-party posts. In other words, user posts can safely stay up even if some consider the content untrue or offensive. This leads to more speech online, not less, contrary to what some Section 230 critics on the right wrongly assert.

Read the full article at Real Clear Policy.