If Businesses are Liable for Internet Posts, Consumers Lose
Even amid Washington gridlock, one thing almost everyone seems to agree on is that the way companies moderate online content makes people mad, and that the law shielding companies from liability over user-posted content must change. The question everyone should ask is: What would that change mean for consumers across the country?
Section 230 of the 1996 Communications Decency Act was passed to clarify who should be responsible for what people post online: the author of the post is legally responsible, not the platform that hosts it. Section 230 empowers platforms like Facebook and Twitter to make their own rules about what third-party content is allowed and lets them keep their legal protection even if they hide or remove posts. The law makes it safe for platforms to host pictures, comment sections, and customer reviews without being hauled into court every time someone objects. If you leave a bad review on VRBO or Yelp, for example, the platform doesn’t have to worry about being sued when a spurned business owner claims the review is false.
Section 230 has allowed consumers to benefit from greater connection with others online, more speech, and better information about products and services.
But those benefits are at risk as lawmakers in both political parties seek to repeal or curtail Section 230. Today, some on the left complain Section 230 makes it too easy for platforms to leave up “dangerous” misinformation, and they want platforms to remove more of that third-party content. Meanwhile, many on the right think that platforms discriminate against conservative content and want platforms to remove less user-generated content.
Where are consumers in this political tug-of-war? Repealing Section 230 would harm internet consumers by curtailing the availability of useful services like social media posts, comment sections, and customer reviews and by allowing more unwanted material online.
If the left succeeds in repealing or curtailing Section 230, platforms will remove more speech. Facing increased legal costs over third-party content, platforms will err on the side of caution. Why risk the expense of being dragged into court over a bad review someone posted, true or not, when it would be so much cheaper to just take it down? In a world without Section 230, comment sections and a large share of third-party social media posts would also likely disappear. Consumers will lose these valuable services.
Critics on the right have begun to realize that while Section 230 saves platforms from having to litigate every potential liability case, it is actually First Amendment protections against being forced to carry speech that allow platforms to remove content. Pivoting, they now call for regulating platforms like common carriers (akin to airlines, railroads, bus lines, and phone companies), which would unfairly diminish the First Amendment protections that all Americans, including businesses large and small, enjoy.
That approach also carries a practical downside: platforms rendered unable to remove any content would quickly be flooded with spam, hate speech, and violent and pornographic images. All of that content is legal, but none of it makes for an online space most Americans would like to visit, much less let their children explore. Consumers lose again.
While the frustrations on both sides of the aisle are sincere and worrying to many, a government fix is likely not the best solution. The unintended consequences of such intervention may be a cure worse than the disease. Without the same liability protection that today's big platforms enjoyed when they were just starting out, nascent upstarts won't have much chance of gaining a foothold. That lack of innovation and competition won't serve consumers.
Read the full article at The Orlando Sentinel.