All Social Media Will Need to Moderate Content
Roughly a year after being booted off the most popular social media networks, former President Trump launched his own digital platform last weekend, Truth Social. The service touts itself as a politically neutral online space.
As outrage about the content moderation decisions of “Big Tech” grows on the right, the potential for the success of an alternative social media network grows with it. Having competing platforms with varied policies is a superior market solution to government regulation, so Truth Social’s debut is a welcome development. But to be successful, the upstart must avail itself of the same content moderation privileges that today’s market leaders, Facebook, Twitter, and YouTube, rely on.
Previous attempts to fill the need for a conservative-friendly digital platform have failed in part because they adopted a “content wild west” approach. While talk of not moderating at all may be good marketing for some frustrated users, it doesn’t make much sense once things are up and running. Just ask Gettr. So instead of denying the need for any content moderation, new alternative platforms should embrace content curation and implement rules with which conservatives are likely to agree.
After all, conservative users too will be displeased to find pornography, violence, or spam on platforms they use. They will want those items removed, hidden, or at least not amplified. To claim otherwise confuses private property rights with First Amendment violations. Established and nascent platforms have a First Amendment right to expunge content they do not wish to carry. Conservatives may sometimes object to the specific content that’s removed, but they should respect the property rights and free speech principles that protect the right to do so.
It seems Truth Social may agree. The company’s Terms of Service clearly state that a user can lose her ability to use the platform or have her content removed or deleted if she, among other things, posts content that is “false, inaccurate, or misleading.” Additional violations include content that is “spam” or is “obscene, lewd, lascivious, filthy, violent, harassing,” or “otherwise objectionable.” (Where have we heard that language before?)
It’s obvious that someone at Truth Social will have to decide what is “obscene” and what is art. Similarly, it will be a subjective judgment as to what qualifies as “misleading” and what is just an unpopular opinion. It also goes without saying that some people, including those who have their content removed or are kicked off the service, will disagree with those inherently subjective decisions.
Content moderation at scale will be hard for Truth Social, as it is for all platforms, but perhaps limiting its ambitions to political neutrality will prove more manageable than attempting limited moderation across all content. In any case, it’s an experiment worth trying. If it succeeds, it will show that market solutions preserving maximum economic liberty are possible in social media. If it fails, perhaps the next attempt will learn from its deficiencies and make improvements.
This experimental process isn’t perfect and won’t satisfy everyone or yield quick results, but it’s superior to government regulations that thwart innovation and curtail freedom.