Deepfakes and Beyond: Who Wins if Social Media Platforms are Regulated?


Wouldn’t it be great if everyone except you and people you agree with would just STFU? 

Attributed indirectly to Voltaire, the sentiment “I disapprove of what you say, but I will defend to the death your right to say it” has in significant respects yielded to an atmosphere of toxic interactions and intolerance in the online arena.

Intolerance and the silencing of others seem to be the desire of many when it comes to social media and purportedly harmful speech and misinformation. There are endless eruptions over bias and hate speech online, from allegedly slanted search results on Google to Twitter abuse.

Some important steps are being taken by private firms to deal with such concerns, up to and including Facebook’s newly announced policy to root out “deepfakes” on its platform. Deepfakes are perhaps the most extreme form of harmful content, depending on context (some versions may be “mere” satire); but even Facebook’s policy here is being criticized as not doing enough to cover everything under the sun that might be deemed misinformation.

Some activists and policymakers want the rules changed so that social media platforms, rather than the speaker, are held responsible for damaging content. That debate is being revisited at the 2020 Consumer Electronics Show (#CES2020), hosted annually by the Consumer Technology Association.

As tediously noted by virtually everyone who has ever defended the principle of free speech, expressing popular, domesticated or politically correct thoughts never needs protection.

It is, rather, the commitment to protecting dissident expression that distinguishes a (classically) liberal society enshrining individual rights from repressive ones.

At the same time, though, no one possesses a right to force others to tolerate and transmit one’s favored ideas.

Infidelity to those two mirror-image principles is a sin across today’s political spectrum, however.

The social media debate is particularly acute. Should we ban “harmful” content as some on the left want? Or should policymakers compel “political neutrality” as the right demands?

Better be careful; the winners in either campaign are going to be government regulators, and, curiously enough, the biggest of the big tech firms.

Conservatives want social media titans regulated to remain neutral, while some liberals tend to want them to eradicate harmful content and address other alleged societal and electoral ills. Related examples of what many wish to regulate or certify include bias, harmful content, bots, advertising practices, privacy standards, election “meddling” and more.  

Positions on the carriage of lawful but controversial content over communications media can be fluid and contradictory, as the recent “net neutrality” debate attests, but what goes around comes around when it comes to tech regulation. Google and Facebook were prominent among those holding that Internet service should be treated as a public utility, and now find that charge hurled (improperly) at them.

Perceiving adversarial and monopolistic shapes in the clouds, policymakers, pressure groups—and, ominously, even certain technology sector leaders whose enterprises benefited from and owe their heft to free expression—are pursuing impassioned online content and speech standards, along with other moves clamping down on peer Internet firms.

Even President Trump is reportedly preparing an executive action addressing alleged censorship and bias by (allegedly) “monopolistic” social media. This impulse goes beyond the social media front; in March 2019, for example, the president felt inclined to issue an executive order directing that colleges receiving federal research or education grants promote free inquiry.

Humorously, even Trump seemed perplexed by some of the online antics of those he sought to defend: “The crap you think of is unbelievable,” he told them, conferring damning praise on certain right-of-center social media outliers at the July 11, 2019 White House Social Media Summit.

As all-encompassing as today’s social media platforms seem, policymakers should recognize that they represent a snapshot in time. They are prominent parts of a nonetheless limitless Internet and media landscape that cannot be monopolized (unless government imposes the censorship itself).

Besides, either blocking or compelling speech in reaction to governmental pressure would not only violate the Constitution’s First Amendment; it would also require an immense expansion of already unconstitutional administrative agencies, a double offense.

A triple offense would emerge should government-affirmed social media and service provider deplatforming lead to real-world, offline deprivation of liberty or property (access to banking services is one concern people have) without due process, in violation of the Fifth Amendment.

The Internet represents a transformative leap not because of top-down rules and codes governing acceptable expression, but because it uniquely expanded broadcast freedom to mankind. More is published in a day than could be produced otherwise in months or years.

Much of that growth is attributed to the immunities from liability for user-generated content afforded to social media platforms (and, too often forgotten, other Internet-enabled services such as discussion boards, review and auction sites, and commentary sections) by Section 230 of the 1996 Communications Decency Act (47 U.S.C. § 230).

Takedown or retention of undesirable or controversial content by “interactive computer services” can be contentious, biased, mistaken, or stupid, or perhaps uniformly cheered, as the removal of deepfakes may be.

But Section 230 does not require neutrality in the treatment of user-generated content in exchange for immunity; in fact, the law explicitly protects non-neutrality, albeit exercised in “good faith.” The law states that providers will not be held liable for:

…any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.  

Translated, that means you can say it, but no one is obligated to help you do so. Kinda like those mirror-image principles noted above.

Jeff Kosseff of the U.S. Naval Academy, author of The Twenty-Six Words that Created the Internet (he’ll speak at #CES2020 on “The Future of Sec. 230”), maintains that without Section 230 “the two most likely options for a risk-averse platform would be either to refrain from proactive moderation entirely, and only take down content upon receiving a complaint, or not to allow any user-generated content.”

Similarly, Internet Association President Michael Beckerman (who will also speak at CES on “The Global Race for Leadership in AI,” a related issue with its own struggles over bias and other accusations, and its own accompanying misguided regulatory pursuits) argues that “Eliminating the ability of platforms to moderate content would mean a world full of 4chans and highly curated outlets—but nothing in between.”

None are required to be fair or neutral, nor can government require it. That’s healthy, though.

While Section 230’s mid-1990s “broad liability shield for online content distributors” was not inevitable, it represented less a detour from common law than an “acceleration” of a decades-long trend in which courts narrowed liability for publishers, republishers, and distributors more generally (as described by Brent Skorup and Jennifer Huddleston), and of a concept of “conduit immunity” imparted to intermediaries.

Narrow changes have been made to Section 230, such as with respect to sex trafficking. But what is in the air today is deeper, riskier change, for both competing and overlapping trans-partisan reasons. It is probable, or would at least be unsurprising, that some content removals happen in bad faith, or that companies violate their own terms of service. But addressing those on a case-by-case basis would be a more fruitful approach.

Sec. 230 notwithstanding, misrepresentation or deceptive business practices already open companies up to legal discipline. One conservative grumbled in the Washington Post that “YouTube … advertises itself as an open platform ‘committed to fostering a community where everyone’s voice can be heard.’ Facebook and Twitter make similar claims.” It is debatable whether they objectively make such claims, but some officials have urged the more targeted approach of “using laws against deceptive business practices to charge social-media platforms with making false statements about neutrality.”

Regime-changing regulation of dominant tech firms in the name of tamping down harmful speech or misinformation is unlikely to discipline them in the manner the public is being led to expect, but to make them stronger and more impervious to displacement. This is the case whether the issue is imposing online sales taxes, privacy mandates, or new speech codes.

Facebook CEO Mark Zuckerberg, for example, acknowledged to Congress in 2018 that privacy regulation would benefit Facebook, a sentiment echoed in Facebook policy head Richard Allan’s favorable view of a “regulatory framework” addressing disinformation and fake news. Such surrenders are illusory; from a given corporation’s perspective, they represent strategically more favorable alternatives to the threatened corporate breakups, mega-fines, or even personal liability for management on the global stage.

Given the proliferation of media competition across platforms and infrastructure, none are, nor can be, the essential facilities some accuse them of being (monopolization and breakup are also topics at #CES2020). Regulation, though, will backfire, turning them into monopoly essential facilities and rendering them even less subject to competitive restraint.

This is an important point in tech sector business history. While LinkedIn and Reddit leaders have cautioned against social media regulation, Twitter and Snapchat CEOs, Y Combinator president Sam Altman, and Instagram’s former CEO have supported regulating elements of big tech, social or otherwise. So has Apple head Tim Cook, vigorously so. None has been as prominent and unambiguous in support of top-down regulation as Facebook’s Zuckerberg, who in late March 2019 wrote explosively that “The Internet Needs New Rules.” That document called for governmental affirmation of the exclusion of certain kinds of “harmful” speech (while leaving some alone, such as advocacy of socialism; more on body count later), as well as detailed official reporting obligations for tech firms paralleling the financial reports companies must file with the Securities and Exchange Commission.

Such regulatory schemes bear overlooked significance for the Sec. 230 status quo, with which they are not compatible. They would displace it for big tech’s potential competition, and in so doing protect big tech incumbents.

Conservatives fixated on social media bias are reluctant to appreciate the immeasurable benefit they receive from Section 230. It was never a subsidy to anyone; it applied equally to all (publishers like newspapers get to have websites too). Even if claims of bias on the part of some platforms are deemed valid (in an elemental sense, bias should not be denied, and big tech needs to defend it), there is no precedent for the reach conservatives enjoy now. Those who complain of bias on YouTube, for example, pay nothing for hosting that can reach millions, and stand to profit instead. Some do get “deplatformed,” of course; but if improperly so, that may be a violation of terms by the host, resolvable in ways other than taking a sledgehammer to Sec. 230.

Most ominously, conservatives fail to appreciate the dangers of common cause with seekers of content or operational regulation on the left. Given the ideologically liberal near-monoculture at publicly funded universities, major newspapers and networks, the vast pre-existing administrative apparatus in the federal government—and in management and corporate culture for most of Silicon Valley, for that matter—it is the left that has more to lose from not regulating the Internet, the sole cultural medium not largely dominated by liberal perspectives.

Nothing is more fully open to independent or dissenting voices than the Internet; regulation will change that. For this reason alone, tomorrow’s conservatives will rely upon Section 230 immunities more than ever to preserve their voices.

Any deals that conservatives broker to expand federal agency oversight of Internet content standards in the name of “impartiality” stand ultimately to be repurposed to the aims of the left. While the right wants to change Section 230, the left does too, but to regulate speech in its own preferred ways. Sen. Mark Warner (D-Virginia), for example, with whom Republicans are ill-advisedly working on tech legislation involving Internet privacy, advertising, and other matters, has introduced (with Nebraska Republican Sen. Deb Fischer) the “Deceptive Experiences To Online Users Reduction (DETOUR) Act” to address “dark patterns” and the alleged tricking of users into handing over data.

Warner seeks even broader regulation of tech firms, including changes to Section 230; but he probably could not care less about hypothetical anti-conservative bias online. Warner, who proposes holding the “Wild West” tech sector accountable for “disinformation” and also flirts with limiting anonymous speech tools, will not agree with conservatives on what constitutes disinformation or a valid exercise of the civil right of anonymous communication.

At bottom, neither left nor right defends vigorously, as is their duty, the right to platform bias. Both sides must hit pause and protect the First Amendment and the right to express differing, controversial viewpoints, as well as affirm the property rights of the “interactive computer services” of today and tomorrow.

The opponents, only superficially in conflict, agree in principle that government should have final say over some aspects of platform content and speech. Entrenched Washington regulatory agencies, in the wake of any bipartisan tech regulation “victory,” will not see objectivity the same way conservatives or classical liberals do.

So? Ban or compel content? The administrative state and incumbents win either way. The brouhaha over social media power and bias boils down to one thing as far as protecting classical liberal or constitutional values is concerned.

The capital-I Internet—and any future communications platforms or splinternets of whatever incarnation—need protection both from the speech bans sought by the left and from the forced conservative ride-along speech sought by the right.

Here’s a thought. Embrace your bias; but let others embrace theirs.

Originally published at Forbes.