How Regulation of “Harmful Speech” Online Will Do the Real Harm


Much of the debate over online speech concerns whether or not conservatives are “censored” by big tech. But there are bigger fish to fry.

Even if big tech is not discriminating against conservative or other viewpoints in a statistically significant way today (as I discussed here), certain regulatory policy pursuits now underway could accomplish not only that but much more, inadvertently or not so inadvertently.

It is well known that some politicians, dominant social media firms and activists wish, one way or another, to expunge what they see as objectionable content online. The categories are many: they have pointed to hate speech, disinformation, misinformation, and harmful or dehumanizing content they want to root out.

Since “misinformation” or the otherwise objectionable can translate into “things we disagree with or do not want discussed,” this inventory can be expected to grow once one side or the other sets the terms. 

It is important not to get complacent about this multifaceted campaign and the damage it can do. We may be in a holiday-and-impeachment lull, but this is an issue that will come roaring back with the right trigger.

In the U.S., disagreeable or even hateful speech is constitutionally protected. As one recent statement of principles (cosigned also by my own organization, the Competitive Enterprise Institute) on user-generated content expressed it, “The government shouldn’t require—or coerce—intermediaries to remove constitutionally protected speech that the government cannot prohibit directly. Such demands violate the First Amendment.” 

In my view, the point of the spear of this species of potential social media regulation is actually two-pronged. The first component is a March 2019 declaration from Facebook founder and CEO Mark Zuckerberg. The second is a white paper from Sen. Mark Warner (D-Virginia) titled “Potential Policy Proposals for Regulation of Social Media and Technology Firms.”

In “The Internet Needs New Rules,” Zuckerberg asserted that “I believe we need a more active role for governments and regulators,” and endorsed alliances with governments to police harmful online speech. It was not mentioned, of course, but such moves would erect considerable barriers to future social-media alternatives to Facebook.

Zuckerberg said: 

“Lawmakers often tell me we have too much power over speech, and frankly I agree. I’ve come to believe that we shouldn’t make so many important decisions about speech on our own. So we’re creating an independent body so people can appeal our decisions. We’re also working with governments, including French officials, on ensuring the effectiveness of content review systems. …One idea is for third-party bodies to set standards governing the distribution of harmful content and measure companies against those standards. Regulation could set baselines for what’s prohibited and require companies to build systems for keeping harmful content to a bare minimum.”

So far only Facebook has publicly called for this degree of engagement from governments on speech issues, although other tech firms’ calls for massive intervention — with respect to antitrust, privacy, a universal basic income (the plural of apocalypse as far as the future of limited government is concerned) and artificial intelligence — abound. Their coming around to speech guidelines can be anticipated, too, especially given some recent statements. (Unwelcome reality-check alert: “Democracy” is not at stake if regulators refrain from imposing themselves on big tech; quite the opposite.)

While Google, Facebook, Twitter and tomorrow’s offerings are private media and social media platforms incapable of “censorship” (only government can do that), in the Zuckerberg hybrid formulation the “deplatforming” would be preordained, baked into the ecosystem. Most ominously it would constrain emergent networks to the vision of “baselines for what’s prohibited.”

Would a reasonable person expect, once this path is chosen, that the “harmful” content list would shrink over time, or expand? With regard to overreach, how would one even measure muted content or speech, since the undesirable couldn’t be kicked off a network they weren’t allowed on in the first place, nor kicked off an otherwise-viable network that never emerged because of preemptive baselines?

It isn’t the chaos and discombobulation that naturally accompany free speech that “threatens democracy,” as some people smugly assert, but the elitist regulation of that storm. As Federal Communications Commission Commissioner Brendan Carr put it, “I think a lot of regimes around the world would welcome the call to shut down political opposition on the grounds that it’s harmful speech.”

Zuckerberg’s proposals would also inflict painful compliance burdens; the paper-pushing involved would alone be more than enough to scupper lesser competitors. He states in “New Rules,” for example:

“Facebook already publishes transparency reports on how effectively we’re removing harmful content. I believe every major internet service should do this quarterly, because it’s just as important as financial reporting. Once we understand the prevalence of harmful content, we can see which companies are improving and where we should set the baselines.” 

All this said: Facebook-driven efforts to secure global input on content review decisions not only can be but are unequivocally admirable, so long as Facebook is describing what it intends to adopt for itself on its own platform, using its own infrastructure such as its Oversight Board (here’s the charter and more coverage), its Civil Rights Audit (here’s more on it), its Task Force and other resources such as consortia like the Global Internet Forum to Counter Terrorism.

But the more ambitious campaign seemingly afoot, one that would transcend self-policing, is more radical, arrogant and condescending to the public. Having benefited from longstanding immunities for user-generated content (rooted in Sec. 230 of the Communications Decency Act) that boosted Facebook’s now-global stature (granted, others enjoyed this immunity, too), this now-dominant firm is urging speech standards and reporting burdens imposed even on companies yet-to-be. Such a development would impart permanence to today’s largest social media firms, echoing the semi-permanent exclusive franchises given to the AT&Ts of the world.

It matters not if Zuckerberg refers glowingly to partners as “thoughtful governments that have a robust democratic process.” While free speech in the U.S. is not subject to limitation by majority vote, speech is routinely criminalized throughout the world by these same “robust” democracies; and in the heated modern context, speech is being improperly equated to violence. The Brennan Center for Justice has noted how political pressure from governments has reformatted, and will surely again reformat, the information the public sees, and there is no way to know the nature of the internal decision process (and no judicial review if it occurs).

The removal of content at the behest of political leaders, subject to jail or fine, is a scenario playing out worldwide, with free expression the conceptual victim while real voices against tyranny can get silenced. George Washington University law professor Jonathan Turley deemed France one of the greatest global threats to free speech; yet rather than rise against this and defend free expression on principle, Facebook has pledged a partnership with French courts against online hate speech (but not against, say, socialist speech and content, a contradiction to which we must turn later) in which it will hand over information about user/suspects.

Social media yielding to more authoritarian governments has had, and will have, grave repercussions for social, cultural and political expression. While U.S. citizens are protected by the First Amendment, and one would naively think the Constitution would need to be amended on this score, maybe not. Social media giants and international governments engaging in censorious consultative alliances and frameworks incorporating politically derived norms threaten free expression even in the U.S. This is reflective of a broader phenomenon of nations exporting Internet laws, like privacy standards, globally as a condition of local operations.

Ominously, given the fervor over online speech, one would expect U.S. politicians to seek workarounds regarding that nettlesome First Amendment and Constitution — and they do. This can even materialize in unexpected ways. For example, the prevailing public-utility mythology, antitrust intervention and the anti-property administrative state apparatus (that some Republicans confoundingly favor expanding to monitor political objectivity) all combine with the wider phenomena of (1) regulation without implementing legislation and (2) deference to agency interpretations and guidance in ways that already circumvent constitutional protections. It’s not a leap to subsume speech in all this anti-constitutional behavior.

While private entities are incapable of censorship, reconfiguring as semi-governmental oversight bodies with ability to suppress would change that entirely. It would also nullify big tech platforms’ status as market entities and convert them into the essential facilities that they otherwise could never be. This puts conservatives in a bind and is part of the furor of the current debate over online regulation.

The pertinent risks, then, may not be the ill-considered reforms of Section 230 immunities that get so much ink; rather, regulation or laws otherwise “set[ting] baselines” could perform an end-run around 230, yet constitute an “indemnification” perhaps more powerful for Facebook’s new incarnation. That is, imposed general baselines would effectively moot Section 230 for alternative social platforms that would have been otherwise indemnified on content henceforth forbidden by “baselines for what’s prohibited.”

Any semblance of securing compromises and adopting industry standards certified by governmental bodies that apply to everyone would unfairly rule out, for alternative platforms (existing or emergent), the unfettered free expression from which Facebook and other big-tech players heretofore benefited. In Facebook’s case, this would help it avoid breakup and fines while furthering a politically convenient end game: rules that are tough yet tolerable, and that help ensure it does not lose its dominant position or become the next MySpace.

Paradoxically, Zuckerberg’s “New Rules” are even compatible with compromising with conservatives such as Sen. Josh Hawley (R-Missouri) and his “Ending Support for Internet Censorship Act” with its proposed requirement for demonstrating political objectivity in order to secure an “immunity certification” from the FTC to remain protected by Sec. 230. This is because the culmination of “New Rules” would effectively be an entirely new business model for the biggest of big tech: peace with central oversight of speech in an environment in which Section 230 would be less critical to their success. In one interpretation, as Vox observed:

“[M]aybe the giant platforms are now so enormous that they don’t need to distribute an infinite amount of content anymore—maybe they could survive by bringing in enormous but manageable amounts of content, which they could actually review before publishing—kind of like a media company. This used to be an unthinkable thought, and still seems to be if you talk to the people who run the platforms. But maybe that’s where we are headed, like it or not.”

Zuckerberg got plenty of attention this year, and I don’t mean to focus unfairly on him and his potential regulatory campaign, because a broader and emphatically pro-regulatory campaign looms. In July 2018, nearly a year before Zuckerberg’s declaration, the Columbia Journalism Review reported on a leaked draft white paper from Sen. Mark Warner containing a wide-ranging slate of proposals to regulate content, speech and (most ominously to me) anonymity. This dramatic and paternalistic intervention is presumably necessitated by the existence of online trolls, misinformation and election interference. In the paper, titled “Potential Policy Proposals for Regulation of Social Media and Technology Firms,” Warner expressed alarm that “bots, trolls, click-farms, fake pages and groups, ads and algorithm-gaming can be used to propagate political disinformation.”

Hoo boy. Trial balloon solutions included:

  • labeling of automated “bot” accounts;
  • limiting certain elements of anonymity;
  • placing limitations on Section 230 immunity for re-uploaded content;
  • liability for defamation, false light and deep fakes;
  • and defining certain services to be essential facilities. 

The loss of the civil right of anonymous online speech is an unexplored vulnerability (or intended consequence?) created by the various social media regulatory campaigns. (See here and here for some aggravated fretting over the foundational importance in America of the right to anonymous speech.) 

Social media firms bear no obligation to enable anonymity for anyone, of course, any more than they have the duty of objectivity that conservatives mistakenly think they have.

But regulation could make it such that other vendors could not provide that vital capability of mass-anonymous speech; and in the bargain regulation would protect incumbents from the competitive necessity to do so or be replaced. As is the case with the tech regulatory proposals from Republicans, the federal administrative state apparatus would inevitably expand under Democratic alternatives like Warner’s, with bodies like the FTC and FCC appointed to oversee and punish. Facebook’s Zuckerberg was perhaps unsurprised by Warner’s proposals, which had already been “circulated in tech policy circles” and helpfully did not recommend corporate breakup. Sen. Warner later reacted favorably to Zuckerberg’s new-rules-for-the-Internet manifesto, declaring: 

“I’m glad to see that Mr. Zuckerberg is finally acknowledging what I’ve been saying for past two years: The era of the social media Wild West is over….Facebook needs to work with Congress to pass effective legislative guardrails, recognizing that the largest platforms, like Facebook, are going to need to be subject to a higher level of regulation in keeping with their enormous power.”

Guardrails? Warner really did invoke “guardrails” imposed by the likes of himself. Social media possesses guardrails now, and a necessary but forgotten one of them is alertness and conscientiousness and “adulting” on the part of the user (for him or herself, and on behalf of dependents). Competition for authentication and anonymity and other kinds of dynamic content filtering and discipline can and will add still more urgently needed guardrails.

It is legislation and regulation, not their absence, that threaten to remove guardrails—and undermine America’s bedrock principle of free expression in the process. 

The dangers of social media company, legislative and “watchdog”-backed mandates to censor speech and otherwise regulate “harmful content” are themselves the harms facing the Internet of today and the splinternet of tomorrow. Some authoritarian-minded interventionists seek a pre-ordained deplatforming of unpopular ideas and controversial debate and even pretend they protect democracy. The public and policymakers need to remain on high alert, with a finger hovering over that “Delete” button.

Originally published at Forbes.