CEI Comments on Technology Platform Censorship

RE: Request for Public Comment Regarding Technology Platform Censorship

Docket No.: FTC-2025-0023

Introduction

The authors of this submission would like to thank the Federal Trade Commission (FTC) for the opportunity to comment on the agency’s “Request for Public Comment Regarding Technology Platform Censorship.”  Founded in 1984, the Competitive Enterprise Institute is a non-profit research and advocacy organization focused on regulatory policy from a free market perspective.

As the Commission considers utilizing its authority to combat what it views as “Technology Platform Censorship,” it is walking into a constitutional minefield.  The First Amendment to the U.S. Constitution protects the editorial decisions of technology platforms.  While the FTC’s request for information (RFI) labels tech platforms’ content moderation as “Censorship,” it is not. Content moderation is speech, and speech is not censorship.

The Commission has the opportunity under a new administration to restore stability, predictability, and legitimacy in its enforcement of the FTC Act. The agency, under the prior administration, issued less predictable guidance and unduly burdensome rules while pursuing risky litigation. With new leadership, the FTC could undo those regressive policy choices. Unfortunately, this RFI keeps the FTC on the same perilous course set by the previous agency leadership.

FTC Chair Andrew Ferguson has repeatedly stressed that the Commission has limited resources:

The Commission has limited resources. Every enforcement action we bring consumes a large chunk of those resources and may do so for many years as the case wends its way through court. We must be choosy about how we commit the taxpayers’ resources and ensure that we get the biggest bang for their buck.  

While there are disagreements as to whether the purpose of this inquiry is within the agency’s authority, we aver that the RFI, and any subsequent litigation, rulemaking, or 6(b) study, would certainly not be the best use of the FTC’s time and limited resources.

Further, if the Commission decides to pursue an action that runs afoul of the First Amendment, it will harm the legitimacy of the agency. Scholarship on the history, personnel, culture, evolution, and shortcomings of the FTC is robust. Statements and speeches given by past chairs, commissioners, and bureau heads are often cited and evaluated decades after their service on the Commission.

The Competitive Enterprise Institute and TechFreedom recently hosted a full-day conference on the constitutional limits of the FTC’s inquiry into “censorship.” Jonathan Emord, who practices both constitutional and administrative law, said of the FTC’s RFI:

In the advent of Loper Bright, the overruling of Chevron, we now have in this instance not only a First Amendment violation but there’s no statutory foundation in the Federal Trade Commission Act, in Section 5, 12, or 45, for the Federal Trade Commission to engage in this kind of an inquiry in the first instance. So, there’s really an absence of both statutory authority, so it’s ultra vires. It is also implicitly carrying on Chevron because it’s saying that we don’t need any specific statutory grant of authority. We just have this inherent ability to progress beyond the statute to reach these things. And then there’s the obvious First Amendment problem which arises when they overtly state in their solicitation for comment that their intention is to look at content. So, I mean it seems to be that the hand grenade has the pin pulled. They’re just holding it. And so, the inevitable course should be I think to either have them come to that realization on their own that they lack such power, which would be a delightful turn. Or if they bend to political pressure, as would be expected, to produce something that would be immediately challengeable on multiple grounds.       

We hope that these comments are helpful to the Commission in moving forward in a way that is consistent with the U.S. Constitution.

The First Amendment

When technology platforms block, deprioritize, hide, disclaim, remove, or demonetize content, they are engaging in constitutionally protected editorial discretion. When technology platforms deny, degrade, demonetize, or deprioritize a user’s access to their platforms based on speech content or affiliations, they are likewise engaging in constitutionally protected editorial discretion. As the Supreme Court said in Moody v. NetChoice,

[M]ajor social-media platforms are in the business, when curating their feeds, of combining “multifarious voices” to create a distinctive expressive offering. The individual messages may originate with third parties, but the larger offering is the platform’s. It is the product of a wealth of choices about whether—and, if so, how—to convey posts having a certain content or viewpoint. Those choices rest on a set of beliefs about which messages are appropriate and which are not (or which are more appropriate and which less so). And in the aggregate they give the feed a particular expressive quality.

If the FTC proceeds with an action, as contemplated by the RFI, it will interfere with the constitutionally protected editorial choices of technology platforms. “When the government interferes with such editorial choices . . . it alters the content of the compilation.”

Content Moderation

Technology platforms engage in content moderation to remove objectionable material: profanity, nudity, harassment, threats, illicit activity, and other distasteful content. Similarly, the Federal Trade Commission has restricted the public’s access to materials submitted to this RFI for reasons as vague as “inappropriate,” as pointed out by Daphne Keller, director of the Program on Platform Regulation at Stanford’s Cyber Policy Center. That example provides important context for the FTC’s RFI. “Any open system needs moderation rules and enforcement, or it quickly fills up with inappropriate content, profanity, and personal information,” as TechDirt’s Mike Masnick put it.

Question 4 of the RFI inquires as to how content moderation affects users and “content creators.” Technology platforms face difficulty when confronted with borderline content that may be considered suggestive or inappropriate but does not explicitly violate their content policies. One example involves pole dancing, an activity typically associated with erotic and nude dancing. However, there has been a rise in non-nude pole dancing classes as an alternative form of fitness and exercise. In turn, an online community of bloggers and social media personalities has emerged. Fitness studio owners often use social media to reach new customers for their small businesses, and some use the platforms themselves for monetization. They even have a trade association.

In 2019, content creators began complaining about purported “shadowbans” of their content and certain hashtags associated with pole dancing accounts, limiting those users’ ability to reach new customers, sell products, and monetize engagement.  There are many misconceptions surrounding the concept of “shadow banning.”  Even so, as Justice Gorsuch wrote in his concurrence in TikTok Inc. v. Garland, “One man’s ‘covert content manipulation’ is another’s ‘editorial discretion.’”

The Supreme Court in Barnes v. Glen Theatre, Inc. said that “nude dancing of the kind sought to be performed here is expressive conduct within the outer perimeters of the First Amendment, though we view it as only marginally so.” Some content creators perform non-nude pole dancing, but the content may still be suggestive in nature. How should technology platforms moderate this type of borderline content? Should they follow Erie v. Pap’s A.M. and evaluate the “secondary effects, such as impacts on public health, safety, and welfare”? This example illustrates the nuanced and often subjective nature of content moderation. Couple that complexity with the scale of hundreds of millions of third-party posts a day, and the absurdity and impracticality of the FTC acting as an arbiter become obvious.

Any inquiry into the legality of certain content moderation decisions would require the FTC to evaluate the content itself and ask: Is this content suggestive? Is this content inappropriate? Technology platforms are far better equipped than a politically influenced and resource-constrained FTC to make those decisions. And, unlike government actors, private platforms are protected by the First Amendment in doing so.

Consumer Protection

A substantial portion of the questions included in the RFI appear to be oriented toward the Commission’s consumer protection authority under Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices (UDAP). However, former FTC leadership and staff have voiced doubts as to whether the FTC can muster a viable claim or remedy under that authority as contemplated by the RFI. “But it’s not clear how the conduct described (in rather broad terms) in the RFI suggests a UDAP action that would hold water. Perhaps there is a case out there, but nothing comes to mind. And compelled speech might not be a constitutionally viable remedy in any case,” according to Daniel J. Gilman, former attorney advisor in the FTC’s Office of Policy Planning.

As to deception, the FTC has previously declined to act against a media company due to potential conflicts with the First Amendment. In 2004, two progressive advocacy groups petitioned the FTC to act against Fox News for its “Fair and Balanced” slogan, asserting that it constituted deceptive advertising.  On the very same day, FTC Chair Timothy J. Muris released a statement rejecting the request:

I am not aware of any instance in which the Federal Trade Commission has investigated the slogan of a news organization. There is no way to evaluate this petition without evaluating the content of the news at issue. That is a task the First Amendment leaves to the American people, not a government agency.

Former FTC Commissioner Josh Wright co-authored an article in 2021 exploring possible UDAP theories against social media companies for representations made in their rules and terms of service regarding content moderation.

The weight of the statutory authority and previous agency guidance statements mean that the FTC cannot use its Section 5 deception authority, in matters of speech on social media platforms—like with Twitter’s content flag—consistent with constitutional law. Chairman Muris’s laconic statement is an example for the agency to follow on First Amendment issues. First Amendment considerations, coupled with the materiality requirement for deception, place long odds on a successful Section 5 action against social media platforms.

As to fairness, the Commission would face an uphill battle in showing that there is substantial injury and that the injury is not reasonably avoidable by consumers. Consumers have numerous platforms from which to choose. Further, countervailing benefits would far outweigh the purported harm. As former Commissioner Wright notes,

The inescapable conclusion is that most consumers value some content moderation as a good. Consumer preferences already demonstrated that the benefits of the platform’s content moderation choices provide countervailing benefits that outweigh any harm imposed upon them by the same choices.   

For example, when free speech platform Gettr launched in 2021, the platform quickly realized the need to moderate content beyond spam and illegal material.

Advertising

First Amendment protections do not disappear because a company is engaged in making a profit. A substantial portion of the technology platforms identified in the RFI are multisided platforms. To be successful, companies “open up platforms to a wide range of contributors, while curating their platforms to minimize unpleasant surprises.” Many of these platforms rely on advertising as their primary source of revenue. The environment in which these platforms operate is dynamic, often shaped by economic incentives, technological capabilities, user expectations, and brand reputation. To maintain both a healthy user base and advertising revenue, platforms must perform a balancing act when moderating content. Ben Sperry, senior scholar in innovation policy with the International Center for Law & Economics, explains in his 2024 Gonzaga Law Review article:

They are profit-seeking, to be sure, but the way they generate profits is by acting as intermediaries between users and advertisers. If they fail to serve their users well, those users will abandon the platform. Without users, advertisers would have no interest in buying ads. And without advertisers, there is no profit to be made. Social media companies thus need to maximize the value of their platform by setting rules that keep users sufficiently engaged, thus attracting advertisers who will pay to reach them.

Platforms must also be mindful of the wants and needs of advertisers, who are often concerned with brand reputation or brand safety.

Collusion by Compliance

If the FTC uses its consumer protection authority against technology platforms’ content moderation policies and terms of service, it could inadvertently foster the precise types of collusive behavior that it seeks to better understand in Question 6 of the RFI. Regulatory requirements, whether imposed by rules, orders, or consent decrees, will induce market participants to adopt similar policies and practices, incentivizing the very homogeneity that the general thrust of the RFI seeks to correct.

This phenomenon was discussed within the context of the FTC’s UDAP enforcement related to cybersecurity practices in a 2024 Atlantic Council report authored by Isabella Wright and Maia Hamin. Their analysis showed “how the FTC, armed with a mandate from 1914, has effectively constructed a body of ‘reasonable’ cybersecurity practices and clear precedent for their enforcement.”  Further, “[s]tudying the standards embedded within the ‘common law’ for consumer data security that the FTC has built through its cases offers an immediately useful foundation for the creation of cyber standards in the software liability context and beyond.”

Even complaints signal what practices the FTC considers illegal and will influence technology platforms’ decisions in formulating their content moderation practices and policies. Uniform practices emerge as companies converge on similar content policies, driven by the common need to comply with or avoid regulatory attention. This will lead to more standardization and reduced differentiation, even when a market has numerous competitors. This type of enforcement would be contrary to the President’s recent executive order on “Reducing Anti-Competitive Regulatory Barriers,” one that is meant to “increase[] options for consumers.”

Antitrust

The Commission should be mindful that the Texas social media law at issue in the Supreme Court’s decision in Moody v. NetChoice is an antitrust law. The Texas legislature found that “social media platforms with the largest number of users are common carriers by virtue of their market dominance.” And the legislation targeted those platforms that had more than 50 million users. The Texas legislature determined that platforms with over 50 million users were illegal monopolies. And its proposed remedy was common carrier regulation. What remedy would the Commission consider in its own case? As the Court in Moody said, “However imperfect the private marketplace of ideas, here was a worse proposal—the government itself deciding when speech was imbalanced, and then coercing speakers to provide more of some views or fewer of others.” The First Amendment is an insurmountable hurdle for the FTC.

Question 6 of the RFI asks whether platforms’ content moderation decisions were made possible by a lack of competition. The answer to that question is emphatically no, because there is robust competition among technology platforms. Relatedly, Question 6 asks if content moderation practices and policies affect competition. The answer to that question is emphatically yes, because dissatisfaction by some users and creators led to new entrants in the market.

On October 20, 2021, the Trump Media & Technology Group and the Digital World Acquisition Corp. announced a merger in addition to plans to launch a social media network called Truth Social. The Truth Social mobile application launched on the Apple App Store on February 20, 2022, and soon became the top-ranked free app on the App Store. The demand for the Truth Social app was so high that many consumers were put on a waitlist, receiving the message “Due to massive demand, we have placed you on our waitlist.”

Additionally, on April 14, 2022, Elon Musk made a $43 billion offer to purchase Twitter, a social media platform that had garnered much criticism for its content moderation policies. Speaking on the bid during a TED Talk in Vancouver, Canada, Musk said, “I think it’s very important for there to be an inclusive arena for free speech.” The deal closed in October 2022. In July 2023, Musk rebranded Twitter by changing the platform’s name to X. It should be noted that rebranding, particularly a company name change, holds extraordinary risks. Despite the platform losing close to a million users after his takeover, Musk has said that X is now worth more than the $44 billion he paid for it.

Further, dissatisfaction among former Twitter users with X’s new content moderation policies under Elon Musk fueled the rise of additional competitors, like Mastodon and Bluesky. Bluesky surpassed 30 million users in February 2025.

Collusion or Collaboration?

Questions 6(a) through 6(c) inquire about certain collaborations between technology platforms on content moderation, suggesting that such actions could be “collusion.” However, collaboration is not always anticompetitive collusion under the antitrust laws. Oftentimes, large and small platforms collaborate to counter terrorist content and fight child exploitation material, according to antitrust attorney Imanol Ramírez. Ramírez wrote in a 2020 paper for the Harvard Law School Antitrust Association that

To some extent, major platforms share their moderation tools with smaller companies and cooperate in industry-wide efforts to tackle illegal and harmful content. The Global Internet Forum to Counter Terrorism, which was founded by Facebook, Microsoft, Twitter and YouTube, allows companies to automatically remove terrorist content using a unique digital signature of an image, known as hash, that is matched against a database containing previously identified illegal content. Also, Microsoft and Dartmouth College donated PhotoDNA to the National Center for Missing & Exploited Children, a hash matching technology to find and remove images of child exploitation, which is shared with smaller technology companies, developers and other organizations.

American tech companies invest in mechanisms that make the internet safer not only for Americans, but for the entire world.
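The hash matching Ramírez describes is, at bottom, a simple lookup: a platform computes a digital signature of an uploaded file and checks it against a shared database of signatures for previously identified illegal content. The sketch below, written in Python, is purely illustrative. PhotoDNA and the GIFCT database rely on proprietary perceptual hashing designed to survive minor edits such as resizing or re-encoding; this example substitutes an ordinary SHA-256 digest and hypothetical placeholder entries simply to show the general pattern.

# Illustrative sketch only. PhotoDNA and the GIFCT shared database use proprietary
# perceptual hashing; this example substitutes an ordinary SHA-256 digest and
# hypothetical placeholder entries to show the general hash-matching pattern.
import hashlib

# Hypothetical database of digests for previously identified illegal content,
# of the kind shared among participating platforms.
KNOWN_BAD_HASHES = {
    "placeholder-digest-value",  # not a real digest
}

def digest(image_bytes: bytes) -> str:
    """Compute a digital signature ("hash") for an uploaded image."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_remove(image_bytes: bytes) -> bool:
    """Return True if the upload's signature matches the shared database."""
    return digest(image_bytes) in KNOWN_BAD_HASHES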

Ramírez also notes that laws and regulations governing how technology platforms moderate content can create significant competition barriers. He points to efforts by Germany, France, the United Kingdom, India, Thailand, and Australia to punish tech platforms for not removing content. Likewise, as discussed in Murthy v. Missouri, the White House pressured U.S. tech companies to take down content involving asserted COVID-19 misinformation under the threat of possible antitrust reforms aimed at the companies.

On December 11, 2024, the FTC and DOJ jointly withdrew the Antitrust Guidelines for Collaborations Among Competitors (2000). The Republican commissioners were right to vote against the withdrawal of the guidelines, which came, as Commissioner Ferguson said, “a mere 40 days before the country inaugurates a new President.” Commissioner Ferguson was also correct in noting that the “Commission should from time-to-time revisit its nonbinding guidance to ensure that it properly informs the public of the Commission’s enforcement position, promoting transparency and predictability.” Further, Commissioner Holyoak echoed disapproval of the vote to withdraw the Collaboration Guidelines “right after an administration-changing election.” And Holyoak noted that the Commission did so “without providing any replacement guidance, or even intimating plans for future replacement,” which “leaves businesses grasping in the dark.”

Likely all five commissioners, at the time of the withdrawal, would have agreed that the nearly 25-year-old Collaboration Guidelines needed modernization to some extent. After all, the 2000 Collaboration Guidelines were contemplated and issued before the terrorist attacks of September 11, 2001. Given that these guidelines had been operative for the better part of the last quarter century, their framework is not irrelevant to the FTC’s RFI. The antitrust authorities should not pursue aggressive actions against technology platforms that properly relied on the 2000 Collaboration Guidelines. At the same time, the antitrust agencies should likewise not pursue aggressive actions that ignore modern advances in technology, evolving national security threats, and heightened consciousness regarding public health crises that are not reflected in the 2000 Collaboration Guidelines.

The FTC and DOJ properly recognized over a decade ago that the 2000 Collaboration Guidelines were insufficient in providing the necessary guidance to businesses on information sharing that could “help secure the nation’s networks of information and resources.”  On April 14, 2014, the FTC and DOJ issued their Antitrust Policy Statement on Sharing of Cybersecurity Information.  The agencies issued the policy statement to “make it clear that they do not believe that antitrust is – or should be – a roadblock to legitimate cybersecurity information sharing.”  Further, the policy statement states:

Our modern economy and national security depend on a secure cyberspace. Core features of our nation’s cybersecurity strategy are to improve our resilience to cyber incidents and to reduce and defend against cyber threats. One way to make progress on these fronts is by increasing cyber threat information sharing between the government and industry, and among industry participants. In his February 2013 Executive Order, the President highlighted the important role the government can play in sharing information with U.S. private sector entities, while ensuring that privacy and civil liberties protections are in place. Another important component of securing our IT infrastructure is through the sharing of cybersecurity information between and among private entities.  

There is no indication that the 2014 Antitrust Policy Statement on Sharing of Cybersecurity Information has been withdrawn or rescinded.

The Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA) routinely alert the public and businesses to legitimate cybersecurity threats using advisories, FBI Flashes, FBI Private Industry Notifications (PINs), and joint statements. Former FBI Director Christopher Wray delivered remarks at the CISA National Cybersecurity Summit 2020 where he said:

At the FBI, we’re finding new ways to carry out our mission while keeping people safe, much like all of you. Because the threats we face don’t stop—even during a pandemic. . . . We’ve got to take an enterprise approach—one that involves government agencies, private industry, researchers, and nonprofits, across the U.S. and around the world. . . . That team approach is central to how we work with both the public and private sectors, from other government agencies, to companies of all sizes, to universities, to NGOs. . . . We might not be able to tell you precisely how we knew you were in trouble—but we can usually find a way to tell you what you need to know to prepare for, or stop, an attack. . . . Regulators like the FTC, the SEC, and state AGs often want to know whether a company is cooperating with law enforcement . . . . But we’re also engaged with election officials, campaigns, party committees, and social media companies to share information and enhance resiliency. You may have seen that Twitter and Facebook took down accounts associated with a Russian influence campaign trying to hire unwitting U.S. journalists and place political ads. Importantly, Twitter and Facebook did so before those accounts could develop anything more than a nascent following, based on information from our Foreign Influence Task Force.

At some point, the FBI and CISA’s joint mission of alerting companies to traditionally understood cybersecurity threats began to creep. The agencies began to aggressively communicate concerns about purported election-related misinformation, a development discussed at length in the Murthy v. Missouri case. The Supreme Court noted that

These agencies communicated with the platforms about election-related misinformation. They hosted meetings with several platforms in advance of the 2020 Presidential election and the 2022 midterms. The FBI alerted the platforms to posts containing false information about voting, as well as pernicious foreign influence campaigns that might spread on their sites. Shortly before the 2020 election, the FBI warned the platforms about the potential for a Russian hack-and-leak operation. Some companies then updated their moderation policies to prohibit users from posting hacked materials. Until mid-2022, CISA, through its “switchboarding” operations, forwarded third-party reports of election-related misinformation to the platforms. These communications typically stated that the agency “w[ould] not take any action, favorable or unfavorable, toward social media companies based on decisions about how or whether to use this information.”

The FBI and CISA’s mission creep was detailed in an interim staff report by the U.S. House of Representatives’ Committee on the Judiciary and the Select Subcommittee on the Weaponization of the Federal Government. The Court in Murthy also noted that the White House, the Office of the Surgeon General, and the Centers for Disease Control and Prevention engaged with technology platforms about COVID-19 misinformation. The White House “pushed them to suppress certain content, and sometimes recommended policy changes.” Also, it is worth restating that “White House communications officials called on the platforms to do more to address COVID-19 misinformation—and perhaps as motivation, raised the possibility of reforms aimed at the platforms, including changes to the antitrust laws and 47 U.S.C. §230.”

The previous administration blurred the lines between cybersecurity and misinformation. It would be a better use of the Commission’s time and limited resources to request comments for new collaboration guidelines so that companies are not left “grasping in the dark.”  

Anonymous Comments

As the Commission considers comments submitted to this RFI, FTC staff should be deliberative in weighing anonymous comments without verifiable information. On July 9, 2024, the FTC published its first Interim Staff Report on Prescription Drug Middlemen. Then-Commissioner, now Chair, Andrew Ferguson issued a concurring statement as part of that proceeding in which he cautioned against overreliance on anonymous comments. He said,

My colleagues are correct that comments, even anonymous ones, are an important part of the Commission’s enforcement and 6(b) process. But we ought to treat anonymous comments with circumspection. After all, we cannot know who submitted the comments, nor do we have any method for verifying the accuracy of a single word they contain. We therefore cannot be sure how much weight, if any, to accord them as we try to understand these markets. The PBM Interim Staff Report nevertheless ascribes those anonymous submissions to independent pharmacies, or pharmacies generally, and treats their contents as fact.

Further, then-Commissioner Ferguson also criticized the FTC’s reliance on anonymous comments in promulgating its rule banning noncompete agreements in employment contracts. Ferguson said, “we announce a national rule of private conduct on the basis of a handful of empirical studies and unverifiable, often anonymous comments purporting to describe particular noncompete agreements . . . .”  

As of May 20, 2025, roughly 714 of the comments submitted to this RFI were anonymous. That’s more than 26 percent. The FTC reviewed over 1,200 comments in its study of Pharmacy Benefit Managers (PBMs), and over 160 were submitted anonymously. The percentage of anonymous comments submitted to this RFI (26.6 percent) is significantly higher than that of the PBM study (12.9 percent).

Conclusion

FTC leadership and staff are no doubt aware of the practical and constitutional problems attendant to regulatory and enforcement actions stemming from this RFI. With little chance of practical benefits for consumers and a low probability that the Commission’s legal arguments would hold up in court, pursuing this inquiry further would be a poor allocation of the FTC’s limited resources. It would invite questions about jawboning, agency politicization, and abuse of regulatory power.

Respectfully submitted,

Alex R. Reinauer
Research Fellow
Competitive Enterprise Institute
[email protected]

Jessica Melugin
Director of the Center for Technology & Innovation
Competitive Enterprise Institute
[email protected]