Trump’s AI order: Preempting the states without unleashing Washington
A new Trump executive order, “Ensuring a National Policy Framework for Artificial Intelligence,” pushes back on most state law and rulemaking affecting AI.
Top White House aides reinforced Trump’s call for a unified “framework,” while noting that the order “does not mean the administration will challenge every State AI law.” States remain defiant, but Trump’s order is a step that Congress needs to reinforce if it wants to prevent state legal patchworks from disrupting this nascent sector and harming U.S. competitiveness abroad.
Congress takes a pass, so far
Congress has unfortunately passed on recent opportunities (such as defense authorization) to preempt a growing state AI hodge-podge. At the same time, however, Congress should not get too trigger-happy imposing its own AI rules in ways that induce federal centralization and government steering in place of competitive enterprise—an outcome that could be as bad as, or worse than, what states are doing.
That caution is warranted. It is hard to expect sane, hands-off AI policy from the same Congress that just enacted the exceedingly interventionist and costly Coronavirus Aid, Relief, and Economic Security Act, the Infrastructure Investment and Jobs Act, the Inflation Reduction Act, and the CHIPS and Science Act.
Biden’s approach ditched
Trump’s Framework revokes what he called Joe Biden’s “attempt to paralyze this industry.”
Biden’s AI policy framework leaned heavily on subsidies, public-private partnerships (PPPs) and elite-player “voluntary commitments,” yielding federally blessed “blueprints” to cartelize and steer development, bake in ideological bias standards and impose “safety, security and trust” pledges. The result was an expanding administrative-state reach over speech, markets, employment policy and innovation across the entire AI ecosystem.
Trump’s federalization gamble
Trump’s order authorizes Department of Justice lawsuits against states that interfere with interstate commerce, tasks the Department of Commerce with identifying “onerous laws that conflict” with federal policy, and conditions receipt of certain federal funds on compliance with Washington preferences. It also deploys the Federal Trade Commission and the Federal Communications Commission (FCC) to preempt state policy on matters such as AI-model outputs. The framework further calls for White House legislative recommendations to “ensure that children are protected, censorship is prevented, copyrights are respected, and communities are safeguarded.”
Slotted into proper lanes, some of this is necessary; but from the 30,000-foot level, the order does not so much deregulate AI as federalize it, and that’s where the potential danger lies. While Trump positions his order as restoring innovation where Biden’s was burdensome and inhibiting, the reality is that some tech CEOs and trade groups largely supported and helped frame Biden’s AI order as a constructive step for governance, safety and competitiveness. Microsoft’s Brad Smith, for example, had called Biden’s approach “another critical step forward.”
Much as with Trump’s affinity for price controls and partial nationalizations of firms like Intel, some of these federal consolidation tools will be welcomed by progressives, but deployed to different ends. Industry’s prior alignment with Biden suggests it may not resist a post-Trump progressive regime with very different notions of what a federal “framework” should mean. Progressives, after all, are better at expanding government than their alleged opposition in Congress is at limiting it.
Already, in response to Trump’s order, FCC Commissioner Brendan Carr has initiated a “proceeding to determine whether to adopt a Federal reporting and disclosure standard for AI models that preempts conflicting State laws.” Any progressive administration inclined toward whole-of-government equity or speech controls in model output could have a field day with such federal disclosure mandates.
Similarly risky is the order’s declaration that states with “onerous AI laws” will be ineligible for federal broadband subsidies under the Broadband Equity, Access, and Deployment (BEAD) Program. That’s fine as a stopgap, but BEAD has already been implicated in rampant backdoor regulation of matters such as climate and labor policy. It needs termination, not reinforcement as a federal cudgel. Telecommunications is only one of several sectors that need expanded deployment of private—not government-led—infrastructure assets.
Separation of AI and State
There is an unavoidable tension between preventing undue interference with interstate commerce and preserving the proper role of states as laboratories that allow opting out and voting with one’s feet. That tension is legitimate and not unique to AI.
The deeper problem is that once Congress sinks its teeth in, it will almost certainly go far beyond merely preempting state AI schemes. Ours is a nearly fully networked society in which the private sector is only nominally in control of the largest-scale infrastructures. Just as there is now heavy federal intervention in communications, electrical grids, airspace and more, many agencies and departments already widely deploy AI. In some respects it is their uses of the technology and their tendencies toward surveillance, deplatforming, censorship and other click-and-swipe regulation that pose the greatest threats to our liberties—more so than private deployments or even state rules.
Congress should act, but at the same time declare itself highly allergic to federal legislation in frontier sectors. Anything beyond preemption of inappropriate state meddling is a recipe for centralization and cartelization.
That forces the real question: if a federal law, then what kind of federal law? Clean preemption would be best, but likely unattainable. We are not starting from a position of limited government, and we must remain vigilant against federal provisions that prove worse than some state interventions, especially when borders can still be crossed.
A proper “national” AI framework is one of privatization, not centralization. It begins with Congress recognizing the primacy of deep-cleaning federal activity: banning subsidies, PPPs and coercive national frameworks; reducing federal AI deployments and restoring limited government (the Constitution does not require AI to function); protecting dissident speech of all kinds; halting surveillance creep and the click-and-swipe regulation increasingly enabled by the IoT; and fencing in the administrative state generally.
Trump’s order will have succeeded if it induces Congress to go little further than limited preemption. But we must guard against expanding federal power under the banner of limiting state power. Much of the debate will be framed as a race with China. The best response—beyond the domestic liberalizations just noted—is to let China inefficiently subsidize its AI sector, while never copying that doomed model here.
The best federal legislation is the kind that specifies that, for the most part, the federal government cannot regulate AI. The remedies now being offered instead point toward new centralization and new powers for agencies ostensibly slated for termination or substantial reduction. Those missteps will harden the architecture future administrations will use for censorship, mandated bias, surveillance and procurement-driven cartelization—tasks at which progressives excel.
For more, see:
“Artificial Intelligence Model Legislation and Bill of Rights Regulating Government—Not Private Competitive Enterprise,” Social Science Research Network
“Careful: Misbegotten Government-Business ‘Blueprints’ Can Lobotomize Artificial Intelligence,” Forbes