Trump’s AI plan clears the field — then occupies it


Preempting state overreach in artificial intelligence (AI) regulation is urgent — but it’s only half the job. Firm limits on federal power matter just as much. Yet Washington rarely clears the field without planning to occupy it. The window for preserving any meaningful separation between technology and the state is closing — and what replaces it will reverberate for generations.

The White House’s National Policy Framework for Artificial Intelligence (press release, section-by-section summary) was delivered to Congress on March 20, just two days after Sen. Marsha Blackburn’s (R-TN) 391-page legislative draft, the “TRUMP AMERICA AI Act.”

That acronym must be seen to be believed: “The Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act.”

Blackburn’s press release declared, “Congress must answer [Trump’s] call to establish one federal rulebook for AI to protect children, creators, conservatives, and communities across the country and ensure America triumphs over foreign adversaries in the global race for AI dominance.”

If conservatives and libertarians believe their own free-market rhetoric, they know other nations will not achieve durable AI dominance by picking winners and subsidizing them into stagnation — and we should not emulate that here.

Avoiding a fragmented state-by-state regime is necessary, as states lack the authority to impede interstate commerce. But the Trump/Blackburn "4 Cs" architecture — focused on children, creators, conservatives, and communities — embodies not deregulation but a single national rule edifice laden with duties, liabilities, and reporting burdens.

A fifth "C" would be "cartelization," since Blackburn's proposal (like Trump and Biden executive actions before it) "[p]romotes partnerships between government, business, and academia to advance AI research." Like a Mar-a-Lago invitation, those partnerships are unlikely to include you, me, or startups.

Unrelated long-sought tech regulatory ambitions are also folded in. One is the dismantling of Section 230 of the Communications Decency Act, which shields online platforms from liability for user-generated content and moderation decisions. Other add-ins include the household surveillance measure known as the Kids Online Safety Act (KOSA), and the premature, digital-ID-adjacent reactions to deepfakes like the NO FAKES Act.

In the administration’s framing, uncertainty over AI requires “strong Federal leadership to ensure the public’s trust in how AI is developed and used,” and it proclaims six priorities: protecting children and empowering parents; strengthening communities; safeguarding intellectual property; preventing censorship while protecting free speech; promoting innovation and US leadership; and building an AI-ready workforce through education.

The pot calling the kettle black on AI harms

Those are worthy goals, but their resolution does not lie in Washington. The real AI problem is not private-sector misbehavior or a lack of government oversight. Rather, it is government itself as an AI actor — weaponizing, funding, deploying, steering, indemnifying contractors, and displacing market disciplines that should constrain these systems.

Even recent headlines about firms like Anthropic clashing with the Pentagon over demands for unrestricted military and surveillance use miss the larger point. The more relevant question is what demands the vast, multi-hundred-billion-dollar federal contracting apparatus does secure compliance with — and how that shapes the trajectory of AI. The Trump/Blackburn frameworks are largely silent on constraining federal use, beyond avoiding "woke" outputs. Federal influence is already inappropriately heavy in the largest-scale infrastructures, including communications, electrical grids, airspace, and more. Yet government deployment — paired with its tendencies toward surveillance, deplatforming, and click-and-swipe regulation — poses the greater threat to liberty.

Disregard of enumerated powers and over-delegation

The Trump/Blackburn centralization entails new powers for agencies ostensibly slated for reduction. The bill grants sweeping authority to develop and issue sub-regulatory guidance. This architecture will enable future progressive administrations to flip the script and impose the very censorship, bias enforcement, and viewpoint policing that these frameworks purport to forbid.

Notable here are the expanded roles of the Federal Communications Commission and Department of Commerce in age verification, and of the Federal Trade Commission in governing chatbot “design, development, and operation” to purportedly “prevent and mitigate harms to users,” as are FTC audits of high-risk systems for viewpoint discrimination.  

Permanent new powers for the Department of Energy include agreements with data centers to “advance the self-sufficiency of the covered entity and protect the people,” picking up where Trump’s early March Ratepayer Protection Pledge left off. Pledges and coordination may have a limited role, but true network liberalization requires far more — expanded access, ownership flexibility, and infrastructure growth beyond federal supervisory schemes.

The Energy Department is also charged with “standardized and classified testing and evaluation of advanced AI systems to collect data on the likelihood of adverse AI incidents for a given advanced AI system” and with implementing testing protocols and third-party assessments. Meanwhile, militarization gets a pass — apart from steps preventing inappropriate data sharing with foreign powers, the Department of Defense gets no mention in the draft, despite being the 800-pound gorilla of AI deployment. Neither does the Department of Homeland Security, apart from its involvement in a “toolkit for best practices” with respect to cybersecurity incidents.

The Department of Labor and the Office of Management and Budget (OMB) would oversee publicly available quarterly reports on AI-related hiring, firing, and displacement at publicly traded companies.

Blackburn's draft also charges OMB with enforcing "Unbiased AI Principles" in federal contracting (among them truthfulness, historical accuracy, scientific inquiry, objectivity, ideological neutrality, and nonpartisanship). This effectively nationalizes contested judgments about bias and speech, using acquisition policy to steer — a tactic popular during the Biden administration that a future progressive administration will gleefully exploit. Biden's Disinformation Governance Board, after all, saw itself as unbiased.

Rather than devolving authority, the framework codifies entities like the Center for AI Standards and Innovation and expands roles for the National Institute of Standards and Technology and National Science Foundation (NSF), including testbeds, grants, and grand challenges. It would also embed the National Artificial Intelligence Research Resource within the NSF. On copyright, skepticism toward fair use in AI training pairs with expanded enforcement and standards-setting. It is doubtful Washington will solve deepfakes through such means.

A new Kids Online Safety Council would "provide reports to Congress with recommendations and advice on matters related to the safety of minors online." Its membership would include parents, academics, and educators, as well as online platform and video game representatives.

A proper national framework regulates Washington

A few short years ago, Biden's AI policy framework leaned heavily on subsidies, public-private partnerships, and voluntary commitments, yielding federally blessed blueprints to cartelize and steer development, bake in ideological standards, and impose safety, security, and trust pledges.

Trump's framework criticizes that model as burdensome, yet risks replicating it in consolidated form. The reality is that many tech CEOs and trade groups largely supported and helped frame Biden's AI approach as a constructive step for governance, safety, and competitiveness; one leader called it "another critical step forward." Much like Trump's affinity for price controls, partial nationalizations, and other industrial policy forays, federal AI consolidation will be embraced by progressives.

While Trump's December outline claimed that "Congress should not create any new federal rulemaking body to regulate AI," it allowed that Congress "should instead support development and deployment of sector-specific AI applications through existing regulatory bodies with subject matter expertise and through industry-led standards." Under limited government, that could be innocuous. Instead, the next logical but unwelcome step beyond Trump/Blackburn would be discrete sectoral regulation of AI. Trump may express a vision distinct from Biden's, but they are on converging tracks.

Regrouping is needed. The best federal legislation would simply specify that, for the most part, the federal government is prohibited from regulating AI. Proper AI model legislation would entail banning subsidies, public-private partnerships, and coercive national frameworks in AI; reducing federal AI deployments (that is, restoring limited government that does not require AI to function); advancing private insurance and warranties by avoiding indemnification in the sweeping contracting/procurement state; protecting dissident speech of all kinds; and halting surveillance creep and the click-and-swipe regulation increasingly enabled by the AI-powered Internet of Things.

Is compressing 50 regulators into one progress? It can be. But here we get preemption alongside consolidation — nationalized, structured, and prematurely legitimized. This is not a slippery slope; it begins fully formed as a national rulebook with duties of care, liability exposure, and content mandates layered onto AI developers. What starts as light touch is already positioned for expansion.

Itchy legislative trigger fingers turn dynamic sectors into managed ones; and in this case, potentially lobotomized sectors.

For more see:

"Careful: Misbegotten Government-Business 'Blueprints' Can Lobotomize Artificial Intelligence," Forbes

"Trump's AI order: Preempting the states without unleashing Washington," Competitive Enterprise Institute

"Deepfakes and Beyond: Who Wins if Social Media Platforms are Regulated," Forbes

"Artificial Intelligence Model Legislation and Bill of Rights Regulating Government—Not Private Competitive Enterprise," Social Science Research Network