Biden Order on Artificial Intelligence Puts Too Much Faith in Regulators

This AI order risks too much regulatory harm in exchange for avoiding the risks inherent in adopting new technologies.


President Biden’s executive order (EO) on artificial intelligence (AI) directs more than a dozen federal agencies to, among other things, “establish guidelines and best practices, with the aim of promoting consensus industry standards, for developing and deploying safe, secure, and trustworthy AI systems.” The order overestimates what regulators can accurately anticipate about a dynamic technology and ignores the challenge posed by actors outside the reach of U.S. law. It may also exceed what federal agencies are constitutionally allowed to do.

If you think bureaucrats do a better job encouraging innovation and mitigating risk than markets do, this order may seem reasonable. But if you’re a student of regulatory history, even one who doesn’t know much about AI or the details of the EO, you can already predict its harms. The order will leverage the government’s enormous buying power to shape the way AI technology is developed. Distorting products, services, and uses that would otherwise be shaped by private markets may lead to worse, not better, outcomes.

Obvious national-security risks are one thing, but this order goes far beyond those concerns. Catching so much AI technology in so wide a net will almost certainly slow the progress of AI by thwarting this country’s long and successful tradition of “permissionless innovation.” In the real world, there’s no way to realize the enormous potential of AI and simultaneously eliminate all hazard. As Thomas Sowell reminds us, “There are no solutions. There are only trade-offs.” Inserting arbitrary government requirements in the name of eliminating risk or promoting particular outcomes mutes the market signal and replaces it with pessimistic a priori regulations. That’s problematic because the unelected bureaucrats the EO charges with identifying risks and setting standards for AI are poor substitutes for the decentralized knowledge of market forces. It’s not that some of the concerns in the order won’t turn out to be valid; it’s that regulations degrade the market forces that would normally address the problems no one anticipated. The federal government cannot escape its knowledge problem with political popularity or good intentions.

All of this hamstringing of U.S. AI firms will be compounded by the unfettered innovation of foreign firms. Worse, some of those actors will be affiliated with regimes unfriendly to the U.S. The gun-control debate is a useful parallel for thinking about AI regulation. Gun laws may shape the behavior of law-abiding citizens, but they do far less to stop those willing to obtain firearms through other means. Similarly, creating restrictive rules for domestic firms won’t stop AI developers operating outside the reach of U.S. law. China and Russia will not be thwarted by the order’s concerns about “advancing equity.” But those parameters may chill the experimentation necessary for progress by U.S. firms, especially firms angling for a government contract. The U.S. should not cede global leadership on AI by overregulating it at home.

Moreover, because these regulations come in the form of an executive order rather than a statute debated by elected members of Congress, none of the above objections has been properly aired. Paradoxically, that may be the very reason the edicts are ultimately challenged in court. Article I of the Constitution grants Congress sole authority to make laws, yet supporters of the order cheer the current administration’s end run around that constraint. Maia Hamin, associate director with the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab, said the order “could be a path to getting something like a regime for pre-release testing for highly capable models without needing to wait on congressional action.” That path has its policy problems, but if it is charted at all, it should be charted by Congress.

Now, the Supreme Court may be the best hope for eventually stopping this regulatory overreach. Because the teeth of the order will come from agencies making rules that carry the force of law, those rules may also be challenged under the “major questions” doctrine. The Court has grown increasingly concerned about agency overreach, as evidenced by its 2022 decision in West Virginia v. EPA. While Congress has hosted summits and circulated various regulatory proposals for AI, it has not spoken definitively. The order’s de facto regulation of AI may well be found unconstitutional, like the Biden administration’s earlier actions on vaccine mandates and student-loan forgiveness.

Read the full article at National Review.