Biden’s New AI Executive Order Is Regulation Run Amok


Signed yesterday, President Biden’s new executive order on artificial intelligence safety is already making waves across the technology industry. While the order’s stated intention is to ensure the responsible development and safe use of AI, its result is likely to be something else entirely. The order suffers from a classic “Ready! Fire! Aim!” mentality, jumping the gun with overly prescriptive regulations before assessing the nature and significance of the problem it aims to solve. It may prove one of the most dangerous government policies in years.

Coming in at more than 100 pages in length, the executive order is a directive across the “whole of government” to begin regulating this sweeping new technology, which has the potential to revolutionize entire sectors of the economy and our lives, from education to healthcare to finance. The order directs or makes requests of countless federal agencies, from the Departments of Energy and Homeland Security to the Consumer Financial Protection Bureau and the Federal Housing Finance Agency, just to name a few. These agencies, in turn, have the authority to issue regulations that carry the force of law.

One of the order’s more important mandates requires that companies developing the most advanced AI models report to the government information on model training, parameter weights, and safety testing. Transparency about the results of safety tests sounds practical, but in reality it could discourage tech companies from doing more testing, since any results would need to be shared with the federal government. Moreover, the very essence of AI research is iterative experimentation, and this mandate could bog down companies in red tape and reporting when they should be tweaking their models to improve safety. Given these tradeoffs, it’s unclear that all the reporting will improve safety for anyone.

Rather than try to identify problems and think of targeted solutions, the order simply assumes that factors like computing power and the number of model parameters are the right metrics for assessing risk. No evidence is offered to justify these assumptions. Other components of the order are similarly simplistic. For example, it directs the Office of Management and Budget, Commerce Department, and Homeland Security Department to identify steps to watermark AI-generated content. This is a bit like putting a band-aid on a bone fracture: sophisticated bad actors will be able to remove watermarks or produce high-quality deepfake content without them.

Also problematic is that the order’s data-sharing requirements may be illegal. In recent years, progressives have argued that the Defense Production Act (DPA), a 1950 law intended to make it easier for the government to influence private production during wartime, should be used to advance a variety of fashionable political causes. Already stretching the intent of the law, former President Trump used DPA authority to ramp up government purchases of ventilators during the Covid-19 pandemic. Now, Biden is using these same powers to direct tech companies to turn over proprietary AI data.

Read the full article at Forbes.