Free to Prosper: Artificial intelligence
The rapid development of artificial intelligence (AI) presents both significant opportunities and challenges for US policymakers. While AI offers the potential to revolutionize industries from healthcare to finance to transportation, it also introduces new risks, such as data privacy concerns, cybersecurity threats, and the spread of misinformation online. Congress should take a balanced approach to regulating AI, one that fosters innovation while addressing those risks only where existing laws demonstrably fall short.
Many current discussions around AI regulation are driven by fears of highly speculative risks, such as a potential “AI apocalypse” where artificial general intelligence surpasses human intelligence and poses existential threats. While it is important to monitor long-term risks, Congress should focus on concrete, immediate risks in areas like data security and election interference, recognizing that many fraudulent practices are already covered by existing law.
Creating overly restrictive regulations in response to hypothetical worst-case scenarios would stifle innovation and place US companies at a competitive disadvantage to foreign adversaries like China. National security agencies will often be best equipped to address threats from bad actors.
Some critics have raised concerns about the energy consumption of AI technologies, particularly as AI models grow larger and more complex. However, AI's energy use creates jobs and leads to follow-on innovations. It also drives investment in more efficient computing infrastructure and in new sources of energy. Instead of taxing or imposing blanket restrictions on AI's electricity consumption, Congress should allow the market to drive energy efficiency improvements.
Commendably, companies in the AI sector have already begun implementing self-regulatory measures, such as establishing ethical guidelines and adopting responsible AI practices. Congress should recognize the flexibility of these efforts while avoiding heavy-handed rules that could discourage companies from taking proactive measures of their own. To help this enormously promising technology flourish, Congress should:
- Encourage evidence-based regulatory approaches;
- Resist calls for sweeping AI legislation; and
- Avoid creating a new federal AI regulatory agency.
Evidence-based approaches: Congress should require that any AI regulation be grounded in strong empirical evidence. Regulatory proposals should demonstrate that they address an actual, measurable problem rather than simply react to abstract concerns. That means following a structured process to ensure effective policymaking: 1) demonstrating that a problem exists, 2) defining the desired outcome, 3) identifying alternative solutions, and 4) ranking those alternatives by cost-effectiveness and societal net benefit.
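For readers who want to see the ranking step in concrete terms, the following is a minimal sketch in Python, using invented alternatives and cost-benefit figures purely for illustration. It assumes steps 1 through 3 have already produced a demonstrated problem, a desired outcome, and a set of alternatives, and it simply orders those alternatives by societal net benefit, using a benefit-cost ratio as a tiebreaker. Nothing in it reflects actual estimates or any particular proposal.

```python
from dataclasses import dataclass


@dataclass
class Alternative:
    """A hypothetical regulatory alternative with invented annualized figures."""
    name: str
    estimated_benefit: float  # annualized societal benefit, $ millions (illustrative)
    estimated_cost: float     # annualized compliance cost, $ millions (illustrative)

    @property
    def net_benefit(self) -> float:
        # Societal net benefit: benefits minus costs
        return self.estimated_benefit - self.estimated_cost

    @property
    def benefit_cost_ratio(self) -> float:
        # A simple cost-effectiveness measure
        return self.estimated_benefit / self.estimated_cost if self.estimated_cost else float("inf")


# Hypothetical alternatives for a problem assumed to be demonstrated in steps 1-3
alternatives = [
    Alternative("Enforce existing consumer-protection law", 120.0, 15.0),
    Alternative("Narrow, sector-specific disclosure rule", 90.0, 40.0),
    Alternative("Broad pre-approval licensing regime", 150.0, 220.0),
]

# Step 4: rank alternatives by net benefit, breaking ties with the benefit-cost ratio
ranked = sorted(alternatives, key=lambda a: (a.net_benefit, a.benefit_cost_ratio), reverse=True)
for rank, alt in enumerate(ranked, start=1):
    print(f"{rank}. {alt.name}: net benefit {alt.net_benefit:+.0f}, "
          f"benefit-cost ratio {alt.benefit_cost_ratio:.1f}")
```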
No sweeping legislation: Congress should resist efforts to impose licensing requirements or other mandatory pre-approval processes for AI models, as these would create unnecessary hurdles that disproportionately inhibit startups, open-source developers, and smaller companies. When states pass sweeping anti-innovation laws governing AI, Congress should consider ways to preempt them.
No Department of AI: Proposals to create a new federal agency dedicated to regulating AI would result in bureaucratic meddling that slows the technology's development. Congress should instead rely on existing regulatory frameworks and ensure that current rules are updated to reflect modern technology. Likewise, Congress should avoid efforts to establish international AI regulatory bodies akin to the International Atomic Energy Agency, as they would undermine US sovereignty.