A brief look at the Senate’s proposed AI regulations
As new artificial intelligence (AI) models are released and their capabilities grow, fears have begun to crop up as people wonder what AI can and will do. Until recently, Congress focused on proposing several competing bills, each addressing one set of issues or another, rather than releasing any comprehensive AI policy plan. That changed last month, when the Bipartisan Senate AI Working Group released a brief roadmap for Senate-level AI policy.
At first glance, the roadmap seems light on direct regulatory suggestions, especially compared to the more alarmist rhetoric on AI coming from some members of Congress. It homes in on areas like funding AI innovation, helping potentially displaced workers, and using AI in government. But the roadmap still contains multiple proposals that may affect regulatory efforts.
The AI Working Group “Supports a report from the Comptroller of the United States to identify any significant statutes and regulations that affect the innovation of artificial intelligence systems.” This suggestion has significant implications for AI regulation. The Comptroller General leads the Government Accountability Office (GAO), which can use its extensive resources and status as a government agency to compile an exhaustive list of relevant statutes and regulations faster than any non-governmental organization could. Such a list could prove invaluable in reducing AI-specific regulatory barriers: interested parties could skip the work of identifying harmful policies and move straight to advocating for their removal.
Additionally, the Working Group encourages “Efforts to ensure that stakeholders – from innovators and employers to civil society, unions, and other workforce perspectives – are consulted as AI is developed and then deployed by end users.” One of the biggest fears about AI is that it could displace hundreds of thousands of workers across the country, though many argue it will instead create new jobs. The proposal to consider “unions, and other workforce perspectives” while developing and deploying AI hints that future regulation may seek to restrict companies from using AI as a substitute for labor. That would be a grave misstep: limiting the ways firms can use AI as an input would preserve now-redundant jobs and forgo some of the wealth-generating productivity growth AI can bring.
The Senate AI Working Group also suggests that relevant committees “Review whether other potential uses for AI should be either extremely limited or banned.” Of particular interest here is the extreme vagueness of the phrase “potential uses.” While this could be directed at the faint chance of developing artificial general intelligence, which would be far closer to humans in capability than any existing AI, it could also mean that many potential uses of current AI may be curtailed or prevented. Curtailing such uses would likely invite mass lobbying from industries seeking to protect themselves from superior, AI-powered competitors. Such lobbying would undoubtedly reduce the gains from AI while likely denying consumers and producers better, cheaper alternatives.
The Working Group goes on to recommend that committees “Evaluate whether there is a need for best practices for the level of automation that is appropriate for a given type of task, considering the need to have a human in the loop at certain stages for some high impact tasks.” While the phrase “high impact tasks” suggests this would apply to only a few situations, the scope could also be much broader. For example, many automatable transportation tasks (like trucking and passenger transit) could require human oversight based on what is being transported. Such oversight may not produce greater safety, and could in fact be more dangerous, while continuing to divert people into non-productive, unnecessary jobs instead of new, value-creating ones.
Finally, the Working Group urges committees to “Develop a framework for determining when, or if, export controls should be placed on powerful AI systems” and
Work with the executive branch to support the free flow of information across borders, protect against the forced transfer of American technology, and promote open markets for digital goods exported by American creators and businesses through agreements that also allow countries to address concerns regarding security, privacy, surveillance, and competition.
Taken together, these two quotations suggest AI policy could pull the liberalization of US trade in opposite directions. Export controls on powerful AI systems would block that portion of AI trade outright and could invite retaliatory barriers worldwide that would stifle the sharing of AI technology. At the same time, the promotion of open markets and the free flow of information points toward greater liberalization of international trade, though the language about addressing various concerns implies that liberalization would be limited.
These two proposals also have large implications for the “splinternet,” a term coined to describe the balkanization of the internet into separate sections that do not share information with one another. A push to ensure the free flow of information would combat this trend, but divergence between the US and other nations over AI – driven both by export controls limiting AI’s spread and by differences in regulatory regimes – could instead deepen the splintering.
The Senate’s new AI roadmap presents a mixed bag on AI regulation. It contains an opportunity for real change, giving concerned citizens and organizations the ability to shape policy. However, policymakers should tread carefully and not limit the AI tools that will promote efficiency and improve Americans’ quality of life.