“Pandemic drones,” the tracing of individuals, and other measures involving artificial intelligence (AI) and implicating privacy are in the news as governments manage individuals’ reentry into the economy in the wake of the coronavirus outbreak.
As it happens, I filed a comment on AI and machine learning governance with the White House Office of Management and Budget on this issue in February, before the crisis broke out. The COVID-19 outbreak raises relevant concerns about new authority potentially being handed to agencies whose agendas may not precisely align with individual rights or privacy.
The regulatory filing has been available on the federal government’s official regulations.gov website and is now also available as a working paper on the Social Science Research Network and ResearchGate.
The full title is a mouthful, given that it’s a filing in response to a government guidance document:
Artificial Intelligence Will Merely Kill Us, Not Take Our Jobs: Working Paper and Comments of the Competitive Enterprise Institute to the Executive Office of the President, Office of Management and Budget on the Request for Comments on a Draft Memorandum to the Heads of Executive Departments and Agencies, “Guidance for Regulation of Artificial Intelligence Applications.”
Here’s the abstract:
A prominent recent development in governance of artificial intelligence is the White House Office of Management and Budget’s 2020 Guidance for Regulation of Artificial Intelligence Applications (request for comments at Federal Register Doc. 2020–00261, Filed 1–10–20), directed at heads of federal executive branch agencies, in fulfillment of Trump E.O. 13859 and building upon it.
While the directive strikes the right tone in some instances, it opens the door to federal agencies too widely in terms of granting them new oversight authority. Top-down federal funding and governance of scientific and technology research will be increasingly incompatible with a future of lightly regulated science and technology specifically, and with limited government generally.
First, the guidance shows a lack of concern that a key driver of investment in (and in turn the trajectory of) the AI sector is military and intelligence applications. Not to be flip about it, but when it comes to robotics and the military, the worry is that Isaac Asimov’s famous (but of course fictional) Laws of Robotics, devised to forbid harm to humans, are programmed out, not in. This is part of what makes the fusion of big government and private AI deployment problematic at the outset.
Second, the guidance shows insufficient appreciation of the extent to which the broader regulatory bureaucracy at which it is aimed holds a vision of “rule of law” entirely at odds with a more classical liberal frame of reference. Today’s administrative state has its own set of value pursuits and its own visions of what counts as a cost and what counts as a benefit, and of the sources of each.
Make no mistake: the new AI guidance constitutes a set of regulatory principles rather than laissez-faire, property rights-based principles, especially as less market-oriented administrations assume the helm and the agencies they direct interpret it.
For example, the White House guidance invites agencies to “consider whether a change in regulatory policy is needed due to the adoption of AI applications in an already regulated industry, or due to the development of substantially new industries facilitated by AI.” But this ignores the simple reality that, unless externally restrained, a regulatory bureaucracy’s inclination is to answer the question, “Is there call for regulation?” in the affirmative. This guidance does not restrain that impulse but adds to it. For example:
- Agencies are instructed to produce “Statutory Authorities Directing or Authorizing Agency Regulation of AI Applications” and to “List and describe any statutes that direct or authorize your agency to issue regulations specifically on the development and use of AI applications.” But no statutory definition of AI even existed when any such supposed predicates were enacted.
- The guidance is favorable toward antitrust regulation and, like all such sentiments, ignores government’s own larger-than-monopoly influence.
- The guidance invites social policy regulatory experimentation.
- The guidance may undermine simultaneous reforms from the Trump administration to restrain the overuse by agencies of this very sort of sub-regulatory guidance to influence the private sector.
Given public policy fears, real and imagined, over artificial intelligence, the field is ripe for political predation. While big science need not entail big government, the alignment of forces currently in play implies that it likely will. Relatedly, the fusion of public and private databases, amplified by AI, presents new concerns in the management of COVID-19.