Artificial intelligence technology seems poised to transform our lives in both beneficial and detrimental ways. A new Competitive Enterprise Institute report offers a thoughtful, fact-driven framework for future regulation of AI aimed at heading off a fear-driven rush to regulate that would diminish potential gains.
“The benefits of AI development are immense, with the potential to reshape sectors of our economy and society in countless helpful ways,” said James Broughel, author of the report, Rules for Robots: A Framework for Governance of AI.
“But as with other innovations throughout human history, AI poses risks, such as with data security, privacy, discrimination, and job loss,” Broughel continued. “Fear and uncertainty also present a danger – of hasty, poorly planned regulation that creates more problems than it solves. We must have a thoughtful framework for deciding on whether and how to regulate all the many uses of AI going forward.”
Broughel proposes a set of conditions that policymakers should require proponents of regulation to meet before new rules are adopted. Proponents of regulation should:
- Demonstrate that a systemic problem exists and that it is not going away on its own;
- Propose solutions that are likely to achieve the desired policy outcome and that are neither overly broad nor applied indiscriminately across industries and applications;
- Consider alternatives to regulation, including market-based solutions and deregulation.
In the report, Broughel walks through the main benefits and risks AI poses in a few key sectors, such as medicine, transportation, and national defense, showing that each AI application presents a unique set of circumstances that policymakers must take into account. Overly prescriptive regulation should be avoided; where regulation is necessary, performance-based rules that encourage private-sector problem-solving are preferable.
View the report, Rules for Robots: A Framework for Governance of AI, by James Broughel.