OMB Proposed Memorandum on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.

December 5, 2023

Docket ID: OMB–2023–0020

Re: Proposed Memorandum for the Heads of Executive Departments and Agencies: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.

To Shalanda D. Young, Director of the Office of Management and Budget:

The Competitive Enterprise Institute (CEI) is a non-profit public interest organization committed to advancing the principles of free markets and limited government. CEI has a longstanding interest in applying these principles to the rulemaking process and has frequently commented on issues related to oversight of rulemaking and the regulatory process. On behalf of CEI, I am pleased to provide comments to the Office of Management and Budget (OMB) on its proposed draft memorandum titled “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.”

Background and Recommendations

The draft memorandum lays out proposed requirements and recommendations for federal agencies to govern, innovate with, and manage risks from artificial intelligence (AI), particularly where AI impacts rights and safety. It calls for agencies to designate Chief AI Officers, develop AI strategies, and implement minimum practices for rights-impacting and safety-impacting AI systems by August 2024. These practices include conducting AI impact assessments, collecting representative data, imposing non-discrimination safeguards, gathering stakeholder feedback, and implementing ongoing monitoring and transparency measures. While the draft memo takes some proactive steps in the direction of responsible and safe governmental use of AI, OMB can further improve its approach to AI risk management by more formally incorporating risk analysis methods into agencies' implementation plans. As explained in the attached book chapter, risk analysis is a powerful tool for comparing a policy or regulation's target risk reductions to the countervailing risk increases that stem from the opportunity costs of those same policy actions. Opportunity costs apply to essentially any regulation that entails a positive cost.

The chapter, written by Salisbury University Professor Dustin Chambers and me, focuses on mortality risks specifically and outlines a method by which analysts can conduct robust risk analysis in ten steps. While the chapter focuses on policies in the state of Wisconsin, the lessons are relevant to any jurisdiction.

The ten steps of robust mortality risk analysis are:

  1. Evaluate if the policy saves lives directly.
  2. Estimate the number of lives saved, if applicable.
  3. Calculate the accounting costs over time.
  4. Identify any corresponding cost savings, to calculate net accounting cost.
  5. Determine the opportunity cost, based on what would have been consumed vs. invested from the accounting costs.
  6. Calculate the net present value of costs net of cost savings.
  7. Calculate the cost-effectiveness as the net present value of costs, divided by lives saved.
  8. Compare the cost-per-life-saved to the “value of an induced death” to see if the policy increases or reduces mortality risk immediately.
  9. Determine if the policy will increase risk at some point in the future.
  10. Make a table tracking costs, lives saved, and risk over time, given risk reversals are common.
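To make the arithmetic behind these steps concrete, the following is a minimal, illustrative sketch of steps 2 through 8. All figures, the discount rate, the opportunity cost adjustment, and the "value of an induced death" threshold are hypothetical assumptions chosen for illustration; none are taken from the attached chapter.

```python
# Illustrative sketch of steps 2-8 of the mortality risk analysis.
# All numbers are hypothetical; the discount rate, the opportunity
# cost adjustment, and the "value of an induced death" (VID)
# threshold are assumptions, not figures from the attached chapter.

def npv(flows, rate):
    """Net present value of a list of annual cash flows (year 0 first)."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

# Steps 3-4: annual accounting costs net of cost savings (dollars)
annual_net_costs = [10_000_000, 8_000_000, 8_000_000, 8_000_000]

# Step 5: opportunity cost adjustment. Assume half of each year's cost
# displaces private investment, valued at an assumed shadow price.
investment_share = 0.5
opportunity_multiplier = 1.2  # assumed shadow price of displaced investment
adjusted_costs = [
    c * (1 - investment_share) + c * investment_share * opportunity_multiplier
    for c in annual_net_costs
]

# Step 6: net present value of adjusted costs at an assumed 3% rate
total_cost = npv(adjusted_costs, rate=0.03)

# Steps 2 and 7: cost-effectiveness = NPV of costs / lives saved
lives_saved = 3  # hypothetical estimate
cost_per_life_saved = total_cost / lives_saved

# Step 8: compare to the assumed VID threshold; spending above this
# per life saved is estimated to induce more deaths than it prevents.
VID = 100_000_000  # hypothetical threshold in dollars
increases_mortality_risk = cost_per_life_saved > VID

print(f"Cost per life saved: ${cost_per_life_saved:,.0f}")
print(f"Policy increases net mortality risk: {increases_mortality_risk}")
```

Steps 9 and 10 would extend this by recomputing the comparison year by year, since a policy that reduces risk on net today can become risk-increasing as costs accumulate.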

This approach allows policymakers to quantify mortality impacts and identify regulations or other policies that may increase overall mortality risk. By incorporating this analysis into AI strategies and impact assessments, agencies can better evaluate the tradeoffs they face and demonstrate whether their AI systems are net beneficial, that is, net risk reducing.

Following these recommendations is a practical way to implement OMB's requirements for AI impact assessments and to comply with standards for AI risk management, such as those outlined by the National Institute of Standards and Technology. While these documents include some commonly accepted principles, they leave many of the specifics of how risks are to be analyzed unexplained. The inputs needed for this kind of analysis are modest: just the costs and the estimated lives saved from the policy. While other risks unrelated to mortality can no doubt arise, few would argue against mortality risks being among the most important risks affected by government actions. Importantly, however, sound risk analysis can only be conducted if the opportunity cost of government policies is properly considered.

Conclusion

I urge OMB to direct agencies to consult the attached book chapter as they develop implementation plans for the memorandum. Few resources offer instructions for AI impact assessments as specific and easy to follow as this chapter.

With personnel dedicated to the task, agencies could integrate mortality risk analysis into their compliance with the proposed draft memorandum without much difficulty. Doing so would significantly strengthen OMB's approach to ensuring AI safety across government.

I appreciate your consideration of these comments.

Sincerely,

James Broughel, PhD

Senior Fellow, Competitive Enterprise Institute