NIST AI Guidelines Misplace Responsibility For Managing Risks


Policymakers are scrambling to keep pace with technological advancements in artificial intelligence. The recent release of draft guidelines from the U.S. AI Safety Institute, a newly created office within the National Institute of Standards and Technology (NIST), is the latest example of government struggling to keep up. As with so many policies emerging from President Biden’s 2023 Executive Order on AI, the government cure may be worse than the AI disease.

NIST is a well-respected agency known for setting standards across a variety of industries. In its document, “Managing misuse risks in dual-use foundation models,” the agency has proposed a set of seven objectives for managing AI misuse risks. These range from anticipating potential misuse to ensuring transparency in risk management practices. While technically non-binding, NIST guidelines can find their way into binding legislation. For instance, California’s SB 1047 AI legislation references NIST standards, and other states are likely to follow suit.

This is problematic because the proposed guidelines have significant shortcomings that should be addressed before the document is finalized. A primary concern is the guidelines’ narrow focus on the initial developers of foundation models, which seemingly overlooks the roles of downstream developers, deployers, and users in managing risks.

This approach places an enormous burden on model developers to anticipate and, where possible, mitigate every conceivable risk. The guidelines themselves acknowledge the difficulty of this task in the “challenges” section.

The proposed risk measurement framework asks developers to create detailed threat profiles for different actors, estimate the scale and frequency of potential misuse, and assess impacts. These are tasks that even national security agencies struggle to perform effectively. Requiring this level of analysis for each model iteration could significantly slow AI development and deployment.

The danger is that these risk analyses will become a lever that regulators use to impose an overly cautious approach to AI development and innovation. We’ve seen similar precautionary logic embedded in environmental policy, such as the National Environmental Policy Act, which has often hindered economic growth and progress.

The guidelines seem to overlook the distributed nature of risk management in AI ecosystems. Different risks are best addressed by different actors at various stages of the AI lifecycle. Some risks can be mitigated by model developers, others by end-users or intermediary companies integrating AI into their products. In some cases, ex-post legal liability regimes might provide the most effective incentives for responsible AI use.

Read more at Forbes