CEI Comments on NIST’s Request for Information Regarding Security Considerations for Artificial Intelligence Agents

RE: Request for Information Regarding Security Considerations for Artificial Intelligence Agents

Docket No.: NIST-2025-0035

The Competitive Enterprise Institute (CEI) appreciates the opportunity to comment on the National Institute of Standards and Technology’s (NIST) Request for Information (RFI) Regarding Security Considerations for Artificial Intelligence Agents. CEI is a non-profit research and advocacy organization that focuses on regulatory policy from a free-market perspective. These comments represent the views of CEI as an organization; they do not purport to represent the views of any individual employee or of any donor.

NIST’s RFI rightly recognizes that AI safety and AI security are inextricably linked. By looking beyond traditional cybersecurity concerns, NIST acknowledges that AI security is a spectrum that ranges from conventional software fixes to complex behavioral safeguards. Consequently, NIST should ensure that its safety guidelines are harmonized with the enforcement priorities of other federal agencies.

Antitrust Guidelines for Collaborations Among Competitors

Regarding the questions under section 5, “Additional Considerations,” it is imperative that NIST prioritize interagency alignment with the Department of Justice (DOJ) and Federal Trade Commission (FTC) in updating guidance that affects collaboration on cybersecurity issues involving AI. This applies most urgently to the application of antitrust law to competitor collaborations and information sharing, where the chilling effect of potential enforcement may discourage collective action to address systemic AI agent vulnerabilities.

On February 23, 2026, the FTC and DOJ launched a joint public inquiry to consider new Antitrust Guidelines for Collaboration Among Competitors (Collaboration Guidelines); comments are due April 24, 2026. Companies have been left “grasping in the dark” since the antitrust agencies withdrew the 2000 Collaboration Guidelines on December 11, 2024, but the DOJ and FTC now have a tremendous opportunity to provide clarity for technology companies engaged in the development and deployment of AI systems and the infrastructure that powers AI agents.

NIST should encourage the FTC and DOJ to consider an antitrust safe harbor for AI safety and security collaborations, because “[f]rontier-model developers would be more likely to collaborate usefully on AI safety and security work if such collaboration were more clearly allowed under the antitrust rules.”

2014 Antitrust Policy Statement on Sharing of Cybersecurity Information

NIST should also work with the FTC and DOJ to update their 2014 Antitrust Policy Statement on Sharing of Cybersecurity Information (2014 Policy Statement). The Policy Statement provides a foundational “rule of reason” framework, making clear that sharing technical cyber threat information is generally procompetitive and unlikely to raise antitrust concerns. While the 2014 Policy Statement was wisely omitted from the 2024 withdrawal of the prior Collaboration Guidelines, the NIST RFI regarding AI agent systems introduces novel risks that require an evolution of this policy.

The 2014 Policy Statement needs a more modern definition of “cyber threat information.” Presently, the Statement defines threat information primarily through technical indicators such as file hashes, malicious URLs, and threat signatures. The definition should be expanded to include the AI-specific vulnerabilities identified in the RFI, such as indirect prompt injection patterns and adversarial training-data poisoning signatures.

Regulatory Sandboxes

As part of NIST’s facilitation of AI safety and security collaboration, the agency should champion regulatory sandboxes to help resolve broader regulatory uncertainty. The Cybersecurity National Lab’s 2025 Regulatory Sandboxes for AI and Cybersecurity outlines how these environments are critical for addressing the “pace problem” of AI regulation. As Kristian Stout, Director of Innovation Policy at the International Center for Law & Economics, explained in prior comments to the National Telecommunications and Information Administration:

AI regulators are charged with overseeing a dynamic and rapidly developing market, and should therefore avoid erecting a rigid framework that forces new innovations into ill-fitting categories. The “regulatory sandbox” may provide a better model to balance innovation with risk management. By allowing developers to test and refine AI technologies in a controlled environment under regulatory oversight, sandboxes can be used to help identify and address potential issues before wider deployment, all while facilitating dialogue between innovators and regulators. This approach not only accelerates the development of safe and ethical AI solutions, but also builds mutual understanding and trust. Where possible, NTIA should facilitate policy experimentation with regulatory sandboxes in the AI context.

Regulatory sandboxes serve as a “hybrid governance tool” for emerging technologies that can provide a controlled framework to develop, train, validate, and test new products under regulatory supervision. These could provide the types of modified environments needed for implementing the undos, rollbacks, and negations mentioned in question 4(b) of NIST’s RFI.

Respectfully submitted,

Alex R. Reinauer

Research Fellow

Competitive Enterprise Institute

[email protected]