Johns Hopkins Center for Health Security responds to NIST RFI on Implementing the Artificial Intelligence Executive Order to Guard Against High-Consequence Bio Risks

Center News

February 6, 2024 – On February 2, the Johns Hopkins Center for Health Security provided input to the National Institute of Standards and Technology (NIST) on guidelines for reducing biological risks from artificial intelligence (AI) models, in response to a Request for Information (RFI). Through this RFI, NIST is gathering information to help fulfill its duties under the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, issued on October 30, 2023. The order directs NIST to assess AI technologies and create guidelines to ensure the secure and reliable deployment of AI systems.

Much has been written about how to craft such standards for issues like bias and interpretability, but the Center’s response focused on the biological threats posed by AI models. The response builds on a late November 2023 meeting the Center convened with leading AI industry representatives, executive branch officials, and biosecurity experts to discuss steps the US government and AI developers can take to mitigate potential risks from the convergence of AI and biotechnology, also known as AIxBio.

The Center’s RFI response focused on large language models (LLMs) and biological design tools (BDTs), outlining biosecurity considerations related to generative AI risk management, AI evaluation, and red teaming.

Given the short timeline NIST has to fulfill its duties under the Executive Order, the Center recommended that NIST focus first on mitigating critical types of high-consequence biological risks by:

  1. Prioritizing the development of guidelines and evaluations that will mitigate high-consequence biological risks.
  2. Developing evaluations that assess the extent to which AI systems increase user access to (a) genetic sequence data that provides the information needed to create novel pandemic-capable pathogens, and (b) computational tools, methods, or approaches that shorten the timeline for, lower the cost of, or reduce the sophistication required to synthesize or enhance pandemic-capable pathogens.
  3. Developing guidelines for AI developers, deployers, and other actors to recognize when and how to mitigate dangerous capabilities identified through AI evaluations, and to safely limit access to models with such capabilities.

The Center notes in the RFI that “NIST guidance and priority-setting is especially important because most leading AI groups have not publicly explained how they are or will approach biosecurity testing or prioritize among potential biosecurity threats.” Read the full RFI response.