Dual-use capabilities of concern of biological AI models.

Journal: PLoS Computational Biology
PMID:

Abstract

As a result of rapidly accelerating artificial intelligence (AI) capabilities, multiple national governments and multinational bodies have launched efforts to address safety, security, and ethics issues related to AI models. One high priority among these efforts is the mitigation of misuse of AI models, such as for the development of chemical, biological, radiological, or nuclear (CBRN) threats. Many biologists have for decades sought to reduce the risks of scientific research that could lead, through accident or misuse, to high-consequence disease outbreaks. Scientists have carefully considered which types of life sciences research have the potential for both benefit and risk (dual use), especially as scientific advances have accelerated our ability to engineer organisms. Here we describe how scientists' and policy professionals' previous experience with and study of dual-use research in the life sciences can inform the assessment of dual-use capabilities of AI models trained on biological data. Of these dual-use capabilities, we argue that AI model evaluations should prioritize those which enable high-consequence risks (i.e., large-scale harm to the public, such as transmissible disease outbreaks that could develop into pandemics), and that these risks should be evaluated prior to model deployment to allow the implementation of potential biosafety and/or biosecurity measures. While biological research is on balance immensely beneficial, it is well recognized that some biological information or technologies could be intentionally or inadvertently misused to cause consequential harm to the public. AI-enabled life sciences research is no different. Scientists' historical experience with identifying and mitigating dual-use biological risks can thus help inform new approaches to evaluating biological AI models.
Identifying which AI capabilities pose the greatest biosecurity and biosafety concerns is necessary in order to establish targeted AI safety evaluation methods, secure these tools against accident and misuse, and avoid impeding immense potential benefits.

Authors

  • Jaspreet Pannu
    Center for Health Security, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, Maryland, United States of America.
  • Doni Bloomfield
    School of Law, Fordham University, New York, New York, United States of America.
  • Robert MacKnight
    Department of Chemical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America.
  • Moritz S Hanke
    Center for Health Security, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, Maryland, United States of America.
  • Alex Zhu
    Center for Health Security, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, Maryland, United States of America.
  • Gabe Gomes
    Department of Chemical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America.
  • Anita Cicero
    Center for Health Security, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, Maryland, United States of America.
  • Thomas V Inglesby
    Center for Health Security, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, Maryland, United States of America.