Mitigating epidemic spread in complex networks based on deep reinforcement learning.

Journal: Chaos (Woodbury, N.Y.)
PMID:

Abstract

Complex networks are susceptible to contagious cascades, underscoring the urgency of effective epidemic mitigation strategies. While physical quarantine is a proven mitigation measure, it can lead to substantial economic repercussions if not managed properly. This study presents an innovative approach to selecting quarantine targets within complex networks, aiming for an efficient and economical epidemic response. We model epidemic spread in complex networks as a Markov chain, accounting for stochastic state transitions and node quarantines. We then leverage deep reinforcement learning (DRL) to design a quarantine strategy that minimizes both infection rates and quarantine costs through a sequence of strategic node quarantines. Our DRL agent is trained with the proximal policy optimization (PPO) algorithm to optimize these dual objectives. Through simulations on both synthetic small-world and real-world community networks, we demonstrate the efficacy of our strategy in controlling epidemics. Notably, we observe a non-linear pattern in the mitigation effect as the daily maximum quarantine scale increases: the mitigation rate is most pronounced at first but plateaus after a critical threshold is reached. This insight is crucial for setting the most effective epidemic mitigation parameters.
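
To make the setup concrete, the sketch below illustrates the kind of Markov-chain quarantine environment the abstract describes: a synthetic small-world graph, stochastic infection and recovery transitions, a daily quarantine budget (the "daily maximum quarantine scale"), and a reward that penalizes both infections and quarantine cost. All parameters (BETA, GAMMA, COST, K) and the degree-based stand-in policy are illustrative assumptions, not the authors' settings; in the paper the quarantine policy is learned with PPO rather than the heuristic used here.

    import random
    import networkx as nx

    # Illustrative parameters (not from the paper): infection probability BETA,
    # recovery probability GAMMA, per-node quarantine cost COST, and the daily
    # maximum quarantine scale K.
    BETA, GAMMA, COST, K = 0.2, 0.1, 0.5, 5

    def step(G, infected, quarantined, action):
        """One Markov-chain transition: quarantine chosen nodes, then spread."""
        quarantined |= set(action[:K])  # enforce the daily quarantine budget
        new_infected = set()
        for u in infected:
            if u in quarantined:
                continue  # quarantined nodes cannot transmit
            for v in G.neighbors(u):
                if v not in infected and v not in quarantined and random.random() < BETA:
                    new_infected.add(v)
        recovered = {u for u in infected if random.random() < GAMMA}
        infected = (infected | new_infected) - recovered
        # Dual objective: penalize remaining infections and quarantine cost.
        reward = -(len(infected) + COST * len(action[:K]))
        return infected, quarantined, reward

    # Tiny rollout on a synthetic small-world network.
    G = nx.watts_strogatz_graph(200, 6, 0.1)
    infected = set(random.sample(list(G.nodes), 5))
    quarantined, total = set(), 0.0
    for _ in range(50):
        # Stand-in policy: quarantine the highest-degree infected nodes.
        action = sorted(infected - quarantined, key=G.degree, reverse=True)
        infected, quarantined, r = step(G, infected, quarantined, action)
        total += r
    print(f"final infected: {len(infected)}, return: {total:.1f}")

In the paper's approach, a PPO-trained agent would replace the degree heuristic, observing the epidemic state of the network and learning which sequence of node quarantines maximizes this kind of return.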

Authors

  • Jie Yang
    Key Laboratory of Development and Maternal and Child Diseases of Sichuan Province, Department of Pediatrics, Sichuan University, Chengdu, China.
  • Wenshuang Liu
    School of Automation, Beijing Institute of Technology, Beijing 100081, China.
  • Xi Zhang
    The First Clinical Medical College, Guangxi University of Chinese Medicine, Nanning 530001, China.
  • Choujun Zhan
    School of Computer, South China Normal University, Guangzhou 510631, China. Electronic address: zchoujun2@gmail.com.