RLSynC: Offline-Online Reinforcement Learning for Synthon Completion.

Journal: Journal of Chemical Information and Modeling

Abstract

Retrosynthesis is the process of determining the set of reactant molecules that can react to form a desired product. Semi-template-based retrosynthesis methods, which imitate the reverse logic of synthesis reactions, first predict the reaction centers in the products and then complete the resulting synthons back into reactants. We develop a new offline-online reinforcement learning method, RLSynC, for synthon completion in semi-template-based methods. RLSynC assigns one agent to each synthon, and all agents complete their synthons by taking actions step by step in a synchronized fashion. RLSynC learns its policy from both offline training episodes and online interactions, which allows it to explore new reaction spaces. RLSynC uses a standalone forward synthesis model to evaluate how likely the predicted reactants are to synthesize the product, and thus to guide the action search. Our results demonstrate that RLSynC can outperform state-of-the-art synthon completion methods with improvements as high as 14.9%, highlighting its potential in synthesis planning.
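The abstract's core loop (one agent per synthon, synchronized stepwise actions, and a terminal reward from a standalone forward synthesis model) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the action set, the `SynthonAgent` class, the `policy` callable, and the toy `forward_synthesis_score` heuristic are all hypothetical stand-ins.

```python
# Hypothetical sketch of a synchronized multi-agent synthon-completion
# rollout in the style described by the abstract. All names are
# illustrative assumptions, not RLSynC's actual API.

ACTIONS = ["add_H", "add_CH3", "add_OH", "stop"]

def forward_synthesis_score(reactants, product):
    # Stand-in for the standalone forward synthesis model: a toy
    # heuristic scoring how much of the product the reactants cover.
    covered = sum(len(r) for r in reactants)
    return min(1.0, covered / max(1, len(product)))

class SynthonAgent:
    """One agent per synthon; grows its synthon string action by action."""
    def __init__(self, synthon):
        self.state = synthon
        self.done = False

    def step(self, action):
        if self.done or action == "stop":
            self.done = True
        else:
            # Toy attachments standing in for real chemical edits.
            self.state += {"add_H": "", "add_CH3": "C", "add_OH": "O"}[action]

def synchronized_rollout(synthons, policy, product, max_steps=5):
    """All agents act in lockstep; the forward model scores the result."""
    agents = [SynthonAgent(s) for s in synthons]
    for t in range(max_steps):
        for agent in agents:
            if not agent.done:
                agent.step(policy(agent.state, t))
        if all(a.done for a in agents):
            break
    reactants = [a.state for a in agents]
    # Terminal reward guides the policy's action search.
    return reactants, forward_synthesis_score(reactants, product)
```

For example, a fixed policy that attaches one hydroxyl group and then stops would complete the synthons `["CC", "CN"]` into `["CCO", "CNO"]` and receive the forward model's score as reward; in RLSynC such rollouts would come both from an offline dataset and from online interaction.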

Authors

  • Frazier N Baker
    Department of Computer Science and Engineering, College of Engineering, The Ohio State University, Columbus, Ohio 43210, United States.
  • Ziqi Chen
    Shanghai Key Laboratory of New Drug Design, School of Pharmacy, East China University of Science & Technology, Shanghai, 200237, China.
  • Daniel Adu-Ampratwum
    Division of Medicinal Chemistry and Pharmacognosy, College of Pharmacy, The Ohio State University, Columbus, Ohio 43210, United States.
  • Xia Ning
    Department of Biomedical Informatics, Department of Computer Science and Engineering, and Translational Data Analytics Institute, The Ohio State University, Columbus, Ohio 43210, United States.