Language model-based B cell receptor sequence embeddings can effectively encode receptor specificity.

Journal: Nucleic Acids Research
PMID:

Abstract

High-throughput sequencing of B cell receptors (BCRs) is increasingly applied to study the immense diversity of antibodies. Learning biologically meaningful embeddings of BCR sequences is beneficial for predictive modeling. Several embedding methods have been developed for BCRs, but no direct performance benchmarking exists. Moreover, the impact of input sequence length and paired-chain information on prediction performance remains to be explored. We evaluated the performance of multiple embedding models in predicting BCR sequence properties and receptor specificity. Despite differences in model architecture, most embeddings effectively capture BCR sequence properties and specificity. BCR-specific embeddings slightly outperform general protein language models in predicting specificity. In addition, incorporating full-length heavy chains and paired light-chain sequences improves the prediction performance of all embeddings. This study provides insights into the properties of BCR embeddings that can improve downstream prediction applications for antibody analysis and discovery.
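The abstract describes the general workflow of embedding BCR sequences with a protein language model and using the embeddings as input features for specificity prediction. As a rough illustration of that pattern only (not the benchmark pipeline used in the study), the sketch below embeds two placeholder heavy-chain amino acid sequences with a general protein language model (ESM-2 via the fair-esm package, chosen here as an example) and mean-pools the per-residue representations into fixed-length vectors that a downstream classifier could consume. The model choice, sequences, and pooling strategy are illustrative assumptions.

    # Minimal sketch (not the authors' pipeline): embed BCR heavy-chain sequences
    # with a general protein language model and pool them into fixed-length vectors.
    # Requires: pip install fair-esm torch
    import torch
    import esm

    # Load a pretrained ESM-2 checkpoint; fair-esm provides several sizes.
    model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
    batch_converter = alphabet.get_batch_converter()
    model.eval()

    # Placeholder heavy-chain variable-region fragments; how full-length and
    # paired light-chain sequences are combined is a per-method design choice.
    sequences = [
        ("bcr_1", "EVQLVESGGGLVQPGGSLRLSCAASGFTFSSYAMSWVRQAPGKGLEWVS"),
        ("bcr_2", "QVQLQQSGAELARPGASVKMSCKASGYTFTSYTMHWVKQRPGQGLEWIG"),
    ]

    labels, strs, tokens = batch_converter(sequences)
    with torch.no_grad():
        out = model(tokens, repr_layers=[33], return_contacts=False)
    token_reps = out["representations"][33]  # shape: (batch, tokens, 1280)

    # Mean-pool over residues, skipping the BOS token and padding/EOS positions.
    embeddings = torch.stack([
        token_reps[i, 1 : len(seq) + 1].mean(dim=0)
        for i, (_, seq) in enumerate(sequences)
    ])
    print(embeddings.shape)  # (2, 1280)

The resulting per-sequence vectors would then serve as features for any standard classifier (e.g. logistic regression in scikit-learn) trained to predict receptor specificity or other sequence properties.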

Authors

  • Meng Wang
    State Key Laboratory of Urban Water Resource and Environment, School of Environment, Harbin Institute of Technology, Harbin 150001, China.
  • Jonathan Patsenker
Department of Pathology, Yale School of Medicine, New Haven, CT, USA.
  • Henry Li
    Program in Applied Mathematics, Yale University, New Haven, CT, USA.
  • Yuval Kluger
    Department of Pathology, Yale School of Medicine, New Haven, CT 06510, USA.
  • Steven H Kleinstein
    Department of Pathology, Yale School of Medicine, New Haven, CT, USA. steven.kleinstein@yale.edu.