Inference Time of a CamemBERT Deep Learning Model for Sentiment Analysis of COVID Vaccines on Twitter.

Journal: Studies in health technology and informatics
Published Date:

Abstract

In previous work, we implemented a deep learning model with CamemBERT and PyTorch, and built a microservices architecture using the TorchServe serving library. Without TorchServe, inference was three times faster when the model was loaded once into memory than when it was loaded for each request. The preloaded model without TorchServe showed inference time comparable to that of the TorchServe instance. However, using a PyTorch preloaded model in a web application without TorchServe would require implementing functionalities already present in TorchServe.

Authors

  • Guillaume Guerdoux
    Geegz, Paris, France.
  • Théophile Tiffet
    Unit of Public Health, University Hospital of Saint-Etienne, Saint-Etienne, France.
  • Cédric Bousquet
    Sorbonne Université, INSERM, Université Paris 13, LIMICS, Paris, France.