Promises and perils of using Transformer-based models for SE research.

Journal: Neural Networks: The Official Journal of the International Neural Network Society

Abstract

Many Transformer-based pre-trained models for code have been developed and applied to code-related tasks. In this paper, we analyze 519 papers on this topic published between 2017 and 2023, examine the suitability of model architectures for different tasks, summarize their resource consumption, and assess how well models generalize across datasets. We study three representative pre-trained models for code, CodeBERT, CodeGPT, and CodeT5, and conduct experiments on the four most frequently targeted software engineering tasks in the literature: Bug Fixing, Bug Detection, Code Summarization, and Code Search. We make four empirical contributions to the field. First, we demonstrate that encoder-only models (CodeBERT) can outperform encoder-decoder models on general-purpose coding tasks, and we showcase the capability of decoder-only models (CodeGPT) on certain generation tasks. Second, we study the most frequently used model-task combinations in the literature and find that less popular models can deliver higher performance. Third, we find that CodeBERT is efficient on understanding tasks, whereas CodeT5's efficiency on generation tasks is unreliable due to its high resource consumption. Fourth, we report poor model generalization on the most popular benchmarks and datasets for Bug Fixing and Code Summarization. We frame our contributions in terms of promises and perils, and document the numerous practical issues that must be addressed to advance future research on Transformer-based models for code-related tasks.
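To make the experimental setup concrete, the sketch below shows one way to load the three examined models with the Hugging Face transformers library. The checkpoint names (microsoft/codebert-base, microsoft/CodeGPT-small-py, Salesforce/codet5-base) are assumptions based on the commonly used public releases; the paper's exact checkpoints, fine-tuning procedures, and hyperparameters are not stated in the abstract.

    # Minimal sketch (not the authors' code): loading the three pre-trained models
    # examined in the paper. Checkpoint names are assumed public releases.
    from transformers import (
        AutoTokenizer,
        AutoModel,              # encoder-only (CodeBERT)
        AutoModelForCausalLM,   # decoder-only (CodeGPT)
        AutoModelForSeq2SeqLM,  # encoder-decoder (CodeT5)
    )

    # Encoder-only: contextual embeddings, suited to understanding tasks
    # such as Bug Detection and Code Search.
    codebert_tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
    codebert = AutoModel.from_pretrained("microsoft/codebert-base")

    # Decoder-only: autoregressive generation (e.g., code completion/generation).
    codegpt_tok = AutoTokenizer.from_pretrained("microsoft/CodeGPT-small-py")
    codegpt = AutoModelForCausalLM.from_pretrained("microsoft/CodeGPT-small-py")

    # Encoder-decoder: sequence-to-sequence tasks such as Bug Fixing and
    # Code Summarization.
    codet5_tok = AutoTokenizer.from_pretrained("Salesforce/codet5-base")
    codet5 = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5-base")

    # Illustrative use only: generate a short summary of a code snippet with CodeT5.
    # Task-specific fine-tuning would be needed to reproduce the paper's experiments.
    code = "def add(a, b):\n    return a + b"
    inputs = codet5_tok(code, return_tensors="pt")
    summary_ids = codet5.generate(**inputs, max_length=20)
    print(codet5_tok.decode(summary_ids[0], skip_special_tokens=True))

The three classes map directly onto the architecture families compared in the paper: encoder-only for understanding tasks, decoder-only for autoregressive generation, and encoder-decoder for sequence-to-sequence generation.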

Authors

  • Yan Xiao
    NHC Key Laboratory of Systems Biology of Pathogens and Christophe Merieux Laboratory, Institute of Pathogen Biology, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, P. R. China.
  • Xinyue Zuo
    National University of Singapore, Computing 1, 13 Computing Drive, 117417, Singapore.
  • Xiaoyue Lu
    Shenzhen Campus of Sun Yat-sen University, No. 66, Gongchang Road, Guangming District, Shenzhen, 518107, Guangdong, China.
  • Jin Song Dong
    National University of Singapore, Computing 1, 13 Computing Drive, 117417, Singapore.
  • Xiaochun Cao
    School of Computer Science and Technology, Tianjin University, Tianjin 300072, China; State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China.
  • Ivan Beschastnikh
    The University of British Columbia, Vancouver and Okanagan, BC, Canada.