Zero-shot performance analysis of large language models in sumrate maximization.

Journal: PLOS ONE

Abstract

Large language models (LLMs) have revolutionized natural language processing and are increasingly used as a one-stop solution for a wide range of tasks. In networking, LLMs can likewise play a major role in resource optimization and sharing. Sumrate maximization is a central objective in network resource optimization, but the optimal or sub-optimal algorithms it requires can be cumbersome to understand and implement. Leveraging the generative power of LLMs offers an attractive alternative, since it demands no prior algorithmic or programming knowledge. A zero-shot analysis of these models is therefore necessary to establish whether they are feasible for such tasks. Evaluated over different combinations of total cellular users and total D2D pairs, our empirical results suggest that the highest average efficiency these models achieve for sumrate maximization, relative to state-of-the-art approaches, is around 58%, obtained with GPT. The experiments also indicate that some large language model variants currently in use are unsuitable for numerical and structured data without fine-tuning their parameters.
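
For context, the objective the abstract refers to is conventionally written as follows; this is the standard sumrate formulation for a D2D underlay, stated here as a point of reference rather than the paper's exact system model. With C cellular users and D D2D pairs, and gamma denoting the SINR of a link, the goal is

    \max \; \sum_{c=1}^{C} \log_2\!\left(1 + \gamma_c\right) \;+\; \sum_{d=1}^{D} \log_2\!\left(1 + \gamma_d\right)

where the maximization runs over the resource assignments and transmit powers of the D2D pairs, subject to interference constraints at the cellular links.

A zero-shot query, as evaluated in the abstract, supplies the problem instance directly in the prompt and asks the model for an allocation without examples or fine-tuning. The sketch below illustrates the idea with the OpenAI Python SDK; the model name, prompt wording, and placeholder channel data are illustrative assumptions, not the authors' exact protocol.

    # Hypothetical zero-shot prompt for the sumrate task; the model name,
    # prompt wording, and parsing convention are illustrative assumptions,
    # not the authors' exact setup.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    num_cellular = 4   # C: total cellular users (example value)
    num_d2d = 3        # D: total D2D pairs (example value)

    prompt = (
        f"A cell has {num_cellular} cellular users and {num_d2d} D2D pairs "
        "sharing uplink resources. Given the channel gains below, assign "
        "each D2D pair a cellular user's resource block and a transmit "
        "power so that the total sumrate (sum of log2(1 + SINR) over all "
        "links) is maximized. Respond with one 'pair -> user, power' line "
        "per D2D pair.\n"
        "Channel gains: ..."  # numeric channel data would be appended here
    )

    response = client.chat.completions.create(
        model="gpt-4",   # assumed model; the paper reports results "using GPT"
        messages=[{"role": "user", "content": prompt}],
        temperature=0,   # deterministic decoding for reproducible comparisons
    )
    print(response.choices[0].message.content)

The returned allocation can then be scored with the sumrate expression above and compared against a state-of-the-art baseline, which is how an efficiency figure such as the reported 58% would be computed.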

Authors

  • Ali Abir Shuvro
    Department of Computer Science and Engineering, Islamic University of Technology, Gazipur, Dhaka, Bangladesh.
  • Md Shahriar Islam Bhuiyan
    Department of Computer Science and Engineering, Islamic University of Technology, Gazipur, Dhaka, Bangladesh.
  • Faisal Hussain
    Al-Khawarizmi Institute of Computer Science (KICS), University of Engineering & Technology (UET), 54890 Lahore, Pakistan. Electronic address: faisal.hussain.engr@gmail.com.
  • Md Sakhawat Hossen
    Department of Computer Science and Engineering, Islamic University of Technology, Gazipur, Dhaka, Bangladesh.