Optimizing lightweight neural networks for efficient mobile edge computing.

Journal: Scientific Reports
Published Date:

Abstract

In the era of rapid technological advancement, Mobile Edge Computing (MEC) has become essential for supporting latency-sensitive applications such as the Internet of Things (IoT), autonomous driving, and smart cities. However, efficient resource allocation remains a challenge due to the dynamic nature of MEC environments. The primary difficulties stem from fluctuating workloads, varying network conditions, and heterogeneous computational capabilities, which make real-time task offloading and resource management complex. Traditional centralized approaches suffer from high computational overhead and poor scalability, while conventional machine learning-based methods often require extensive labeled data and fail to adapt quickly in dynamic settings. To address these issues, this study proposes an advanced Multi-Agent Reinforcement Learning (MARL) framework combined with a lightweight neural network, LtNet, to optimize task offloading and resource management in MEC. MARL enables decentralized decision-making, allowing each device to learn an optimal offloading strategy and adapt dynamically. Compared with prior single-agent or heuristic methods, the proposed approach improves scalability and efficiency while reducing computational complexity. LtNet further enhances performance through the H-Swish activation function and selective Squeeze-and-Excitation modules, which keep computational overhead low. Experimental results demonstrate that the proposed methods achieve a 12-22% reduction in task completion time, a 5-8% decrease in energy consumption, and consistently high resource utilization, making them highly effective for managing dynamic MEC environments.
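To make the LtNet ingredients named in the abstract concrete, the sketch below shows a minimal PyTorch building block that combines the H-Swish activation with an optional Squeeze-and-Excitation (SE) stage on top of a depthwise-separable convolution. The abstract does not specify the actual LtNet architecture, so the block structure, layer sizes, SE reduction ratio, and the use of PyTorch are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch only: H-Swish + optional Squeeze-and-Excitation in a
# depthwise-separable block. Layer sizes and structure are assumptions, not
# the LtNet architecture from the paper.
import torch
import torch.nn as nn


class HSwish(nn.Module):
    """H-Swish: x * ReLU6(x + 3) / 6, a cheap piecewise approximation of Swish."""
    def forward(self, x):
        return x * nn.functional.relu6(x + 3.0) / 6.0


class SqueezeExcite(nn.Module):
    """Squeeze-and-Excitation: reweight channels using globally pooled statistics."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # excitation: per-channel gates in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        gates = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * gates  # rescale feature maps channel-wise


class LightweightBlock(nn.Module):
    """Depthwise-separable convolution with H-Swish and an optional SE stage."""
    def __init__(self, in_ch, out_ch, use_se=True):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = HSwish()
        self.se = SqueezeExcite(out_ch) if use_se else nn.Identity()

    def forward(self, x):
        x = self.act(self.bn(self.pointwise(self.depthwise(x))))
        return self.se(x)


# Usage example: one block applied to a small feature map.
features = torch.randn(2, 16, 32, 32)
out = LightweightBlock(16, 32, use_se=True)(features)
print(out.shape)  # torch.Size([2, 32, 32, 32])
```

Applying SE only to selected blocks (the abstract's "selective" modules) is a common way to trade a small accuracy gain for almost no extra latency, which fits the paper's stated goal of low computational overhead on edge devices.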

Authors

  • Liu Liu
    Department of Oral and Maxillofacial Radiology, School of Dentistry, Dental Science Research Institute, Chonnam National University, Gwangju, South Korea.
  • Zhifei Xu
