Accelerated quadratic penalty dynamic approaches with applications to distributed optimization.
Journal:
Neural Networks: the official journal of the International Neural Network Society
PMID:
39709646
Abstract
In this paper, we explore accelerated continuous-time dynamic approaches with vanishing damping α/t, driven by a quadratic penalty function designed for linearly constrained convex optimization problems. We replace the linear constraints with penalty terms incorporated into the objective function, where the penalty coefficient grows to +∞ as t tends to infinity. With appropriately chosen penalty coefficients, we establish convergence rates of O(1/t) for both the objective residual and the feasibility violation when α>0, and demonstrate that these convergence rates are robust to external perturbations. Furthermore, we apply the proposed dynamic approach to three distributed optimization problems: a distributed constrained consensus problem, a distributed extended monotropic optimization problem, and a distributed optimization problem with separated equations, resulting in three variant distributed dynamic approaches. Numerical examples are provided to demonstrate the effectiveness of the proposed quadratic penalty dynamic approaches.
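For orientation, the following is a minimal sketch of how an inertial dynamic with vanishing damping α/t driven by a quadratic penalty is typically written, assuming the generic problem min_x f(x) subject to Ax = b with a penalty coefficient β(t) → +∞. The symbols f, A, b, β(t), and t_0 below are illustrative assumptions and need not match the exact system analyzed in the paper.

% Sketch only: the gradient of the penalized objective f(x) + (beta(t)/2)||Ax - b||^2
% drives a second-order dynamic with vanishing damping alpha/t; the precise
% system studied in the paper may differ.
\[
  \ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t)
  + \nabla f\bigl(x(t)\bigr)
  + \beta(t)\, A^{\top}\bigl(A x(t) - b\bigr) = 0,
  \qquad t \ge t_0 > 0.
\]

In sketches of this kind, letting β(t) grow to +∞ forces the trajectory toward feasibility (Ax = b) while the vanishing damping α/t provides the acceleration effect familiar from Nesterov-type inertial dynamics.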