14.06.2019 | Original Paper | Issue 1/2020

Regularized dual gradient distributed method for constrained convex optimization over unbalanced directed graphs
- Journal: Numerical Algorithms > Issue 1/2020
Abstract
This paper investigates a distributed optimization problem over a cooperative multi-agent time-varying network, where each agent has its own decision variables that should be set so as to minimize its individual objective subject to global coupled constraints. Based on the push-sum protocol and dual decomposition, we design a regularized dual gradient distributed algorithm to solve this problem; the algorithm can be implemented over unbalanced time-varying directed graphs, requiring only column stochasticity of the communication matrices. By augmenting the corresponding Lagrangian function with a quadratic regularization term, we first obtain a bound on the Lagrange multipliers that, in contrast to most primal-dual based methods, does not require constructing a compact set containing the dual optimal set. We then show that the proposed method achieves a convergence rate of order \(\mathcal {O}(\ln T/T)\) for strongly convex objective functions, where T is the number of iterations. Moreover, an explicit bound on the constraint violation is given. Finally, numerical results on the network utility maximization problem demonstrate the efficiency of the proposed algorithm.
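The sketch below illustrates the two ingredients named in the abstract, the push-sum protocol over column-stochastic weight matrices and a regularized dual gradient step, on a toy problem of our own choosing. It is not the paper's algorithm: the local objectives, the single coupled constraint, the step size `alpha`, the regularization weight `gamma`, and the number of consensus sub-iterations are all assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy setup (all problem data are illustrative assumptions) ---
# Agent i holds f_i(x_i) = 0.5 * (x_i - a_i)^2 (strongly convex) and the
# agents share one coupled constraint: sum_i x_i <= b.
N = 5
a = rng.uniform(1.0, 3.0, size=N)
b = 0.5 * a.sum()                      # chosen so the constraint is active
gamma = 1e-2                           # quadratic regularization on the multiplier
alpha = 0.1                            # dual step size (assumed constant here)

def column_stochastic_matrix(n):
    """Random column-stochastic weight matrix of a time-varying digraph."""
    A = (rng.random((n, n)) < 0.5).astype(float) + np.eye(n)  # keep self-loops
    return A / A.sum(axis=0, keepdims=True)                   # columns sum to 1

lam = np.zeros(N)                      # each agent's local copy of the multiplier
T = 200
for t in range(T):
    # Primal step: each agent minimizes its regularized local Lagrangian
    # 0.5*(x_i - a_i)^2 + lam_i * x_i, whose closed form is x_i = a_i - lam_i.
    x = a - lam

    # Each agent's local share of the coupled-constraint residual.
    local_residual = x - b / N

    # Push-sum rounds on fresh column-stochastic matrices: the ratio z/w
    # converges to the network-wide average of local_residual, i.e.
    # approximately (1/N) * (sum_i x_i - b), even on unbalanced digraphs.
    C = column_stochastic_matrix(N)
    z = local_residual.copy()
    w = np.ones(N)
    for _ in range(20):                # a few consensus sub-iterations
        z = C @ z
        w = C @ w
    avg_residual = z / w

    # Regularized dual gradient ascent, projected onto lam >= 0.
    lam = np.maximum(0.0, lam + alpha * (N * avg_residual - gamma * lam))

print("primal decisions:", np.round(x, 3))
print("coupled constraint value:", x.sum() - b)   # small positive, shrinks with gamma
print("multiplier estimates:", np.round(lam, 3))
```

In this sketch the quadratic term `-(gamma/2) * lam**2` added to the dual objective is what makes the multiplier iterates bounded without assuming a compact dual set, mirroring (in spirit only) the role the abstract attributes to the regularization term.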