Iteration-complexity analysis of a generalized alternating direction method of multipliers

Journal of Global Optimization

Abstract

This paper analyzes the iteration-complexity of a generalized alternating direction method of multipliers (G-ADMM) for solving separable linearly constrained convex optimization problems. This ADMM variant, first proposed by Bertsekas and Eckstein, introduces a relaxation parameter \(\alpha \) into the second ADMM subproblem in order to improve the computational performance of the method. It is shown that, for a given tolerance \(\varepsilon >0\), the G-ADMM with \(\alpha \in (0, 2)\) provides, in at most \({\mathcal {O}}(1/\varepsilon ^2)\) iterations, an approximate solution of the Lagrangian system associated with the optimization problem under consideration. It is further demonstrated that, in at most \({\mathcal {O}}(1/\varepsilon )\) iterations, an approximate solution of the Lagrangian system can be obtained by means of an ergodic sequence associated to a sequence generated by the G-ADMM with \(\alpha \in (0, 2]\). Our approach consists of interpreting the G-ADMM as an instance of a hybrid proximal extragradient framework with some special properties. Preliminary numerical experiments are reported, confirming that using \(\alpha >1\) can lead to better numerical performance than \(\alpha =1\) (which corresponds to the standard ADMM).
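
In the usual formulation of this variant (see, e.g., references 6 and 10 below), the relaxation acts on the problem \(\min f(x)+g(y)\) subject to \(Ax+By=b\) by replacing \(Ax^{k+1}\) with \(\alpha Ax^{k+1}-(1-\alpha )(By^{k}-b)\) in the second subproblem and in the multiplier update, so that \(\alpha =1\) recovers the standard ADMM. The Python sketch below illustrates this iteration on a lasso-type instance \(\min \frac{1}{2}\Vert Dx-c\Vert ^{2}+\mu \Vert y\Vert _{1}\) subject to \(x-y=0\); the concrete instance, the parameter defaults and the stopping rule are illustrative assumptions, not the paper's own experimental setup.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def gadmm_lasso(D, c, mu, beta=1.0, alpha=1.8, max_iter=500, tol=1e-8):
        # Generalized ADMM for  min 0.5*||D x - c||^2 + mu*||y||_1  s.t.  x - y = 0,
        # with relaxation parameter alpha in (0, 2); alpha = 1 is the standard ADMM.
        n = D.shape[1]
        x, y, lam = np.zeros(n), np.zeros(n), np.zeros(n)
        # The x-subproblem is a positive definite linear system; factor its matrix once.
        L = np.linalg.cholesky(D.T @ D + beta * np.eye(n))
        Dtc = D.T @ c
        for _ in range(max_iter):
            # x-step: argmin_x 0.5*||D x - c||^2 + <lam, x - y> + (beta/2)*||x - y||^2
            rhs = Dtc - lam + beta * y
            x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
            # Relaxation: use alpha*x + (1 - alpha)*y in place of x in the next two steps.
            r = alpha * x + (1.0 - alpha) * y
            # y-step: prox of (mu/beta)*||.||_1 evaluated at r + lam/beta.
            y_new = soft_threshold(r + lam / beta, mu / beta)
            # Multiplier update.
            lam = lam + beta * (r - y_new)
            if np.linalg.norm(y_new - y) <= tol * max(1.0, np.linalg.norm(y)):
                y = y_new
                break
            y = y_new
        return x, y, lam

A quick check such as gadmm_lasso(np.random.randn(50, 100), np.random.randn(50), mu=0.1) runs with the default alpha = 1.8; comparing iteration counts against alpha = 1.0 mirrors the kind of comparison mentioned in the abstract.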

Notes

  1. An operator \(T:\mathbb {R}^s\rightrightarrows \mathbb {R}^s\) is said to be monotone if \(\langle {z-z'},{u-u'}\rangle \ge 0\) for every \(z,z'\in \mathbb {R}^s\), \(u\in T(z)\), and \(u'\in T(z')\). Moreover, \(T\) is maximal monotone if it is monotone and, additionally, if \(S\) is a monotone operator such that \(T(z)\subset S(z)\) for every \(z\in \mathbb {R}^s\), then \(T=S\).
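
     As a quick numerical illustration (added here, not part of the paper): for the affine operator \(T(z)=Mz+q\) with \(M\) symmetric positive semidefinite, one has \(\langle z-z',T(z)-T(z')\rangle =(z-z')^{\top }M(z-z')\ge 0\), so the monotonicity inequality above can be verified directly.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5
    A = rng.standard_normal((n, n))
    M = A @ A.T                      # symmetric positive semidefinite
    q = rng.standard_normal(n)
    T = lambda z: M @ z + q          # single-valued affine operator

    # Check <z - z', T(z) - T(z')> >= 0 on random pairs (up to rounding error).
    for _ in range(1000):
        z, zp = rng.standard_normal(n), rng.standard_normal(n)
        assert (z - zp) @ (T(z) - T(zp)) >= -1e-10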

References

  1. Bertsekas, D.P.: Constrained Optimization and Lagrange Multiplier Methods. Academic Press, New York (1982)

  2. Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011)

  3. Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40(1), 120–145 (2011)

  4. Corman, E., Yuan, X.: A generalized proximal point algorithm and its convergence rate. SIAM J. Optim. 24(4), 1614–1638 (2014)

  5. Cui, Y., Li, X., Sun, D., Toh, K.C.: On the convergence properties of a majorized ADMM for linearly constrained convex optimization problems with coupled objective functions. J. Optim. Theory Appl. 169(3), 1013–1041 (2016)

  6. Deng, W., Yin, W.: On the global and linear convergence of the generalized alternating direction method of multipliers. J. Sci. Comput. 66(3), 889–916 (2016)

  7. Eckstein, J.: Parallel alternating direction multiplier decomposition of convex programs. J. Optim. Theory Appl. 80(1), 39–62 (1994)

  8. Eckstein, J.: Some saddle-function splitting methods for convex programming. Optim. Methods Softw. 4(1), 75–83 (1994)

  9. Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(3, Ser. A), 293–318 (1992)

  10. Fang, E.X., He, B., Liu, H., Yuan, X.: Generalized alternating direction method of multipliers: new theoretical insights and applications. Math. Program. Comput. 7(2), 149–187 (2015)

  11. Gabay, D., Mercier, B.: A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput. Math. Appl. 2, 17–40 (1976)

  12. Glowinski, R.: Numerical Methods for Nonlinear Variational Problems. Springer Series in Computational Physics. Springer, Berlin (1984)

  13. Glowinski, R., Marroco, A.: Sur l’approximation, par éléments finis d’ordre un, et la résolution, par pénalisation-dualité, d’une classe de problèmes de Dirichlet non linéaires. RAIRO Anal. Numér. 9, 41–76 (1975)

  14. Gonçalves, M.L.N., Alves, M.M., Melo, J.G.: Pointwise and ergodic convergence rates of a variable metric proximal alternating direction method of multipliers. J. Optim. Theory Appl. 177(2), 448–478 (2018)

  15. Gonçalves, M.L.N., Melo, J.G., Monteiro, R.D.C.: Extending the ergodic convergence rate of the proximal ADMM. arXiv preprint arXiv:1611.02903 (2016)

  16. Gonçalves, M.L.N., Melo, J.G., Monteiro, R.D.C.: Improved pointwise iteration-complexity of a regularized ADMM and of a regularized non-Euclidean HPE framework. SIAM J. Optim. 27(1), 379–407 (2017)

  17. Gu, Y., Jiang, B., Han, D.: A semi-proximal-based strictly contractive Peaceman–Rachford splitting method. arXiv preprint arXiv:1506.02221 (2015)

  18. Hager, W.W., Yashtini, M., Zhang, H.: An \({O}(1/k)\) convergence rate for the variable stepsize Bregman operator splitting algorithm. SIAM J. Numer. Anal. 54(3), 1535–1556 (2016)

  19. He, B., Yuan, X.: On the \({\mathcal {O}}(1/n)\) convergence rate of the Douglas–Rachford alternating direction method. SIAM J. Numer. Anal. 50(2), 700–709 (2012)

  20. He, B., Yuan, X.: On non-ergodic convergence rate of Douglas–Rachford alternating direction method of multipliers. Numer. Math. 130(3), 567–577 (2015)

  21. Lin, T., Ma, S., Zhang, S.: An extragradient-based alternating direction method for convex minimization. Found. Comput. Math. 17(1), 17–35 (2017)

  22. Liu, J., Duan, Y., Sun, M.: A symmetric version of the generalized alternating direction method of multipliers for two-block separable convex programming. J. Inequal. Appl. 2017(1), 129 (2017)

  23. Monteiro, R.D.C., Svaiter, B.F.: On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean. SIAM J. Optim. 20(6), 2755–2787 (2010)

  24. Monteiro, R.D.C., Svaiter, B.F.: Iteration-complexity of block-decomposition algorithms and the alternating direction method of multipliers. SIAM J. Optim. 23(1), 475–507 (2013)

  25. Nishihara, R., Lessard, L., Recht, B., Packard, A., Jordan, M.I.: A general analysis of the convergence of ADMM. arXiv preprint arXiv:1502.02009 (2015)

  26. Ouyang, Y., Chen, Y., Lan, G., Pasiliao Jr., E.: An accelerated linearized alternating direction method of multipliers. SIAM J. Imaging Sci. 8(1), 644–681 (2015)

  27. Solodov, M.V., Svaiter, B.F.: A hybrid approximate extragradient-proximal point algorithm using the enlargement of a maximal monotone operator. Set-Valued Anal. 7(4), 323–345 (1999)

  28. Sun, H.: Analysis of fully preconditioned ADMM with relaxation in Hilbert spaces. arXiv preprint arXiv:1611.04801 (2016)

  29. Tao, M., Yuan, X.: On the optimal linear convergence rate of a generalized proximal point algorithm. J. Sci. Comput. 74(2), 826–850 (2018)

  30. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. B 58(1), 267–288 (1996)

  31. Tibshirani, R.J.: The lasso problem and uniqueness. Electron. J. Stat. 7, 1456–1490 (2013)

  32. Wang, X., Yuan, X.: The linearized alternating direction method of multipliers for Dantzig selector. SIAM J. Sci. Comput. 34(5), 2792–2811 (2012)

  33. Yang, J., Yuan, X.: Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization. Math. Comput. 82(281), 301–329 (2013)

Author information

Corresponding author

Correspondence to M. L. N. Gonçalves.

Additional information

The work of these authors was supported in part by CNPq Grants 302666/2017-6, 406975/2016-7 and CAPES.

About this article

Cite this article

Adona, V.A., Gonçalves, M.L.N. & Melo, J.G. Iteration-complexity analysis of a generalized alternating direction method of multipliers. J Glob Optim 73, 331–348 (2019). https://doi.org/10.1007/s10898-018-0697-z
