
2020 | OriginalPaper | Chapter

3. Accelerated Algorithms for Constrained Convex Optimization

Authors : Zhouchen Lin, Huan Li, Cong Fang

Published in: Accelerated Optimization for Machine Learning

Publisher: Springer Singapore


Abstract

This chapter reviews representative accelerated algorithms for deterministic constrained convex optimization. We give an overview of the accelerated penalty method, the accelerated Lagrange multiplier method, and the accelerated augmented Lagrange multiplier method. In particular, we concentrate on two widely used algorithms, namely the alternating direction method of multipliers (ADMM) and the primal-dual method. For ADMM, we study four scenarios: the generally convex and nonsmooth case, the strongly convex and nonsmooth case, the generally convex and smooth case, and the strongly convex and smooth case. We also introduce its non-ergodic accelerated variant. For the primal-dual method, we study three scenarios: both functions are generally convex, both are strongly convex, and one is generally convex while the other is strongly convex. Finally, we introduce the Frank–Wolfe algorithm under the assumption of a strongly convex constraint set.
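To make the ADMM template mentioned above concrete, here is a minimal sketch (not taken from the chapter; problem instance and variable names are illustrative) of the standard, non-accelerated ADMM applied to the lasso, min_x 0.5·||Ax − b||² + λ||x||₁, split as f(x) = 0.5·||Ax − b||² and g(z) = λ||z||₁ with the constraint x − z = 0:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Basic (non-accelerated) ADMM for the lasso, scaled-dual form."""
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)  # u is the scaled dual variable
    # The x-update solves (A^T A + rho*I) x = A^T b + rho*(z - u);
    # cache the Cholesky factor since the matrix never changes.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        # x-update: argmin_x f(x) + (rho/2)||x - z + u||^2
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: proximal operator of (lam/rho)*||.||_1, i.e., soft-thresholding
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual ascent step on the constraint x - z = 0
        u = u + x - z
    return z  # z is exactly sparse at termination

# Illustrative use: recover a 3-sparse signal from noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = admm_lasso(A, b, lam=0.5)
```

The per-iteration structure — an x-minimization, a z-minimization, then a dual update — is exactly what the chapter's four ADMM scenarios and the non-ergodic accelerated variant refine under different convexity and smoothness assumptions.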

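The Frank–Wolfe algorithm referred to in the abstract replaces projections with a linear minimization oracle over the constraint set. As a minimal sketch (an illustrative instance, not the chapter's analysis), here is the classical method with the open-loop step size 2/(k + 2) on an ℓ₂ ball, which is a strongly convex set:

```python
import numpy as np

def frank_wolfe_ball(grad_f, x0, radius, iters=500):
    """Classical Frank-Wolfe over the l2 ball {x : ||x|| <= radius}."""
    x = x0.copy()
    for k in range(iters):
        g = grad_f(x)
        # Linear minimization oracle: argmin_{||s|| <= radius} <g, s>
        s = -radius * g / np.linalg.norm(g)
        gamma = 2.0 / (k + 2)  # standard open-loop step size
        x = (1 - gamma) * x + gamma * s
    return x

# Illustrative use: minimize 0.5*||x - c||^2 over the unit ball with ||c|| > 1,
# so the minimizer is the projection c / ||c||.
c = np.array([3.0, 4.0])
x_star = frank_wolfe_ball(lambda x: x - c, np.zeros(2), radius=1.0)
```

For generic compact convex sets this scheme attains an O(1/k) rate; the chapter's point is that strong convexity of the constraint set (as with this ball) permits faster rates.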

Footnotes
1
The four cases in Sects. 3.4.1–3.4.4 assume different conditions. They are not accelerated, and their rates cannot be compared with each other. However, the convergence rate in Sect. 3.4.5 is truly accelerated.
 
2
In fact, the faster rate is due to the stronger assumption, i.e., the strong convexity of g, rather than the acceleration technique.
 
3
Similar to Sect. 3.4.2, we have the faster rate due to the stronger assumption, rather than the acceleration technique.
 
DOI: https://doi.org/10.1007/978-981-15-2910-8_3
