
01-07-2020

Stochastic Primal Dual Fixed Point Method for Composite Optimization

Authors: Ya-Nan Zhu, Xiaoqun Zhang

Published in: Journal of Scientific Computing | Issue 1/2020


Abstract

In this paper we propose a stochastic primal dual fixed point method for minimizing the sum of two proper lower semi-continuous convex functions, one of which is composite. The method is based on the primal dual fixed point method proposed in Chen et al. (Inverse Probl 29:025011, 2013) and does not require solving subproblems. Under mild conditions, convergence is established for two sets of assumptions, bounded and unbounded gradients, and the expected error of the iterates converges at the rate \({\mathcal {O}}(k^{-\alpha })\), where k is the iteration number and \(\alpha \in (0,1]\). Finally, numerical examples on graphical Lasso and logistic regression are given to demonstrate the effectiveness of the proposed algorithm.
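To make the setting concrete, the problem class has the form \(\min_x f(x) + h(Bx)\), where \(f\) is a smooth data-fitting term given as an expectation or a finite sum, \(h\) is a nonsmooth convex penalty, and \(B\) is a linear map. The Python sketch below illustrates a stochastic PDFP-style loop under the assumption that the full gradient in the deterministic PDFP iteration of Chen et al. (2013) is replaced by a minibatch estimate and that the penalty is \(\mu\Vert Bx\Vert_1\); the step-size schedule \(\gamma_k = \gamma_0 k^{-\alpha}\), the helper names grad_batch and soft_threshold, and all default parameters are illustrative assumptions, not the exact algorithm or settings analysed in the paper.

import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def stochastic_pdfp(grad_batch, B, x0, n_samples, mu,
                    gamma0=1.0, lam=None, alpha=0.75,
                    batch_size=32, max_iter=1000, seed=0):
    # Illustrative stochastic primal-dual fixed point loop for
    #     min_x (1/n) sum_i f_i(x) + mu * ||B x||_1,
    # where grad_batch(x, idx) returns the averaged gradient of the sampled f_i.
    # B and x0 are assumed to be dense NumPy arrays.
    rng = np.random.default_rng(seed)
    x, v = x0.copy(), np.zeros(B.shape[0])
    if lam is None:
        # lam <= 1 / lambda_max(B B^T), the usual PDFP dual step-size condition.
        lam = 1.0 / np.linalg.norm(B @ B.T, 2)
    for k in range(1, max_iter + 1):
        gamma = gamma0 * k ** (-alpha)                # diminishing primal step size (assumed schedule)
        idx = rng.choice(n_samples, size=batch_size, replace=False)
        g = grad_batch(x, idx)                        # stochastic gradient estimate
        y = x - gamma * g                             # forward (gradient) step
        w = B @ y + v - lam * (B @ (B.T @ v))         # pre-prox dual point
        v = w - soft_threshold(w, gamma * mu / lam)   # dual update: (I - prox_{(gamma/lam) mu||.||_1})
        x = y - lam * (B.T @ v)                       # primal correction with the new dual variable
    return x

For the logistic regression example, grad_batch would return the minibatch gradient of the logistic loss and B the penalty operator of the generalized Lasso term; these choices, like the schedule above, are placeholders rather than the configuration reported in the numerical experiments.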


Appendix
Available only for authorised users
Literature
1. Chen, P., Huang, J., Zhang, X.: A primal–dual fixed point algorithm for convex separable minimization with applications to image restoration. Inverse Probl. 29, 025011 (2013)
2. Tibshirani, R.J.: The Solution Path of the Generalized Lasso. Stanford University, Stanford (2011)
3. Vapnik, V.: The Nature of Statistical Learning Theory. Springer Science & Business Media (2003)
4. Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2005)
5. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2, 183–202 (2009)
6. Nesterov, Y.E.: A method for solving the convex programming problem with convergence rate \(O(1/k^{2})\). Doklady Akademii Nauk SSSR 269, 543–547 (1983)
8.
9. Duchi, J., Singer, Y.: Efficient online and batch learning using forward backward splitting. J. Mach. Learn. Res. 10, 2899–2934 (2009)
11. Shalev-Shwartz, S., Zhang, T.: Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. In: International Conference on Machine Learning, pp. 64–72 (2014)
12. Xiao, L., Zhang, T.: A proximal stochastic gradient method with progressive variance reduction. SIAM J. Optim. 24, 2057–2075 (2014)
13. Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55, 293–318 (1992)
14. Setzer, S.: Split Bregman algorithm, Douglas–Rachford splitting and frame shrinkage. In: International Conference on Scale Space and Variational Methods in Computer Vision, pp. 464–476. Springer (2009)
15. He, B., Yuan, X.: On the O(1/n) convergence rate of the Douglas–Rachford alternating direction method. SIAM J. Numer. Anal. 50, 700–709 (2012)
16. Deng, W., Yin, W.: On the global and linear convergence of the generalized alternating direction method of multipliers. J. Sci. Comput. 66, 889–916 (2016)
17. Zhu, M., Chan, T.F.: An Efficient Primal–Dual Hybrid Gradient Algorithm for Total Variation Image Restoration. CAM Report 08-34, UCLA, Los Angeles, CA (2008)
18. Esser, E., Zhang, X., Chan, T.F.: A General Framework for a Class of First Order Primal–Dual Algorithms for Convex Optimization in Imaging Science. CAM Report 08-34, UCLA, Los Angeles, CA (2008)
19. Chambolle, A., Pock, T.: A first-order primal–dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40(1), 120–145 (2011)
20. Micchelli, C.A., Shen, L., Xu, Y.: Proximity algorithms for image models: denoising. Inverse Probl. 27, 045009 (2011)
21. Ouyang, H., He, N., Tran, L., Gray, A.: Stochastic alternating direction method of multipliers. In: International Conference on Machine Learning, pp. 80–88 (2013)
22. Xiao, L.: Dual averaging methods for regularized stochastic learning and online optimization. J. Mach. Learn. Res. 11, 2543–2596 (2010)
23. Duchi, J., Singer, Y.: Efficient online and batch learning using forward backward splitting. J. Mach. Learn. Res. 10, 2873–2908 (2009)
24. Suzuki, T.: Dual averaging and proximal gradient descent for online alternating direction multiplier method. In: International Conference on Machine Learning, pp. 392–400 (2013)
25. Zhong, W., Kwok, J.: Fast stochastic alternating direction method of multipliers. In: International Conference on Machine Learning, pp. 46–54 (2014)
26. Le Roux, N., Schmidt, M., Bach, F.R.: A stochastic gradient method with an exponential convergence rate for finite training sets. In: Advances in Neural Information Processing Systems, pp. 2672–2680 (2012)
27. Zhao, S.-Y., Li, W.-J., Zhou, Z.-H.: Scalable stochastic alternating direction method of multipliers (2015). arXiv preprint arXiv:1502.03529
28. Suzuki, T.: Stochastic dual coordinate ascent with alternating direction method of multipliers. In: International Conference on Machine Learning, pp. 736–744 (2014)
29. Zheng, S., Kwok, J.T.: Fast-and-light stochastic ADMM. In: IJCAI, pp. 2407–2613 (2016)
30. Chambolle, A., Ehrhardt, M.J., Richtárik, P., Schönlieb, C.-B.: Stochastic primal–dual hybrid gradient algorithm with arbitrary sampling and imaging applications. SIAM J. Optim. 28(4), 2783–2808 (2018)
31. Chen, Y., Lan, G., Ouyang, Y.: Optimal primal–dual methods for a class of saddle point problems. SIAM J. Optim. 24(4), 1779–1814 (2014)
32. Moulines, E., Bach, F.R.: Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In: Advances in Neural Information Processing Systems, pp. 451–459 (2011)
33. Polyak, B.T.: Introduction to Optimization, vol. 1. Optimization Software, Inc., Publications Division, New York (1987)
34. Kim, S., Sohn, K.-A., Xing, E.P.: A multivariate regression approach to association analysis of a quantitative trait network. Bioinformatics 25(12), i204–i212 (2009)
35. Chang, C.-C., Lin, C.-J.: LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol. 2, 27:1–27:27 (2011)
36. Banerjee, O., El Ghaoui, L., d'Aspremont, A.: Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. J. Mach. Learn. Res. 9, 485–516 (2008)
37. Friedman, J., Hastie, T., Tibshirani, R.: Sparse inverse covariance estimation with the graphical lasso. Biostatistics 9, 432–441 (2008)
Metadata
Title
Stochastic Primal Dual Fixed Point Method for Composite Optimization
Authors
Ya-Nan Zhu
Xiaoqun Zhang
Publication date
01-07-2020
Publisher
Springer US
Published in
Journal of Scientific Computing / Issue 1/2020
Print ISSN: 0885-7474
Electronic ISSN: 1573-7691
DOI
https://doi.org/10.1007/s10915-020-01265-2
