Published in: Journal of Scientific Computing, Issue 2-3/2013

01.02.2013

Accelerated Linearized Bregman Method

Authors: Bo Huang, Shiqian Ma, Donald Goldfarb



Abstract

In this paper, we propose and analyze an accelerated linearized Bregman (ALB) method for solving the basis pursuit and related sparse optimization problems. This accelerated algorithm is based on the fact that the linearized Bregman (LB) algorithm first proposed by Stanley Osher and his collaborators is equivalent to a gradient descent method applied to a certain dual formulation. We show that the LB method requires O(1/ϵ) iterations to obtain an ϵ-optimal solution and the ALB algorithm reduces this iteration complexity to \(O(1/\sqrt{\epsilon})\) while requiring almost the same computational effort on each iteration. Numerical results on compressed sensing and matrix completion problems are presented that demonstrate that the ALB method can be significantly faster than the LB method.
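To make the idea concrete, the following is a minimal sketch (not the authors' exact implementation) of the linearized Bregman iteration for the smoothed basis pursuit problem min μ‖x‖₁ + ½‖x‖² s.t. Ax = b, viewed as gradient ascent on the dual with iterate v = Aᵀy, plus an accelerated variant that applies Nesterov-style extrapolation weights (k−1)/(k+2) to the dual iterate; the extrapolation weights and parameter defaults here are illustrative assumptions:

```python
import numpy as np

def shrink(v, mu):
    # soft-thresholding (shrinkage) operator: prox of mu*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, b, mu=5.0, tau=None, iters=3000, accelerate=False):
    """Sketch of the (accelerated) linearized Bregman iteration for
    min mu*||x||_1 + 0.5*||x||^2  s.t.  Ax = b.
    The dual gradient is b - A*shrink(v, mu); its Lipschitz constant
    is ||A||^2, so tau <= 1/||A||^2 is a safe step size."""
    n = A.shape[1]
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2
    v = np.zeros(n)
    v_prev = v.copy()
    for k in range(1, iters + 1):
        if accelerate:
            # Nesterov-style extrapolation on the dual iterate
            # (assumption: standard (k-1)/(k+2) momentum weights)
            w = v + (k - 1.0) / (k + 2.0) * (v - v_prev)
        else:
            w = v
        x = shrink(w, mu)                    # primal iterate from dual
        v_prev = v
        v = w + tau * A.T @ (b - A @ x)      # dual gradient step
    return shrink(v, mu)

# illustrative compressed sensing instance (synthetic data)
rng = np.random.default_rng(0)
m, n, s = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.choice([-1.0, 1.0], s)
b = A @ x_true
x_lb = linearized_bregman(A, b, accelerate=False)
x_alb = linearized_bregman(A, b, accelerate=True)
```

Both variants cost one multiplication by A and one by Aᵀ per iteration; the extrapolation step adds only a vector combination, which is why acceleration is essentially free per iteration.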


Appendices
Accessible only with authorization
Metadata
Title
Accelerated Linearized Bregman Method
Authors
Bo Huang
Shiqian Ma
Donald Goldfarb
Publication date
01.02.2013
Publisher
Springer US
Published in
Journal of Scientific Computing / Issue 2-3/2013
Print ISSN: 0885-7474
Electronic ISSN: 1573-7691
DOI
https://doi.org/10.1007/s10915-012-9592-9
