2013 | Original Paper | Book Chapter

15. Algorithms for ℓ1-Minimization

Authors: Simon Foucart, Holger Rauhut

Published in: A Mathematical Introduction to Compressive Sensing

Publisher: Springer New York

Abstract

This chapter presents a selection of three algorithms designed specifically to compute solutions of ℓ1-minimization problems. The algorithms, chosen with simplicity of analysis and diversity of techniques in mind, are the homotopy method, Chambolle and Pock’s primal–dual algorithm, and the iteratively reweighted least squares algorithm. Other algorithms are also mentioned but discussed in less detail.
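
For a concrete sense of the third of these methods, the sketch below implements iteratively reweighted least squares (IRLS) for the equality-constrained ℓ1-minimization (basis pursuit) problem: minimize ||x||_1 subject to Ax = y. This is a minimal illustration, not the chapter's algorithm as analyzed: the function name irls_l1, the fixed iteration budget, and the geometric decay of the smoothing parameter eps are assumptions made here for brevity, whereas the analyzed variant updates the smoothing parameter adaptively from the current iterate.

```python
import numpy as np

def irls_l1(A, y, num_iter=50, eps=1.0, eps_decay=0.9):
    """Sketch of IRLS for  min ||x||_1  subject to  A x = y.

    Each iteration solves the weighted least squares problem
        min  sum_i x_i**2 / d_i   subject to  A x = y,
    whose closed-form solution is  x = D A^T (A D A^T)^{-1} y
    with D = diag(d) and d_i = sqrt(x_i**2 + eps**2).
    The geometric decay of eps is a simplification for illustration.
    """
    _, N = A.shape
    x = np.zeros(N)
    for _ in range(num_iter):
        d = np.sqrt(x**2 + eps**2)              # smoothed weights, strictly positive
        DAt = d[:, None] * A.T                  # D A^T  (N x m)
        x = DAt @ np.linalg.solve(A @ DAt, y)   # weighted least squares step
        eps *= eps_decay                        # gradually sharpen the l1 surrogate
    return x

# Example: recover a 5-sparse vector from 40 Gaussian measurements.
rng = np.random.default_rng(0)
m, N, s = 40, 100, 5
A = rng.standard_normal((m, N)) / np.sqrt(m)
x_true = np.zeros(N)
x_true[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
x_hat = irls_l1(A, A @ x_true)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

Each iteration only requires solving an m × m positive definite linear system, which is what makes reweighted least squares attractive when the number of measurements m is much smaller than the ambient dimension N.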

Metadata
Title: Algorithms for ℓ1-Minimization
Authors: Simon Foucart, Holger Rauhut
Copyright Year: 2013
Publisher: Springer New York
DOI: https://doi.org/10.1007/978-0-8176-4948-7_15
