
2014 | Original Paper | Book Chapter

3. Sparse Signal Recovery with Exponential-Family Noise

Authors: Irina Rish, Genady Grabarnik

Published in: Compressed Sensing & Sparse Filtering

Publisher: Springer Berlin Heidelberg


Abstract

The problem of sparse signal recovery from a relatively small number of noisy measurements has been studied extensively in the recent compressed sensing literature. Typically, the signal reconstruction problem is formulated as \(l_1\)-regularized linear regression. From a statistical point of view, this problem is equivalent to maximum a posteriori probability (MAP) parameter estimation with a Laplace prior on the vector of parameters (i.e., the signal) and linear measurements disturbed by Gaussian noise. Classical results in compressed sensing (e.g., [7]) state sufficient conditions for accurate recovery of noisy signals in such a linear-regression setting. A natural question to ask is whether one can accurately recover sparse signals under different noise assumptions. Herein, we extend the results of [7] to the general case of exponential-family noise, which includes Gaussian noise as a particular case; the recovery problem is then formulated as \(l_1\)-regularized Generalized Linear Model (GLM) regression. We show that, under standard restricted isometry property (RIP) assumptions on the design matrix, \(l_1\)-minimization can provide stable recovery of a sparse signal in the presence of exponential-family noise, and we state some sufficient conditions on the noise distribution that guarantee such recovery.
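The \(l_1\)-regularized GLM formulation described in the abstract can be illustrated with a small proximal-gradient (ISTA) sketch. This is not the authors' code: it assumes Poisson noise with a log link as the exponential-family example, a random Gaussian design matrix, and arbitrary problem sizes and regularization parameters chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: a k-sparse signal observed through a random design
# matrix A, with Poisson (exponential-family) measurement noise.
n, p, k = 200, 50, 5                      # measurements, signal length, sparsity
A = rng.normal(scale=1 / np.sqrt(n), size=(n, p))
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = 0.5 * rng.normal(size=k)
y = rng.poisson(np.exp(A @ x_true))       # Poisson noise with log link

def soft_threshold(v, t):
    """Proximal operator of the l1 norm (coordinate-wise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_glm_recover(A, y, lam=0.05, step=0.2, iters=3000):
    """Proximal gradient descent for l1-regularized Poisson GLM regression:
    minimize  sum_i [exp((Ax)_i) - y_i * (Ax)_i]  +  lam * ||x||_1.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (np.exp(A @ x) - y)  # gradient of Poisson neg. log-likelihood
        x = soft_threshold(x - step * grad, step * lam)
    return x

x_hat = l1_glm_recover(A, y)
support = np.flatnonzero(np.abs(x_hat) > 1e-3)
```

For Gaussian noise with the identity link, the gradient step reduces to `A.T @ (A @ x - y)` and the sketch becomes ordinary \(l_1\)-regularized linear regression (the Lasso), recovering the classical setting of [7] as a special case.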


References
1. Banerjee A, Merugu S, Dhillon IS, Ghosh J (2005) Clustering with Bregman divergences. J Mach Learn Res 6:1705–1749
2. Banerjee A, Merugu S, Dhillon I, Ghosh J (2004) Clustering with Bregman divergences. In: Proceedings of the fourth SIAM international conference on data mining, pp 234–245
3. Beygelzimer A, Kephart J, Rish I (2007) Evaluation of optimization methods for network bottleneck diagnosis. In: Proceedings of ICAC-07
5. Candes E, Romberg J (2006) Quantitative robust uncertainty principles and optimally sparse decompositions. Found Comput Math 6(2):227–254
6. Candes E, Romberg J, Tao T (2006) Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans Inf Theory 52(2):489–509
7. Candes E, Romberg J, Tao T (2006) Stable signal recovery from incomplete and inaccurate measurements. Commun Pure Appl Math 59(8):1207–1223
8. Candes E, Tao T (2005) Decoding by linear programming. IEEE Trans Inf Theory 51(12):4203–4215
9. Carroll MK, Cecchi GA, Rish I, Garg R, Rao AR (2009) Prediction and interpretation of distributed neural activity with sparse models. NeuroImage 44(1):112–122
10. Chandalia G, Rish I (2007) Blind source separation approach to performance diagnosis and dependency discovery. In: Proceedings of IMC-2007
12. Donoho D (2006) For most large underdetermined systems of linear equations, the minimal l1-norm near-solution approximates the sparsest near-solution. Commun Pure Appl Math 59(7):907–934
13. Donoho D (2006) For most large underdetermined systems of linear equations, the minimal l1-norm solution is also the sparsest solution. Commun Pure Appl Math 59(6):797–829
14. Mitchell TM, Hutchinson R, Niculescu RS, Pereira F, Wang X, Just M, Newman S (2004) Learning to decode cognitive states from brain images. Mach Learn 57:145–175
15. Negahban S, Ravikumar P, Wainwright MJ, Yu B (2009) A unified framework for the analysis of regularized \(M\)-estimators. In: Proceedings of neural information processing systems (NIPS)
16. Negahban S, Ravikumar P, Wainwright MJ, Yu B (2010) A unified framework for the analysis of regularized \(M\)-estimators. Technical Report 797, Department of Statistics, UC Berkeley
17. Park M-Y, Hastie T (2007) An L1 regularization-path algorithm for generalized linear models. JRSSB 69(4):659–677
18. Rish I, Brodie M, Ma S, Odintsova N, Beygelzimer A, Grabarnik G, Hernandez K (2005) Adaptive diagnosis in distributed systems. IEEE Trans Neural Netw (special issue on adaptive learning systems in communication networks) 16(5):1088–1109
19. Rish I, Grabarnik G (2009) Sparse signal recovery with exponential-family noise. In: Proceedings of the 47th annual Allerton conference on communication, control, and computing
20. Rockafellar RT (1970) Convex analysis. Princeton University Press, Princeton, NJ
21. Zheng A, Rish I, Beygelzimer A (2005) Efficient test selection in active diagnosis via entropy approximation. In: Proceedings of UAI-05
Metadata
Title
Sparse Signal Recovery with Exponential-Family Noise
Authors
Irina Rish
Genady Grabarnik
Copyright Year
2014
Publisher
Springer Berlin Heidelberg
DOI
https://doi.org/10.1007/978-3-642-38398-4_3
