A Unified Approach to Uniform Signal Recovery From Nonlinear Observations

Authors: Martin Genzel, Alexander Stollenwerk

Published in: Foundations of Computational Mathematics | Issue 3/2023 (published online 30-03-2022)

Abstract

Recent advances in quantized compressed sensing and high-dimensional estimation have shown that signal recovery is even feasible under strong nonlinear distortions in the observation process. An important characteristic of associated guarantees is uniformity, i.e., recovery succeeds for an entire class of structured signals with a fixed measurement ensemble. However, despite significant results in various special cases, a general understanding of uniform recovery from nonlinear observations is still missing. This paper develops a unified approach to this problem under the assumption of i.i.d. sub-Gaussian measurement vectors. Our main result shows that a simple least-squares estimator with any convex constraint can serve as a universal recovery strategy, which is outlier robust and does not require explicit knowledge of the underlying nonlinearity. Based on empirical process theory, a key technical novelty is an approximative increment condition that can be implemented for all common types of nonlinear models. This flexibility allows us to apply our approach to a variety of problems in nonlinear compressed sensing and high-dimensional statistics, leading to several new and improved guarantees. Each of these applications is accompanied by a conceptually simple and systematic proof, which does not rely on any deeper properties of the observation model. On the other hand, known local stability properties can be incorporated into our framework in a plug-and-play manner, thereby implying near-optimal error bounds.
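The abstract's "simple least-squares estimator with any convex constraint" refers to minimizing \(\Vert \varvec{y} - \varvec{A}\varvec{z}\Vert _2^2\) over a convex set \(K\). As a purely illustrative sketch (not from the paper; all parameter values, the choice \(K = \sqrt{s}\, B_1^p\), and the use of projected gradient descent are assumptions made here), the following Python snippet recovers the direction of a sparse signal from 1-bit observations \(y_i = \operatorname{sign}(\langle \varvec{a}_i, \varvec{x}\rangle )\), without the solver ever using knowledge of the sign nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(0)
p, m, s = 200, 1200, 5  # ambient dimension, measurements, sparsity (illustrative)

# Ground-truth s-sparse unit vector x.
x = np.zeros(p)
x[:s] = rng.standard_normal(s)
x /= np.linalg.norm(x)

# i.i.d. Gaussian (hence sub-Gaussian) measurement vectors; 1-bit observations.
A = rng.standard_normal((m, p))
y = np.sign(A @ x)

def project_l1(v, radius):
    """Euclidean projection onto the l1-ball of the given radius."""
    if np.abs(v).sum() <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
    theta = (css[k] - radius) / (k + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

# Projected gradient descent on z -> ||y - A z||_2^2 over K = sqrt(s) * B_1^p.
z = np.zeros(p)
step = 1.0 / np.linalg.eigvalsh(A.T @ A).max()
for _ in range(300):
    z = project_l1(z - step * (A.T @ (A @ z - y)), np.sqrt(s))

# One-bit observations carry no scale information, so only the direction
# of x is identifiable; compare normalized estimate against x.
err = np.linalg.norm(z / np.linalg.norm(z) - x)
print(f"direction error: {err:.3f}")  # small compared to a random direction
```

Note that the same code would run unchanged if `np.sign` were replaced by any other (unknown) nonlinearity; only the quality of the estimate would change, in line with the uniform guarantees described in the abstract.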


Footnotes
1
In particular, we will not discuss any details about possible algorithmic implementations of (\({\mathsf {P}}_{K,\varvec{y}}\)), which is an important subject in its own right.
 
2
Here, the \(\nu _i\) correspond to adversarial noise, i.e., as long as the \(\ell ^{2}\)-constraint is satisfied, they may be arbitrary (even worst-case) perturbations. This model is very common in uniform recovery and compressed sensing, cf. [21]. But note that, compared to statistical noise, this comes at the price of consistent estimation, i.e., the reconstruction error may not become arbitrarily small as \(m \rightarrow \infty \); see Remarks 1 and 4 for further discussion.
 
3
This is an important difference from the non-uniform regime, where \(\varvec{x}\) is a fixed signal. In that case, it is possible to apply known multiplier concentration inequalities to control (1.3), e.g., see [42, Thm. 1.9].
 
4
Note that we actually have that \(L\ge \sqrt{1/\log (2)} > 1\) if \(\varvec{a}\) is isotropic, see [35].
 
5
One should bear in mind that the target function \(T\) is a purely theoretical object that does not affect the solution of (\({\mathsf {P}}_{K,\varvec{y}}\)) and may be (partially) unknown in practice. Nevertheless, it is an essential analysis tool for Problem 1, relating the underlying observation model to the actual recovery method (\({\mathsf {P}}_{K,\varvec{y}}\)), see also Remark 2(1).
 
6
The use of the modifier ‘\(\sim \)’ in Assumption 1 is also due to this fact and may help to distinguish between our observation model and the actual input for (\({\mathsf {P}}_{K,\varvec{y}}\)).
 
7
The condition (2.3) is stated in such a way that it is convenient to handle in our specific applications (see Sect. 4). However, for a basic understanding, the reader may simply set \(u= u_0\), so that the first and second branch in (2.3) can be merged to \(m \gtrsim L^2(\log L+ L^2\varDelta ^2) \cdot \big (w_{t}^2(K- T\mathcal {X})+ u^2 \big )\).
 
8
In the case of exact recovery, i.e., \(t = 0\), we follow the convention \(0 \cdot \infty {:=}0\), so that the condition (2.3) requires that \(r= \hat{r}= L_{t}= \hat{L}_{t}= 0\). Then, (2.3) is particularly fulfilled for \(m \ge C \cdot L^2 \log L\cdot (w_{0}^2(K- T\mathcal {X})+ u^2)\), where \(m_0= 0\), \(\varDelta = L^{-1} \sqrt{\log L}\), and \(u= u_0\).
 
9
Note that the expression \(O(m^{-1/2})\) suppresses the dependence on \(w_{t}(K- T\mathcal {X})\). Although the latter can be trivially bounded by \(w_{0}(K- T\mathcal {X})\), which is independent of t, such an estimate might not appropriately capture the (low) complexity of \(K\) in certain cases.
 
10
A simple example is as follows: for a unit vector \(\varvec{x}\in \mathbb {S}^{p-1}\) to be reconstructed, set \(\nu _i = c \cdot t \cdot \langle \varvec{a}_i, \varvec{x}\rangle \) for a sufficiently small constant \(c > 0\). If m is sufficiently large, then (3.8) holds with high probability. At the same time, we have that \(y_i = \langle \varvec{a}_i, \varvec{x}\rangle + \nu _i = \langle \varvec{a}_i, (1 + ct)\varvec{x}\rangle \). Hence, Corollary 1 can also be applied, certifying exact recovery of the rescaled vector \((1 + ct)\varvec{x}\) via (\({\mathsf {P}}_{K,\varvec{y}}\)). In other words, we have that \(\Vert \hat{\varvec{z}}- \varvec{x}\Vert _{2} = ct \cdot \Vert \varvec{x}\Vert _{2} \asymp t\).
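The arithmetic in this footnote can be checked numerically. The following sketch (with arbitrary illustrative values for \(c\), \(t\), \(m\), and \(p\)) verifies that the perturbed observations coincide exactly with noiseless observations of the rescaled vector, so that exact recovery of \((1 + ct)\varvec{x}\) yields a reconstruction error of precisely \(ct \cdot \Vert \varvec{x}\Vert _{2}\):

```python
import numpy as np

rng = np.random.default_rng(1)
p, m = 50, 500
t, c = 0.2, 0.1  # noise level t and a small constant c (illustrative values)

x = rng.standard_normal(p)
x /= np.linalg.norm(x)          # unit vector to be reconstructed
A = rng.standard_normal((m, p))

nu = c * t * (A @ x)            # adversarial noise aligned with the signal
y = A @ x + nu                  # equals A @ ((1 + c*t) * x) entrywise

# The noisy observations are indistinguishable from noiseless observations
# of the rescaled vector (1 + c*t) x:
assert np.allclose(y, A @ ((1 + c * t) * x))

# Exact recovery of (1 + c*t) x therefore incurs the error c*t*||x||_2:
z_hat = (1 + c * t) * x
print(np.linalg.norm(z_hat - x))  # = c * t, since ||x||_2 = 1
```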
 
11
Here, it is important to note that, in contrast to \(K\), the (transformed) signal set \(T\mathcal {X}\) may be highly non-convex.
 
12
The existence of an efficient projection and the (non-)convexity of \(K\) are not equivalent: there are many examples of efficient projections onto non-convex sets, while the projection onto a convex set can be NP-hard, e.g., see [53].
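As a concrete instance of an efficient projection onto a non-convex set (an illustrative example added here, not taken from the paper): the Euclidean projection onto the set of \(s\)-sparse vectors is simple hard thresholding, i.e., keeping the \(s\) largest-magnitude entries:

```python
import numpy as np

def project_sparse(v, s):
    """Euclidean projection onto the (non-convex) set of s-sparse vectors:
    keep the s entries of largest magnitude, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]  # indices of the s largest magnitudes
    out[idx] = v[idx]
    return out

v = np.array([0.5, -3.0, 0.1, 2.0, -0.2])
print(project_sparse(v, 2))  # keeps -3.0 and 2.0, zeroes the rest
```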
 
13
To be more precise, we assume that the tuples \((\varvec{a}_i, F_i, \tilde{y}_{t,i})\) are independent copies of \((\varvec{a}, F, \tilde{y}_{t})\) for \(i = 1, \dots , m\). In particular, the respective conditions of Assumption 2(a)–(c) are also satisfied for \(\{\tilde{y}_{t,i}(\varvec{x})\}_{\varvec{x}\in \mathcal {X}}\).
 
14
To be slightly more precise, we apply Theorem 8 to the index sets \(\mathcal {A}{:=}\mathcal {X}\) and \(\mathcal {B}{:=}K_{\mathcal {X},t}\), while \(X \sim \mathbb {P}\) corresponds to the random tuple \((\varvec{a}, F, \tilde{y}_{t})\).
 
Literature
1. Ai, A., Lapanowski, A., Plan, Y., Vershynin, R.: One-bit compressed sensing with non-Gaussian measurements. Linear Algebra Appl. 441, 222–239 (2014)
2. Amelunxen, D., Lotz, M., McCoy, M.B., Tropp, J.A.: Living on the edge: phase transitions in convex programs with random data. Inf. Inference 3(3), 224–294 (2014)
3. Baraniuk, R.G., Foucart, S., Needell, D., Plan, Y., Wootters, M.: Exponential decay of reconstruction error from binary measurements of sparse signals. IEEE Trans. Inf. Theory 63(6), 3368–3385 (2017)
4. Bhandari, A., Krahmer, F., Raskar, R.: On unlimited sampling and reconstruction. IEEE Trans. Signal Process. 69, 3827–3839 (2020)
5. Bora, A., Jalal, A., Price, E., Dimakis, A.G.: Compressed sensing using generative models. In: Precup, D., Teh, Y.W. (eds.) Proceedings of the 34th International Conference on Machine Learning (ICML), vol. 70, pp. 537–546 (2017)
6. Boufounos, P.T., Jacques, L., Krahmer, F., Saab, R.: Quantization and compressive sensing. In: Boche, H., Calderbank, R., Kutyniok, G., Vybiral, J. (eds.) Compressed Sensing and its Applications: MATHEON Workshop 2013, Applied and Numerical Harmonic Analysis, pp. 193–237. Birkhäuser, Cham (2015)
7. Brillinger, D.R.: A generalized linear model with “Gaussian” regressor variables. In: Bickel, P.J., Doksum, K., Hodges, J. (eds.) A Festschrift For Erich L. Lehmann, pp. 97–114. Chapman and Hall/CRC (1982)
8. Cai, J.F., Xu, W.: Guarantees of total variation minimization for signal recovery. Inf. Inference 4(4), 328–353 (2015)
9. Candès, E.J., Eldar, Y.C., Strohmer, T., Voroninski, V.: Phase retrieval via matrix completion. SIAM J. Imaging Sci. 6(1), 199–225 (2013)
10. Candès, E.J., Romberg, J.K., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006)
11. Candès, E.J., Romberg, J.K., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math. 59(8), 1207–1223 (2006)
12. Chandrasekaran, V., Recht, B., Parrilo, P.A., Willsky, A.S.: The convex geometry of linear inverse problems. Found. Comput. Math. 12(6), 805–849 (2012)
13. Dabeer, O., Karnik, A.: Signal parameter estimation using 1-bit dithered quantization. IEEE Trans. Inf. Theory 52(12), 5389–5405 (2006)
14. Dirksen, S.: Quantized compressed sensing: a survey. In: Boche, H., Caire, G., Calderbank, R., Kutyniok, G., Mathar, R., Petersen, P. (eds.) Compressed Sensing and Its Applications: Third International MATHEON Conference 2017, Applied and Numerical Harmonic Analysis, pp. 67–95. Birkhäuser, Cham (2019)
15. Dirksen, S., Genzel, M., Jacques, L., Stollenwerk, A.: The separation capacity of random neural networks. Preprint arXiv:2108.00207 (2021)
16. Dirksen, S., Jung, H.C., Rauhut, H.: One-bit compressed sensing with partial Gaussian circulant matrices. Inf. Inference 9(3), 601–626 (2020)
17. Dirksen, S., Mendelson, S.: Non-Gaussian hyperplane tessellations and robust one-bit compressed sensing. Preprint arXiv:1805.09409 (2018)
18. Dirksen, S., Mendelson, S.: Robust one-bit compressed sensing with partial circulant matrices. Preprint arXiv:1812.06719 (2018)
20. Foucart, S.: Flavors of compressive sensing. In: Fasshauer, G.E., Schumaker, L.L. (eds.) Approximation Theory XV: San Antonio 2016, Springer Proceedings in Mathematics & Statistics, pp. 61–104 (2017)
21. Foucart, S., Rauhut, H.: A Mathematical Introduction to Compressive Sensing. Applied and Numerical Harmonic Analysis. Birkhäuser, Basel (2013)
22. Genzel, M.: High-dimensional estimation of structured signals from non-linear observations with general convex loss functions. IEEE Trans. Inf. Theory 63(3), 1601–1619 (2017)
23. Genzel, M.: The mismatch principle and \(\ell ^1\)-analysis compressed sensing: a unified approach to estimation under large model uncertainties and structural constraints. PhD thesis, Technische Universität Berlin (2019). https://doi.org/10.14279/depositonce-8394
24. Genzel, M., Jung, P.: Recovering structured data from superimposed non-linear measurements. IEEE Trans. Inf. Theory 66(1), 453–477 (2020)
25. Genzel, M., Kipp, C.: Generic error bounds for the generalized Lasso with sub-exponential data. Preprint arXiv:2004.05361 (2020)
26. Genzel, M., Kutyniok, G., März, M.: \(\ell ^1\)-analysis minimization and generalized (co-)sparsity: when does recovery succeed? Appl. Comput. Harmon. Anal. 52, 82–140 (2021)
28. Genzel, M., Stollenwerk, A.: Robust 1-bit compressed sensing via hinge loss minimization. Inf. Inference 9(2), 361–422 (2020)
29. Goldstein, L., Minsker, S., Wei, X.: Structured signal recovery from non-linear and heavy-tailed measurements. IEEE Trans. Inf. Theory 64(8), 5513–5530 (2018)
30. Gordon, Y.: On Milman’s inequality and random subspaces which escape through a mesh in \({\mathbb{R}}^n\). In: Lindenstrauss, J., Milman, V.D. (eds.) Geometric Aspects of Functional Analysis. Springer, Berlin Heidelberg, New York (1988)
31. Gray, R.M., Neuhoff, D.L.: Quantization. IEEE Trans. Inf. Theory 44(6), 2325–2383 (1998)
32. Gray, R.M., Stockham, T.G.: Dithered quantizers. IEEE Trans. Inf. Theory 39(3), 805–812 (1993)
33. Jacques, L., Cambareri, V.: Time for dithering: fast and quantized random embeddings via the restricted isometry property. Inf. Inference 6(4), 441–476 (2017)
34. Jacques, L., Laska, J.N., Boufounos, P.T., Baraniuk, R.G.: Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors. IEEE Trans. Inf. Theory 59(4), 2082–2102 (2013)
35. Jeong, H., Li, X., Plan, Y., Yılmaz, Ö.: Sub-Gaussian matrices on sets: optimal tail dependence and applications. Preprint arXiv:2001.10631 (2020)
36. Jung, H.C., Maly, J., Palzer, L., Stollenwerk, A.: Quantized compressed sensing by rectified linear units. Preprint arXiv:1911.07816 (2019)
37. Knudson, K., Saab, R., Ward, R.: One-bit compressive sensing with norm estimation. IEEE Trans. Inf. Theory 62(5), 2748–2758 (2016)
38. Krahmer, F., Kruschel, C., Sandbichler, M.: Total variation minimization in compressed sensing. In: Boche, H., Caire, G., Calderbank, R., März, M., Kutyniok, G., Mathar, R. (eds.) Compressed Sensing and its Applications: Second International MATHEON Conference 2015, pp. 333–358. Birkhäuser (2017)
39. Liu, Z., Scarlett, J.: The generalized Lasso with nonlinear observations and generative priors. Preprint arXiv:2006.12415 (2020)
40. März, M., Boyer, C., Kahn, J., Weiss, P.: Sampling rates for \(\ell ^1\)-synthesis. Preprint arXiv:2004.07175 (2020)
42. Mendelson, S.: Upper bounds on product and multiplier empirical processes. Stoch. Proc. Appl. 126(12), 3652–3680 (2016)
43. Mendelson, S., Pajor, A., Tomczak-Jaegermann, N.: Reconstruction and subgaussian operators in asymptotic geometric analysis. Geom. Funct. Anal. 17(4), 1248–1282 (2007)
44. Moshtaghpour, A., Jacques, L., Cambareri, V., Degraux, K., De Vleeschouwer, C.: Consistent basis pursuit for signal and matrix estimates in quantized compressed sensing. IEEE Signal Process. Lett. 23(1), 25–29 (2016)
45. Oymak, S., Recht, B.: Near-optimal bounds for binary embeddings of arbitrary sets. Preprint arXiv:1512.04433 (2015)
46. Oymak, S., Recht, B., Soltanolkotabi, M.: Sharp time–data tradeoffs for linear inverse problems. IEEE Trans. Inf. Theory 64(6), 4129–4158 (2018)
47. Oymak, S., Soltanolkotabi, M.: Fast and reliable parameter estimation from nonlinear observations. SIAM J. Optim. 27(4), 2276–2300 (2017)
48. Plan, Y., Vershynin, R.: One-bit compressed sensing by linear programming. Comm. Pure Appl. Math. 66(8), 1275–1297 (2013)
49. Plan, Y., Vershynin, R.: Robust 1-bit compressed sensing and sparse logistic regression: a convex programming approach. IEEE Trans. Inf. Theory 59(1), 482–494 (2013)
50. Plan, Y., Vershynin, R.: Dimension reduction by random hyperplane tessellations. Discrete Comput. Geom. 51(2), 438–461 (2014)
51. Plan, Y., Vershynin, R.: The generalized Lasso with non-linear observations. IEEE Trans. Inf. Theory 62(3), 1528–1537 (2016)
52. Plan, Y., Vershynin, R., Yudovina, E.: High-dimensional estimation with geometric constraints. Inf. Inference 6(1), 1–40 (2016)
53. Richard, E., Obozinski, G.R., Vert, J.P.: Tight convex relaxations for sparse matrix factorization. In: Advances in Neural Information Processing Systems 27, pp. 3284–3292 (2014)
54. Rudelson, M., Vershynin, R.: On sparse reconstruction from Fourier and Gaussian measurements. Comm. Pure Appl. Math. 61(8), 1025–1045 (2008)
55. Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena 60(1–4), 259–268 (1992)
56. Sattar, Y., Oymak, S.: Quickly finding the best linear model in high dimensions via projected gradient descent. IEEE Trans. Signal Process. 68, 818–829 (2020)
57. Stojnic, M.: Various thresholds for \(\ell _1\)-optimization in compressed sensing. Preprint arXiv:0907.3666 (2009)
59. Talagrand, M.: Upper and Lower Bounds for Stochastic Processes. Ergebnisse der Mathematik und ihrer Grenzgebiete, vol. 3. Springer, Berlin Heidelberg (2014)
60. Thrampoulidis, C., Abbasi, E., Hassibi, B.: LASSO with non-linear measurements is equivalent to one with linear measurements. In: Cortes, C., Lawrence, N.D., Lee, D.D., Sugiyama, M., Garnett, R. (eds.) Advances in Neural Information Processing Systems 28, pp. 3402–3410. Curran Associates (2015)
61. Thrampoulidis, C., Rawat, A.S.: Lifting high-dimensional non-linear models with Gaussian regressors. Preprint arXiv:1712.03638 (2017)
62. Thrampoulidis, C., Rawat, A.S.: The generalized Lasso for sub-Gaussian measurements with dithered quantization. IEEE Trans. Inf. Theory 66(4), 2487–2500 (2020)
63. Tropp, J.A.: Convex recovery of a structured signal from independent random linear measurements. In: Pfander, G.E. (ed.) Sampling Theory, a Renaissance, Applied and Numerical Harmonic Analysis, pp. 76–101. Birkhäuser, Cham (2015)
64. Vershynin, R.: High-Dimensional Probability: An Introduction with Applications in Data Science. Cambridge Series in Statistical and Probabilistic Mathematics, vol. 47. Cambridge University Press (2018)
65. Xu, C., Jacques, L.: Quantized compressive sensing with RIP matrices: the benefit of dithering. Inf. Inference 9(3), 543–586 (2020)
66. Yang, Z., Balasubramanian, K., Wang, Z., Liu, H.: Estimating high-dimensional non-Gaussian multiple index models via Stein’s lemma. In: Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (eds.) Advances in Neural Information Processing Systems 30, pp. 6097–6106. Curran Associates (2017)
Metadata
Title
A Unified Approach to Uniform Signal Recovery From Nonlinear Observations
Authors
Martin Genzel
Alexander Stollenwerk
Publication date
30-03-2022
Publisher
Springer US
Published in
Foundations of Computational Mathematics / Issue 3/2023
Print ISSN: 1615-3375
Electronic ISSN: 1615-3383
DOI
https://doi.org/10.1007/s10208-022-09562-y
