ABSTRACT
Super-resolution is a fundamental task in imaging, where the goal is to extract fine-grained structure from coarse-grained measurements. Here we are interested in a popular mathematical abstraction of this problem that has been widely studied in the statistics, signal processing and machine learning communities. We exactly resolve the threshold at which noisy super-resolution is possible. In particular, we establish a sharp phase transition for the relationship between the cutoff frequency (m) and the separation (Δ). If m > 1/Δ + 1, our estimator converges to the true values at an inverse polynomial rate in terms of the magnitude of the noise. Conversely, when m < (1-ε)/Δ, no estimator can distinguish between a particular pair of Δ-separated signals even if the magnitude of the noise is exponentially small. Our results involve making novel connections between extremal functions and the spectral properties of Vandermonde matrices. We establish a sharp phase transition for their condition number, which in turn allows us to give the first noise tolerance bounds for the matrix pencil method. Moreover, we show that our methods can be interpreted as giving preconditioners for Vandermonde matrices, and we use this observation to design faster algorithms for super-resolution. We believe that these ideas may find further applications in designing faster algorithms for other basic tasks in signal processing.
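To make the recovery problem concrete, the following is a minimal NumPy sketch of the matrix pencil method (in the spirit of Hua and Sarkar) operating in the regime m > 1/Δ + 1 that the abstract identifies as recoverable. It is an illustrative sketch, not the paper's exact estimator: the frequencies, amplitudes, noise level, and pencil parameter below are all made-up values, and the pseudoinverse truncation threshold is an assumption chosen to discard noise-level singular values.

```python
import numpy as np

# Illustrative parameters (not from the paper): k = 3 frequencies on the
# unit circle with separation Delta = 0.25, so 1/Delta + 1 = 5 < m = 12.
rng = np.random.default_rng(0)
freqs = np.array([0.1, 0.35, 0.7])   # true frequencies in [0, 1)
amps = np.array([1.0, 0.8, 1.2])     # true amplitudes
k, m = len(freqs), 12

# Noisy samples of f(s) = sum_j amps[j] * exp(2*pi*1j*freqs[j]*s), s = 0..m-1.
# V is the m x k Vandermonde matrix whose condition number governs stability.
V = np.exp(2j * np.pi * np.outer(np.arange(m), freqs))
samples = V @ amps + 1e-8 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

# Build a Hankel matrix from the samples and split it into a shifted pencil
# (A, B); the nonzero generalized eigenvalues of A x = lambda B x are the
# poles exp(2*pi*1j*freqs[j]).
L = m // 2
H = np.array([[samples[i + j] for j in range(m - L)] for i in range(L + 1)])
A, B = H[1:, :], H[:-1, :]

# Solve the pencil via a noise-truncated pseudoinverse; rcond (an assumed
# threshold) discards singular values swamped by the noise.
eigvals = np.linalg.eigvals(np.linalg.pinv(B, rcond=1e-6) @ A)

# Keep the k eigenvalues nearest the unit circle and read off their phases.
eigvals = sorted(eigvals, key=lambda z: abs(abs(z) - 1.0))[:k]
recovered = np.sort(np.mod(np.angle(np.array(eigvals)) / (2 * np.pi), 1.0))
```

With the separation well above 1/(m-1), the Vandermonde matrix V is well conditioned and `recovered` matches `freqs` up to an error polynomial in the noise magnitude; as the separation shrinks below 1/m, the condition number of V blows up and the same procedure becomes unstable, which is the phase transition the abstract describes.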
Index Terms
- Super-resolution, Extremal Functions and the Condition Number of Vandermonde Matrices