
Compressive Hermite Interpolation: Sparse, High-Dimensional Approximation from Gradient-Augmented Measurements

Constructive Approximation

Abstract

We consider the sparse polynomial approximation of a multivariate function on a tensor product domain from samples of both the function and its gradient. When only function samples are prescribed, weighted \(\ell ^1\) minimization has recently been shown to be an effective procedure for computing such approximations. We extend this work to the gradient-augmented case. Our main results show that, for the same asymptotic sample complexity, gradient-augmented measurements achieve an approximation error bound in a stronger Sobolev norm, as opposed to the \(L^2\)-norm in the unaugmented case. For Chebyshev and Legendre polynomial approximations, this sample complexity estimate is algebraic in the sparsity \(s\) and at most logarithmic in the dimension \(d\), thus mitigating the curse of dimensionality to a substantial extent. We also present several numerical experiments illustrating the benefits of gradient information over an equivalent number of function samples alone.
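
To make the setup above concrete, the following display gives a schematic form of a gradient-augmented weighted \(\ell ^1\) minimization problem of the kind described in the abstract; the notation (the multi-index set \(\Lambda \) of size \(N\), the weights \(w_j\), the sample points \(y_1,\ldots ,y_m\), and the tolerance \(\eta \)) is introduced here for illustration only, and the precise normalization of the gradient terms and the choice of weights used in the paper may differ. Writing \(\{\phi _j\}_{j\in \Lambda }\) for a Chebyshev or Legendre orthonormal polynomial basis, one computes coefficients \(z=(z_j)_{j\in \Lambda }\) by solving

\[
\min _{z\in \mathbb {R}^N}\ \sum _{j\in \Lambda } w_j|z_j|
\quad \text {subject to}\quad
\frac{1}{m}\sum _{i=1}^{m}\bigg ( \Big | f(y_i)-\sum _{j\in \Lambda } z_j\phi _j(y_i)\Big |^2
+ \sum _{k=1}^{d}\Big |\partial _k f(y_i)-\sum _{j\in \Lambda } z_j\,\partial _k\phi _j(y_i)\Big |^2 \bigg ) \le \eta ^2,
\]

where \(\eta \ge 0\) bounds the combined misfit to the function and gradient data, and the resulting approximation is \(f\approx \sum _{j\in \Lambda } z_j\phi _j\). Roughly speaking, enforcing fidelity to both the function values and all \(d\) partial derivatives in a single constraint is what allows the error to be controlled in a Sobolev-type norm rather than only in \(L^2\).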


Notes

  1. This is not to be confused with expansions in Hermite polynomials, which we do not address in this paper. See [31] for some work in this direction.

  2. As we discuss in Sect. 6, we use a slightly different method of proof to remove the factor \(\lambda \) in the error bound, at the expense of a somewhat larger log factor.

References

  1. Adcock, B.: Infinite-dimensional \(\ell ^1\) minimization and function approximation from pointwise data. Constr. Approx. 45(3), 345–390 (2017)

  2. Adcock, B.: Infinite-dimensional compressed sensing and function interpolation. Found. Comput. Math. 18(3), 661–701 (2018)

  3. Adcock, B., Bao, A., Brugiapaglia, S.: Correcting for unknown errors in sparse high-dimensional function approximation (2017). arXiv:1711.07622

  4. Adcock, B., Brugiapaglia, S.: Robustness to unknown error in sparse regularization. IEEE Trans. Inform. Theory 64(10), 6638–6661 (2018)

  5. Adcock, B., Brugiapaglia, S., Webster, C.G.: Compressed sensing approaches for polynomial approximation of high-dimensional functions. In: Boche, H., Caire, G., Calderbank, R., März, M., Kutyniok, G., Mathar, R. (eds.) Compressed Sensing and Its Applications, pp. 93–124. Birkhäuser, Berlin (2017)

  6. Alekseev, A.K., Navon, I.M., Zelentsov, M.E.: The estimation of functional uncertainty using polynomial chaos and adjoint equations. Int. J. Numer. Methods Fluids 67, 328–341 (2011)

  7. Bigot, J., Boyer, C., Weiss, P.: An analysis of block sampling strategies in compressed sensing. IEEE Trans. Inform. Theory 62(4), 2125–2139 (2016)

  8. Boyer, C., Bigot, J., Weiss, P.: Compressed sensing with structured sparsity and structured acquisition. Appl. Comput. Harmonic Anal. 46(2), 312–350 (2019)

  9. Candès, E.J., Wakin, M.B.: An introduction to compressive sampling. IEEE Signal Process. Mag. 25(2), 21–30 (2008)

  10. Chkifa, A., Cohen, A., Migliorati, G., Tempone, R.: Discrete least squares polynomial approximation with random evaluations: application to parametric and stochastic elliptic PDEs. ESAIM Math. Model. Numer. Anal. 49(3), 815–837 (2015)

  11. Chkifa, A., Cohen, A., Schwab, C.: High-dimensional adaptive sparse polynomial interpolation and applications to parametric PDEs. Found. Comput. Math. 14, 601–633 (2014)

  12. Chkifa, A., Cohen, A., Schwab, C.: Breaking the curse of dimensionality in sparse polynomial interpolation and applications to parametric PDEs. J. Math. Pures Appl. 103, 400–428 (2015)

  13. Chkifa, A., Dexter, N., Tran, H., Webster, C.G.: Polynomial approximation via compressed sensing of high-dimensional functions on lower sets. Math. Comput. 87(311), 1415–1450 (2018)

  14. Chun, I.-Y., Adcock, B.: Compressed sensing and parallel acquisition. IEEE Trans. Inform. Theory 63(8), 4860–4882 (2017)

  15. Cohen, A., Davenport, M.A., Leviatan, D.: On the stability and accuracy of least squares approximations. Found. Comput. Math. 13, 819–834 (2013)

  16. Cohen, A., Migliorati, G.: Optimal weighted least-squares methods. SMAI J. Comput. Math. 3, 181–203 (2017)

  17. Cohen, A., Migliorati, G.: Multivariate approximation in downward closed polynomial spaces. In: Dick, J., Kuo, F.Y., Woźniakowski, H. (eds.) Contemporary Computational Mathematics: A Celebration of the 80th Birthday of Ian Sloan, pp. 233–282. Springer, Berlin (2018)

  18. Constantine, P.G.: Active Subspaces: Emerging Ideas for Dimension Reduction in Parameter Studies. SIAM, Philadelphia (2015)

  19. Foucart, S., Rauhut, H.: A Mathematical Introduction to Compressive Sensing. Birkhäuser, Berlin (2013)

  20. Gross, D.: Recovering low-rank matrices from few coefficients in any basis. IEEE Trans. Inform. Theory 57(3), 1548–1566 (2011)

  21. Guo, L., Narayan, A., Xiu, D., Zhou, T.: A gradient enhanced \(L^1\) minimization for sparse approximation of polynomial chaos expansions. J. Comput. Phys. 367, 49–64 (2018)

  22. Hadigol, M., Doostan, A.: Least squares polynomial chaos expansion: a review of sampling strategies. Comput. Methods Appl. Mech. Eng. 332, 382–407 (2018)

  23. Hampton, J., Doostan, A.: Compressive sampling of polynomial chaos expansions: convergence analysis and sampling strategies. J. Comput. Phys. 280, 363–386 (2015)

  24. Komkov, V., Choi, K.K., Haug, E.J.: Design Sensitivity Analysis of Structural Systems, vol. 177. Academic Press, Cambridge (1986)

  25. Li, Y., Anitescu, M., Roderick, O., Hickernell, F.: Orthogonal bases for polynomial regression with derivative information in uncertainty quantification. Int. J. Uncertain. Quantif. 1(4), 297–320 (2011)

  26. Lockwood, B., Mavriplis, D.: Gradient-based methods for uncertainty quantification in hypersonic flows. Comput. Fluids 85, 27–38 (2013)

  27. Migliorati, G.: Multivariate Markov-type and Nikolskii-type inequalities for polynomials associated with downward closed multi-index sets. J. Approx. Theory 189, 137–159 (2015)

  28. Migliorati, G., Nobile, F., von Schwerin, E., Tempone, R.: Approximation of quantities of interest in stochastic PDEs by the random discrete \(l^2\) projection on polynomial spaces. SIAM J. Sci. Comput. 35(3), A1440–A1460 (2013)

  29. Migliorati, G., Nobile, F., von Schwerin, E., Tempone, R.: Analysis of the discrete \(l^2\) projection on polynomial spaces with random evaluations. Found. Comput. Math. 14, 419–456 (2014)

  30. Peng, J., Hampton, J., Doostan, A.: A weighted \(\ell _1\)-minimization approach for sparse polynomial chaos expansions. J. Comput. Phys. 267, 92–111 (2014)

  31. Peng, J., Hampton, J., Doostan, A.: On polynomial chaos expansion via gradient-enhanced \(\ell _1\)-minimization. J. Comput. Phys. 310, 440–458 (2016)

  32. Rauhut, H., Ward, R.: Sparse Legendre expansions via \(\ell _1\)-minimization. J. Approx. Theory 164(5), 517–533 (2012)

  33. Rauhut, H., Ward, R.: Interpolation via weighted \(\ell _1\) minimization. Appl. Comput. Harmon. Anal. 40(2), 321–351 (2016)

  34. Seshadri, P., Narayan, A., Mahadevan, S.: Effectively subsampled quadratures for least squares polynomial approximations. SIAM/ASA J. Uncertain. Quantif. 5, 1003–1023 (2017)

  35. Szegő, G.: Orthogonal Polynomials. American Mathematical Society, Providence (1975)

  36. Tang, G.: Methods for high dimensional uncertainty quantification: regularization, sensitivity analysis, and derivative enhancement. PhD thesis, Stanford University (2013)

  37. van den Berg, E., Friedlander, M.P.: SPGL1: A solver for large-scale sparse reconstruction (June 2007). http://www.cs.ubc.ca/~mpf/spgl1/. Accessed Jan 2016

  38. van den Berg, E., Friedlander, M.P.: Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 31(2), 890–912 (2008)

  39. Xu, Z., Zhou, T.: A gradient enhanced \(L^1\) recovery for sparse Fourier expansions (Preprint) (2017)

  40. Yan, L., Guo, L., Xiu, D.: Stochastic collocation algorithms using \(\ell _1\)-minimization. Int. J. Uncertain. Quantif. 2(3), 279–293 (2012)

Acknowledgements

This work is supported in part by NSERC Grant 611675 and an Alfred P. Sloan Research Fellowship. Yi Sui also acknowledges support from an NSERC PGS D scholarship.

Author information

Corresponding author

Correspondence to Ben Adcock.

Additional information

Communicated by Karlheinz Groechenig.

About this article

Cite this article

Adcock, B., Sui, Y. Compressive Hermite Interpolation: Sparse, High-Dimensional Approximation from Gradient-Augmented Measurements. Constr Approx 50, 167–207 (2019). https://doi.org/10.1007/s00365-019-09467-0
