Dimension-Independent Rates of Approximation by Neural Networks

Chapter in: Computer Intensive Methods in Control and Signal Processing

Abstract

To characterize sets of functions that can be approximated by neural networks of various types with dimension-independent rates of approximation, we introduce a new norm called variation with respect to a family of functions. We derive its basic properties and give upper estimates for functions satisfying certain integral equations. For a special case, variation with respect to characteristic functions of half-spaces, we give a characterization in terms of orthogonal flows through layers corresponding to discretized hyperplanes. As a result, we describe sets of functions that can be approximated with dimension-independent rates by sigmoidal perceptron networks.
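
As background for the abstract's central notion, here is a minimal sketch assuming the standard definition of G-variation from the related literature (the results of Jones and Barron, and Kůrková's earlier work); it is an illustration, not a statement of the chapter's own results. For a bounded subset G of a normed linear space, the variation of f with respect to G is the Minkowski functional of the closed convex symmetric hull of G:

\[ \|f\|_G = \inf\left\{ c > 0 \;:\; \frac{f}{c} \in \operatorname{cl}\,\operatorname{conv}\left(G \cup -G\right) \right\}. \]

In a Hilbert space, the Maurey-Jones-Barron theorem then gives a dimension-independent rate: if \(\|f\|_G \le B\) and \(\sup_{g \in G} \|g\| \le s\), there exists a linear combination \(f_n = \sum_{i=1}^{n} a_i g_i\) of n elements of G with

\[ \|f - f_n\| \;\le\; \sqrt{\frac{B^2 s^2 - \|f\|^2}{n}} \;=\; O\!\left(n^{-1/2}\right), \]

a rate in which the input dimension does not appear explicitly. Taking G to be the set of characteristic functions of half-spaces (Heaviside perceptrons) gives the variation with respect to half-spaces mentioned in the abstract; since sigmoidal units can approximate the Heaviside function, bounds on this variation translate into dimension-independent rates for sigmoidal perceptron networks.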

Article Note

This work was partially supported by GA AV ČR, grants A2030602 and A2075606.

Copyright information

© 1997 Springer Science+Business Media New York

About this chapter

Cite this chapter

Kůrková, V. (1997). Dimension-Independent Rates of Approximation by Neural Networks. In: Kárný, M., Warwick, K. (eds) Computer Intensive Methods in Control and Signal Processing. Birkhäuser, Boston, MA. https://doi.org/10.1007/978-1-4612-1996-5_16

  • DOI: https://doi.org/10.1007/978-1-4612-1996-5_16

  • Publisher Name: Birkhäuser, Boston, MA

  • Print ISBN: 978-1-4612-7373-8

  • Online ISBN: 978-1-4612-1996-5

  • eBook Packages: Springer Book Archive
