Introduction to Optimization

Chapter in: Nonlinear System Identification

Abstract

This chapter introduces the topic of optimization and outlines the key ideas in an intuitive fashion. Furthermore, the distinction between supervised and unsupervised learning is explained, and their strengths and weaknesses are discussed. For both learning approaches, the choice of the loss function and its implications are discussed in detail.
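
As a purely illustrative sketch (the symbols below are introduced here and are not quoted from the chapter), a very common supervised-learning choice is the sum of squared errors between measured outputs y(i) and model outputs \hat{y}(i,\theta):

    I(\theta) = \sum_{i=1}^{N} \bigl( y(i) - \hat{y}(i,\theta) \bigr)^{2},

where \theta collects the model parameters and N is the number of data samples. Alternative exponents or weightings of the error lead to different loss functions, and that choice is what carries the statistical implications mentioned above.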

Notes

  1. Note that taking the pth root in (2.3) just scales the absolute value of the loss function. It is important, however, to let (2.3) converge to the maximum error for p → ∞; see the numerical sketch after these notes.

  2. Some “exotic” distributions (e.g., the Cauchy distribution) exist that do not meet the so-called Lindeberg condition, and therefore their sum is not asymptotically Gaussian distributed. But these exceptions are without practical significance in the context of this book.
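
The limit behavior referred to in note 1 can be checked numerically. The sketch below is a minimal illustration under the assumption that the loss (2.3) is a p-norm-type criterion I_p = (Σ_i |e(i)|^p)^(1/p) over model errors e(i); the error values and variable names are made up for the demo and do not come from the chapter.

    import numpy as np

    # Illustrative error vector e(i); values are arbitrary and chosen only for the demo.
    e = np.array([0.10, -0.40, 0.25, 0.90, -0.33])

    # Assumed p-norm-type loss: I_p = (sum_i |e(i)|^p)^(1/p).
    def loss_p(e, p):
        return np.sum(np.abs(e) ** p) ** (1.0 / p)

    for p in (1, 2, 5, 10, 50, 200):
        print(f"p = {p:3d}:  I_p = {loss_p(e, p):.4f}")

    print(f"max |e(i)| = {np.abs(e).max():.4f}")

As p grows, I_p approaches the maximum absolute error (0.9 here). Taking the pth root is what makes this limit equal the maximum error itself; the raw sum Σ_i |e(i)|^p would instead tend to 0 or to infinity, depending on whether the largest |e(i)| lies below or above 1.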

Copyright information

© 2020 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Nelles, O. (2020). Introduction to Optimization. In: Nonlinear System Identification. Springer, Cham. https://doi.org/10.1007/978-3-030-47439-3_2
