
2018 | Original Paper | Book Chapter

2. Artificial Neural Networks

Authors: Paolo Massimo Buscema, Giulia Massini, Marco Breda, Weldon A. Lodwick, Francis Newman, Masoud Asadi-Zeydabadi

Published in: Artificial Adaptive Systems Using Auto Contractive Maps

Publisher: Springer International Publishing


Abstract

Artificial Adaptive Systems include Artificial Neural Networks (ANNs, or simply neural networks as they are commonly known). The philosophy of neural networks is to extract from data the underlying model that relates the data as input/output (domain/range) pairs. This is quite different from the way most mathematical modeling processes operate. Most mathematical modeling processes impose on the given data a model from which the input-to-output relationship is obtained. For example, a linear model that is a “best fit” in some sense is such an imposed model. What artificial neural networks impose on the data is an a priori architecture rather than an a priori model; from the architecture, a model is extracted. Clearly, any process that seeks to relate input to output (domain to range) requires a representation of the relationships among the data. The advantage of imposing an architecture rather than a data model is that it allows the model to adapt. Fundamentally, a neural network is represented by its architecture. Thus, we look at the architecture first, followed by a brief introduction to the two types of approaches for implementing the architecture—supervised and unsupervised neural networks. Recall that Auto-CM, which we discuss in Chap. 3, is an unsupervised ANN, while K-CM, discussed in Chap. 6, is a supervised version of Auto-CM. However, in this chapter, we show that supervised and unsupervised neural networks can in fact be viewed within one framework in the case of the linear perceptron. The chapter ends with a brief look at some theoretical considerations.
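The abstract's point that supervised and unsupervised learning share one framework for the linear perceptron can be illustrated with a minimal sketch (not code from the chapter; the function name and hyperparameters are illustrative): the same delta-rule training routine becomes unsupervised simply by using the inputs themselves as targets (an auto-associative network).

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear_perceptron(X, T, lr=0.1, epochs=200):
    """Delta-rule (gradient) training of a single linear layer W mapping X -> T."""
    n_in, n_out = X.shape[1], T.shape[1]
    W = np.zeros((n_in, n_out))
    for _ in range(epochs):
        Y = X @ W                              # linear output
        W += lr * X.T @ (T - Y) / len(X)       # step down the squared-error gradient
    return W

# Supervised case: learn a known linear map y = x @ A from (input, target) pairs.
A = np.array([[2.0], [-1.0]])
X = rng.normal(size=(100, 2))
W_sup = train_linear_perceptron(X, X @ A)      # converges toward A

# Unsupervised (auto-associative) case: the targets ARE the inputs,
# so the identical routine learns to reconstruct X (here, the identity map).
W_auto = train_linear_perceptron(X, X)
```

The design point is that nothing in the update rule changes between the two cases; only the choice of target matrix `T` distinguishes supervised from unsupervised training.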


Metadata
Title
Artificial Neural Networks
Authors
Paolo Massimo Buscema
Giulia Massini
Marco Breda
Weldon A. Lodwick
Francis Newman
Masoud Asadi-Zeydabadi
Copyright Year
2018
DOI
https://doi.org/10.1007/978-3-319-75049-1_2
