2018 | OriginalPaper | Chapter

2. Artificial Neural Networks

Authors: Paolo Massimo Buscema, Giulia Massini, Marco Breda, Weldon A. Lodwick, Francis Newman, Masoud Asadi-Zeydabadi

Published in: Artificial Adaptive Systems Using Auto Contractive Maps

Publisher: Springer International Publishing

Abstract

Artificial Adaptive Systems include Artificial Neural Networks (ANNs, or simply neural networks, as they are commonly known). The philosophy of neural networks is to extract from data the underlying model that relates the data as input/output (domain/range) pairs. This is quite different from the way most mathematical modeling processes operate: they normally impose on the given data a model from which the input-to-output relationship is obtained. For example, a linear model that is a "best fit" in some sense, relating input to output, is such an imposed model. What artificial neural networks impose on the data is an a priori architecture rather than an a priori model, and from the architecture a model is extracted. Clearly, any process that seeks to relate input to output (domain to range) requires a representation of the relationships among the data. The advantage of imposing an architecture rather than a data model is that it allows the model to adapt. Fundamentally, a neural network is represented by its architecture. Thus, we look at the architecture first, followed by a brief introduction of the two approaches to implementing the architecture: supervised and unsupervised neural networks. Recall that Auto-CM, which we discuss in Chap. 3, is an unsupervised ANN, while K-CM, discussed in Chap. 6, is a supervised version of Auto-CM. However, in this chapter we show that supervised and unsupervised neural networks can in fact be viewed within one framework in the case of the linear perceptron. The chapter ends with a brief look at some theoretical considerations.
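The abstract's claim that supervised and unsupervised learning fit one framework for the linear perceptron can be illustrated with a minimal sketch. This is not the authors' implementation: the function name train_linear_perceptron, the delta-rule update, and the choice of the input itself as the target in the unsupervised (auto-associative) case are illustrative assumptions.

```python
import numpy as np

def train_linear_perceptron(X, T, lr=0.01, epochs=100):
    """Delta-rule training of a linear perceptron y = W x.

    Supervised case: T holds externally supplied targets.
    Unsupervised (auto-associative) case: pass T = X, so the same
    code learns to reproduce its own input.
    """
    n_out, n_in = T.shape[1], X.shape[1]
    W = np.zeros((n_out, n_in))
    for _ in range(epochs):
        for x, t in zip(X, T):
            y = W @ x                      # linear activation: y = W x
            W += lr * np.outer(t - y, x)   # delta rule: gradient step on ||t - y||^2
    return W

# Supervised: learn a mapping from inputs to given targets.
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
T = np.array([[1.0], [0.0], [1.0]])
W_supervised = train_linear_perceptron(X, T)

# Unsupervised: identical call, with the inputs themselves as targets.
W_auto = train_linear_perceptron(X, X)
```

The only difference between the two calls is the target matrix: external targets give the supervised case, while T = X gives the unsupervised auto-associative case, which is the sense in which both can be viewed within a single framework for the linear perceptron.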

Metadata
Title
Artificial Neural Networks
Authors
Paolo Massimo Buscema
Giulia Massini
Marco Breda
Weldon A. Lodwick
Francis Newman
Masoud Asadi-Zeydabadi
Copyright Year
2018
DOI
https://doi.org/10.1007/978-3-319-75049-1_2
