
2019 | OriginalPaper | Chapter

4. Perceptrons

Authors: Ke-Lin Du, M. N. S. Swamy

Published in: Neural Networks and Statistical Learning

Publisher: Springer London


Abstract

This chapter introduces the simplest form of neural network, the perceptron. The perceptron holds a historic position in the disciplines of neural networks and machine learning. The one-neuron perceptron and the single-layer perceptron are described, together with various training methods.
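
The classical perceptron update learns from one misclassified sample at a time. The following is a minimal sketch in Python/NumPy; the toy AND dataset, the function name train_perceptron, and the hyperparameters are illustrative assumptions, not taken from the chapter:

import numpy as np

def train_perceptron(X, y, epochs=100, lr=1.0):
    # Rosenblatt's rule for labels in {-1, +1}: whenever sample i is
    # misclassified, update w <- w + lr * y_i * x_i. Training stops early
    # once every sample is classified correctly, which is guaranteed to
    # happen for linearly separable data.
    X = np.hstack([X, np.ones((X.shape[0], 1))])  # absorb the bias into w
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:  # wrong side of (or on) the hyperplane
                w += lr * yi * xi   # move the hyperplane toward xi
                errors += 1
        if errors == 0:
            break
    return w

# Toy usage: the logical AND function is linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w = train_perceptron(X, y)
print(np.sign(np.hstack([X, np.ones((4, 1))]) @ w))  # matches y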


Metadata

Title: Perceptrons
Authors: Ke-Lin Du, M. N. S. Swamy
Copyright Year: 2019
Publisher: Springer London
DOI: https://doi.org/10.1007/978-1-4471-7452-3_4
