Correlation Projection for Analytic Learning of a Classification Network

Authors: Huiping Zhuang, Zhiping Lin, Kar-Ann Toh

Published in: Neural Processing Letters | Issue 6/2021


Abstract

In this paper, we propose a correlation projection network (CPNet) that determines its parameters analytically for pattern classification. The network consists of multiple modules, each containing two layers. We first introduce a label encoding process for each module to facilitate locally supervised learning. Within each module, the first layer then performs what we call the correlation projection process for feature extraction, while the second layer determines its parameters analytically by solving a least squares problem. By introducing a corresponding label decoding process, the proposed CPNet achieves a multi-exit structure, the first of its kind in multilayer analytic learning. Owing to the analytic learning technique, the proposed method only needs to visit the dataset once and is hence significantly faster than the commonly used backpropagation, as verified in the experiments. We also conduct classification tasks on various benchmark datasets, which demonstrate competitive results compared with several state-of-the-art methods.
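The closed-form least-squares step at the heart of analytic learning can be illustrated with a short sketch. The following is a minimal Python example and not the authors' CPNet implementation: it encodes integer class labels as one-hot targets and fits a single layer's weight matrix in closed form via regularized least squares, so the training data is visited only once and no backpropagation is needed. All names here (one_hot, analytic_layer_fit, reg) are illustrative assumptions rather than terms from the paper.

# Minimal sketch of analytic (closed-form) layer learning; not the paper's exact method.
import numpy as np

def one_hot(labels, num_classes):
    # Encode integer class labels as one-hot target vectors.
    targets = np.zeros((labels.shape[0], num_classes))
    targets[np.arange(labels.shape[0]), labels] = 1.0
    return targets

def analytic_layer_fit(features, targets, reg=1e-3):
    # Closed-form ridge least-squares solution: W = (F^T F + reg*I)^{-1} F^T T.
    d = features.shape[1]
    gram = features.T @ features + reg * np.eye(d)
    return np.linalg.solve(gram, features.T @ targets)

# Toy usage: random features standing in for one module's projected features.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))        # 100 samples, 20-dimensional features
y = rng.integers(0, 3, size=100)          # 3 classes
W = analytic_layer_fit(X, one_hot(y, 3))  # single pass over the data, no iterative training
pred = (X @ W).argmax(axis=1)             # decode by taking the highest class score

Because the weights are obtained by solving a linear system rather than by iterative gradient descent, training cost is dominated by a single matrix factorization, which is the source of the speed advantage the abstract refers to.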


Metadata
Title
Correlation Projection for Analytic Learning of a Classification Network
Authors
Huiping Zhuang
Zhiping Lin
Kar-Ann Toh
Publication date
12-07-2021
Publisher
Springer US
Published in
Neural Processing Letters / Issue 6/2021
Print ISSN: 1370-4621
Electronic ISSN: 1573-773X
DOI
https://doi.org/10.1007/s11063-021-10570-2
