Published in: Neural Computing and Applications 9/2018

20.02.2017 | ICONIP 2015

An analog neural network approach for the least absolute shrinkage and selection operator problem

Authors: Hao Wang, Ching Man Lee, Ruibin Feng, Chi Sing Leung


Abstract

This paper addresses analog optimization for non-differentiable functions. The Lagrange programming neural network (LPNN) approach provides a systematic way to build analog neural networks for constrained optimization problems. Its drawback, however, is that it cannot handle non-differentiable functions. In compressive sampling, one of the key optimization problems is the least absolute shrinkage and selection operator (LASSO), whose constraint is non-differentiable. This paper adopts the hidden-state concept from the locally competitive algorithm (LCA) to formulate an analog model for the LASSO problem, thereby overcoming the non-differentiability limitation of LPNN. Under some conditions, the equilibrium points of the network correspond to the optimal solution of the LASSO, and we prove that these equilibrium points are stable. A simulation study illustrates that the proposed analog model and the traditional digital method achieve similar mean-squared-error performance.
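To make the hidden-state idea concrete: in LCA-style dynamics, an internal state vector evolves continuously while the visible output is obtained through a soft-thresholding nonlinearity, so the non-differentiable L1 term never has to be differentiated directly. Below is an illustrative sketch (not the authors' implementation) that Euler-integrates such dynamics for the LASSO objective min ½‖y − Φa‖² + λ‖a‖₁; the dimensions, step size, and λ are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 50, 100, 5                      # measurements, dictionary size, sparsity
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)        # unit-norm dictionary columns
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
y = Phi @ x_true                          # noiseless sparse measurements

lam = 0.1                                 # L1 regularization weight (assumed)
def soft(u, t):
    """Soft-thresholding: maps hidden state u to visible output a."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

G = Phi.T @ Phi
b = Phi.T @ y
u = np.zeros(m)                           # hidden (internal) state
dt, tau = 0.05, 1.0                       # Euler step and time constant (assumed)
for _ in range(4000):
    a = soft(u, lam)                      # visible state via thresholding
    # Analog dynamics: tau * du/dt = b - u - (G - I) a
    u += (dt / tau) * (b - u - (G @ a - a))

a = soft(u, lam)
obj = 0.5 * np.sum((y - Phi @ a) ** 2) + lam * np.sum(np.abs(a))
```

At equilibrium (du/dt = 0), the pair (u, a) satisfies the LASSO optimality conditions, so the final objective should be far below its starting value of ½‖y‖² (the value at a = 0); in a hardware realization the integration loop is replaced by the physical settling of the circuit.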


Metadata
Title
An analog neural network approach for the least absolute shrinkage and selection operator problem
Authors
Hao Wang
Ching Man Lee
Ruibin Feng
Chi Sing Leung
Publication date
20.02.2017
Publisher
Springer London
Published in
Neural Computing and Applications / Issue 9/2018
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI
https://doi.org/10.1007/s00521-017-2863-5
