Published in: Neural Computing and Applications 7/2019

06.11.2017 | Original Article

KKT condition-based smoothing recurrent neural network for nonsmooth nonconvex optimization in compressed sensing

Authors: Dan Wang, Zhuhong Zhang

Abstract

This work develops a smoothing recurrent neural network (SRNN) based on a smoothing approximation technique and an equivalent formulation of the Karush–Kuhn–Tucker (KKT) conditions. The network handles the \(L_0\)-norm minimization model arising in compressed sensing by first replacing it with a nonconvex, nonsmooth approximation model. The existence, uniqueness, and limit behavior of the network's solutions are established with standard analytical tools. Several nonconvex approximation functions are examined to determine which is best suited to SRNN for sparse signal recovery under different kinds of sensing matrices. Comparative experiments validate that, among the chosen approximation functions, the transformed L1 function (TL1), the logarithm function (Log), and the arctangent penalty function are effective for sparse recovery; SRNN-TL1 is robust and insensitive to the coherence of the sensing matrix, and it is competitive against several existing discrete numerical algorithms and neural network methods for compressed sensing problems.
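To make the penalties named in the abstract concrete, the sketch below implements TL1, Log, and arctangent surrogates for the \(L_0\) norm (under one common normalization in which each penalty equals 1 at \(|x|=1\); the paper's exact parameterizations may differ), together with a toy explicit-Euler gradient flow on a smoothed TL1 objective in the spirit of a continuous-time recurrent network. This is an illustrative assumption only, not the paper's KKT-based SRNN dynamics; the helper `recover` and the parameters `a`, `eps`, `mu`, `lam`, and `dt` are hypothetical names introduced here.

```python
import numpy as np

def smoothed_abs(x, mu=1e-3):
    # Smooth surrogate for |x|: sqrt(x^2 + mu^2) -> |x| as mu -> 0.
    return np.sqrt(x * x + mu * mu)

def tl1(x, a=1.0):
    # Transformed L1: (a + 1)|x| / (a + |x|); tends to the L0 penalty
    # as a -> 0 and to a scaled L1 penalty as a -> infinity.
    ax = np.abs(x)
    return (a + 1.0) * ax / (a + ax)

def log_penalty(x, eps=0.1):
    # Logarithmic penalty, normalized so that log_penalty(1) = 1.
    return np.log(1.0 + np.abs(x) / eps) / np.log(1.0 + 1.0 / eps)

def atan_penalty(x, a=1.0):
    # Arctangent penalty, normalized so that atan_penalty(1) = 1.
    return np.arctan(a * np.abs(x)) / np.arctan(a)

def recover(A, b, lam=0.05, a=1.0, mu=1e-3, dt=1e-3, steps=20000):
    # Toy gradient flow on F(x) = 0.5*||Ax - b||^2 + lam * sum_i TL1(s_i),
    # where s_i = smoothed_abs(x_i). Mimics the continuous-time flavor of
    # a recurrent network; NOT the paper's KKT-based SRNN.
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        s = smoothed_abs(x, mu)
        # d/ds [(a+1)s/(a+s)] = (a+1)a/(a+s)^2, and ds/dx = x/s.
        grad_pen = (a + 1.0) * a / (a + s) ** 2 * (x / s)
        grad = A.T @ (A @ x - b) + lam * grad_pen
        x -= dt * grad
    return x

# Usage on a small synthetic problem: a 3-sparse signal in R^100
# observed through 30 random Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100)) / np.sqrt(30)
x0 = np.zeros(100)
x0[[5, 40, 77]] = [1.0, -0.8, 0.6]
b = A @ x0
x_hat = recover(A, b)
```

Because the smoothed objective is nonconvex, this plain gradient flow offers no recovery guarantee; the paper's analysis (existence, uniqueness, and limit behavior of the network's solutions) is what justifies the actual SRNN.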


Metadata
Title
KKT condition-based smoothing recurrent neural network for nonsmooth nonconvex optimization in compressed sensing
Authors
Dan Wang
Zhuhong Zhang
Publication date
06.11.2017
Publisher
Springer London
Published in
Neural Computing and Applications / Issue 7/2019
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI
https://doi.org/10.1007/s00521-017-3239-6
