Published in: Neural Computing and Applications 9/2020

10.03.2020 | S.I.: Emerging Trends of Applied Neural Computation - E_TRAINCO

Evaluating graph resilience with tensor stack networks: a Keras implementation

Authors: Georgios Drakopoulos, Phivos Mylonas


Abstract

In communication networks, resilience or structural coherency, namely the ability to maintain full connectivity even after some data links are lost for an indefinite time, is a major design consideration. Evaluating resilience is computationally challenging, since it often requires examining a prohibitively large number of connections or node combinations, depending on the definition of structural coherency used. To study resilience, communication systems are treated at an abstract level as graphs, where the existence of an edge depends heavily on the local connectivity properties between its two endpoints. Once the graph is derived, its resilience is evaluated by a tensor stack network (TSN). TSNs are an emerging deep learning classification methodology for big data that can be expressed either as stacked vectors or as matrices, such as images or oversampled data from multiple-input and multiple-output (MIMO) digital communication systems. As their collective name suggests, the architecture of TSNs is based on tensors, namely higher-dimensional generalizations of vectors and matrices, which simulate the simultaneous training of a cluster of ordinary multilayer feedforward neural networks (FFNNs). In the TSN structure the FFNNs are also interconnected and thus, at certain steps of the training process, they learn from each other's errors. An additional advantage of the TSN training process is that it is regularized, resulting in parsimonious classifiers. The TSNs are trained to evaluate how resilient a graph is, where the true structural strength is assessed through three established resilience metrics, namely the Estrada index, the odd Estrada index, and the clustering coefficient. Although modelling the communication system exclusively in structural terms is function oblivious, the approach can be applied to virtually any type of communication network, independently of the underlying technology.
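The three ground-truth metrics named above have standard graph-theoretic definitions: the Estrada index is the sum of the exponentials of the adjacency eigenvalues, the odd Estrada index replaces the exponential with the hyperbolic sine (so only odd-length closed walks contribute), and the clustering coefficient measures the density of triangles around each node. The sketch below is a minimal illustration of these standard definitions using NetworkX and NumPy; it is not the authors' implementation.

```python
import numpy as np
import networkx as nx

def estrada_index(G):
    """Estrada index: sum of exp(lambda_i) over the adjacency eigenvalues,
    equivalently the trace of the matrix exponential of the adjacency matrix."""
    lam = np.linalg.eigvalsh(nx.to_numpy_array(G))
    return float(np.sum(np.exp(lam)))

def odd_estrada_index(G):
    """Odd Estrada index: sum of sinh(lambda_i), which retains only the
    contributions of closed walks of odd length."""
    lam = np.linalg.eigvalsh(nx.to_numpy_array(G))
    return float(np.sum(np.sinh(lam)))

# Small demonstration on a random graph.
G = nx.erdos_renyi_graph(50, 0.2, seed=1)
ee = estrada_index(G)
oee = odd_estrada_index(G)
cc = nx.average_clustering(G)  # average local clustering coefficient
```

Note that all three metrics require global spectral or per-node triangle computations, which is the cost the abstract says the trained TSN avoids at inference time.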
The classification achieved by four TSN configurations is evaluated through six metrics, including the F1 score as well as the type I and type II errors, derived from the corresponding contingency tables. Moreover, the effects of sparsifying the synaptic weights resulting from the training process are explored for various thresholds. Results indicate that the proposed method achieves very high accuracy while being considerably faster than computing any of the three resilience metrics directly. Concerning sparsification, beyond a certain threshold the accuracy drops, meaning that the TSNs cannot be sparsified further; in that respect, their training is already very efficient.
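The abstract does not specify the exact sparsification rule; a common and minimal assumption is magnitude thresholding, where trained weights whose absolute value falls below a threshold are zeroed. The sketch below illustrates that assumed mechanism and how one would sweep thresholds to find the point where accuracy starts to drop; the function name `sparsify` is illustrative, not from the paper.

```python
import numpy as np

def sparsify(W, tau):
    """Zero out synaptic weights with magnitude below threshold tau.

    Returns the thresholded weight matrix and the achieved sparsity,
    i.e. the fraction of entries set to zero.
    """
    W_sparse = np.where(np.abs(W) >= tau, W, 0.0)
    sparsity = 1.0 - np.count_nonzero(W_sparse) / W_sparse.size
    return W_sparse, sparsity

# Example sweep over thresholds, as one would do before re-evaluating
# classification accuracy at each sparsity level.
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(64, 32))  # stand-in for trained weights
for tau in (0.0, 0.05, 0.1, 0.2):
    _, s = sparsify(W, tau)
    # s grows monotonically with tau; past some tau, accuracy would degrade
```

The abstract's finding that accuracy collapses past a threshold suggests the regularized training already concentrates the useful signal in relatively few weights.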


Metadata
Title
Evaluating graph resilience with tensor stack networks: a Keras implementation
Authors
Georgios Drakopoulos
Phivos Mylonas
Publication date
10.03.2020
Publisher
Springer London
Published in
Neural Computing and Applications / Issue 9/2020
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI
https://doi.org/10.1007/s00521-020-04790-1
