Published in: Optical Memory and Neural Networks 1/2021

01-01-2021

Encoding and Decoding of Recursive Structures in Neural-Symbolic Systems

Author: A. Demidovskij

Abstract

Tensor Product Variable Binding is one way to join the connectionist approach with the symbolic paradigm. It was originally devised to build distributed representations of recursive structures so that neural networks could consume them as input. Structures are an essential part of both formal and natural languages, appearing in syntactic trees, grammars, and semantic interpretation. The human mind handles such problems smoothly at the neural level, in a way that is naturally scalable and robust. This raises the question of whether traditional symbolic algorithms can be translated to the sub-symbolic level, reusing the performance and computational gains of neural networks for general tasks. However, several aspects of Tensor Product Variable Binding have received little attention in published research, especially the construction of a neural architecture that performs computations according to the mathematical model without preliminary training. This paper addresses those implementation aspects. A novel design is proposed for a decoding network that translates a tensor into the corresponding recursive structure with an arbitrary level of nesting. Several subtle issues in encoding such structures into a distributed representation, or tensor, are also addressed. Both the encoding and decoding neural networks are built with the Keras framework and analyzed from the perspective of applied value. The proposed design continues a series of papers dedicated to building a robust bridge between two computational paradigms: connectionist and symbolic.
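The core idea behind Tensor Product Variable Binding can be sketched in a few lines: each filler (symbol) is bound to a structural role by an outer product, the bindings are summed into a single tensor, and, for orthonormal role vectors, a filler is recovered by contracting the tensor with the corresponding role vector. The minimal sketch below uses plain NumPy rather than the Keras networks described in the paper; the specific role and filler vectors are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical orthonormal role vectors: "left child" and "right child" of a node.
r_left = np.array([1.0, 0.0])
r_right = np.array([0.0, 1.0])

# Hypothetical filler vectors for the symbols A and B.
fillers = {
    "A": np.array([1.0, 0.0, 0.0]),
    "B": np.array([0.0, 1.0, 0.0]),
}

# Encoding: bind each filler to its role with an outer product, then superpose
# the bindings by summation into a single tensor of shape (filler_dim, role_dim).
T = np.outer(fillers["A"], r_left) + np.outer(fillers["B"], r_right)

# Decoding: because the roles are orthonormal, contracting the tensor with a
# role vector exactly recovers the filler bound to that role.
decoded_left = T @ r_left    # recovers fillers["A"]
decoded_right = T @ r_right  # recovers fillers["B"]
```

Nested structures are handled by recursion: a subtree's tensor can itself serve as a filler, bound to a role at the next level, which is why the decoding network in the paper must unwind an arbitrary level of nesting.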


Metadata
Title
Encoding and Decoding of Recursive Structures in Neural-Symbolic Systems
Author
A. Demidovskij
Publication date
01-01-2021
Publisher
Pleiades Publishing
Published in
Optical Memory and Neural Networks / Issue 1/2021
Print ISSN: 1060-992X
Electronic ISSN: 1934-7898
DOI
https://doi.org/10.3103/S1060992X21010033
