Computational Functionalism for the Deep Learning Era

Author: Ezequiel López-Rubio

Published in: Minds and Machines, Issue 4/2018 (05-10-2018)


Abstract

Deep learning is a kind of machine learning that takes place in a particular type of artificial neural network called a deep network. Artificial deep networks, which exhibit many similarities with biological ones, have consistently shown human-like performance on many intelligent tasks. This raises the question of whether that performance is caused by those similarities. After reviewing the structure and learning processes of artificial and biological neural networks, we outline two closely related reasons for the success of deep learning: the extraction of successively higher-level features and the multiple-layer structure. Some indications are then given about how this heated debate should be framed. After that, the value of artificial deep networks as models of the human brain is assessed from the similarity view of model representation. Finally, a new version of computational functionalism is proposed that addresses the specificity of deep neural computation better than classic, program-based computational functionalism.
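
As a rough illustration of the two properties the abstract highlights, the following sketch (in Python with NumPy) builds a multiple-layer network in which each layer re-describes the output of the layer below, so that features become successively higher-level with depth. It is entirely illustrative, not code from the paper; all names, layer sizes, and the random initialization standing in for learning are assumptions.

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Elementwise nonlinearity; without it, stacked layers would
    # collapse into a single linear map and depth would add nothing.
    return np.maximum(0.0, x)

class DeepNetwork:
    """A plain feed-forward deep network: a stack of affine maps with
    nonlinearities. Layer k transforms the features produced by layer
    k-1, so representations grow more abstract with depth."""

    def __init__(self, layer_sizes):
        # One (weights, bias) pair per layer; random He-style
        # initialization stands in for a trained network here.
        self.layers = [
            (rng.standard_normal((m, n)) * np.sqrt(2.0 / n), np.zeros(m))
            for n, m in zip(layer_sizes[:-1], layer_sizes[1:])
        ]

    def forward(self, x):
        # Return the feature vector computed at every depth, making
        # the layer-by-layer re-description of the input explicit.
        features = [x]
        for W, b in self.layers:
            x = relu(W @ x + b)
            features.append(x)
        return features

# Hypothetical usage: a raw 784-dimensional input (e.g. a flattened
# image) is re-described by each layer into more compact features.
net = DeepNetwork([784, 256, 64, 10])
feats = net.forward(rng.standard_normal(784))
for depth, f in enumerate(feats):
    print(f"layer {depth}: {f.shape[0]}-dimensional feature vector")

Running the sketch prints the dimensionality of the feature vector at each depth, which makes concrete how the multiple-layer structure and successive feature extraction are two sides of the same design.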

Metadata

Title: Computational Functionalism for the Deep Learning Era
Author: Ezequiel López-Rubio
Publication date: 05-10-2018
Publisher: Springer Netherlands
Published in: Minds and Machines, Issue 4/2018
Print ISSN: 0924-6495
Electronic ISSN: 1572-8641
DOI: https://doi.org/10.1007/s11023-018-9480-7
