
2020 | Original Paper | Book Chapter

4. Feedforward Neural Networks

Authors: Matthew F. Dixon, Igor Halperin, Paul Bilokon

Published in: Machine Learning in Finance

Publisher: Springer International Publishing


Abstract

This chapter provides a more in-depth description of supervised learning, deep learning, and neural networks—presenting the foundational mathematical and statistical learning concepts and explaining how they relate to real-world examples in trading, risk management, and investment management. These applications pose challenges for forecasting and model design and recur as a theme throughout the book. This chapter moves toward a more engineering-style exposition of neural networks, applying concepts from the previous chapters to elucidate various model design choices.
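As a concrete illustration of the objects this chapter studies, the sketch below implements a forward pass through a fully connected feedforward network in plain NumPy. It is a minimal sketch, not the chapter's own code: the layer sizes, ReLU hidden activation, and linear (regression-style) output are illustrative assumptions.

```python
import numpy as np

def relu(x):
    """ReLU activation: max(0, x), applied elementwise."""
    return np.maximum(0.0, x)

def feedforward(x, weights, biases):
    """Forward pass of a fully connected feedforward network.

    Each hidden layer computes z = W a + b followed by ReLU;
    the final layer is left linear (a regression-style output).
    """
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(W @ a + b)
    W_out, b_out = weights[-1], biases[-1]
    return W_out @ a + b_out

# Illustrative shapes: 3 inputs -> 5 hidden units -> 1 output.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((5, 3)), rng.standard_normal((1, 5))]
biases = [np.zeros(5), np.zeros(1)]
print(feedforward(rng.standard_normal(3), weights, biases))
```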


Appendices
Accessible only with authorization
Footnotes
1
Note that there is a potential degeneracy in this case: there may exist "flat directions"—hyper-surfaces in the parameter space along which the loss function takes exactly the same value.
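For instance, permuting the hidden units of a one-hidden-layer network (permuting the rows of the first weight matrix and its bias together with the columns of the second weight matrix) leaves the network function, and hence the loss, unchanged, so distinct parameter vectors attain the same loss. The NumPy sketch below checks this numerically; the architecture and tanh activation are assumptions made for illustration.

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    """One-hidden-layer network with tanh activation."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((1, 4)), rng.standard_normal(1)

# Permute the hidden units: apply the same permutation to the rows
# of W1 (and b1) and to the columns of W2.
perm = rng.permutation(4)
W1p, b1p = W1[perm], b1[perm]
W2p = W2[:, perm]

x = rng.standard_normal(3)
# Identical outputs -> identical loss for distinct parameter vectors.
print(np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1p, b1p, W2p, b2)))  # True
```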
 
2
There is some redundancy in the construction of the network, and around 50 units are needed.
 
3
The parameterized softplus function \(\sigma (x;t)=\frac {1}{t}\ln (1+\exp \{tx\})\), with a model parameter \(t \gg 1\), converges to the ReLU function in the limit \(t \to \infty \).
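A quick numerical check of this limit (a sketch; the grid of t values and test points are arbitrary choices):

```python
import numpy as np

def softplus_t(x, t):
    """Parameterized softplus (1/t) * log(1 + exp(t*x))."""
    # logaddexp(0, t*x) = log(1 + exp(t*x)), computed stably for large t*x.
    return np.logaddexp(0.0, t * x) / t

x = np.linspace(-2.0, 2.0, 9)
relu = np.maximum(0.0, x)
for t in (1.0, 10.0, 100.0):
    err = np.max(np.abs(softplus_t(x, t) - relu))
    print(f"t={t:6.1f}  max |softplus_t - ReLU| = {err:.4f}")
```

The maximum gap is attained at x = 0, where it equals \(\ln 2 / t\), so the error shrinks at rate 1/t as t grows.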
 
Literatur
Zurück zum Zitat Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., et al. (2016). Tensor flow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation, OSDI’16 (pp. 265–283). Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., et al. (2016). Tensor flow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation, OSDI’16 (pp. 265–283).
Zurück zum Zitat Adams, R., Wallach, H., & Ghahramani, Z. (2010). Learning the structure of deep sparse graphical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (pp. 1–8). Adams, R., Wallach, H., & Ghahramani, Z. (2010). Learning the structure of deep sparse graphical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (pp. 1–8).
Zurück zum Zitat Andrews, D. (1989). A unified theory of estimation and inference for nonlinear dynamic models a.r. gallant and h. white. Econometric Theory,5(01), 166–171. Andrews, D. (1989). A unified theory of estimation and inference for nonlinear dynamic models a.r. gallant and h. white. Econometric Theory,5(01), 166–171.
Zurück zum Zitat Baillie, R. T., & Kapetanios, G. (2007). Testing for neglected nonlinearity in long-memory models. Journal of Business & Economic Statistics,25(4), 447–461.MathSciNet Baillie, R. T., & Kapetanios, G. (2007). Testing for neglected nonlinearity in long-memory models. Journal of Business & Economic Statistics,25(4), 447–461.MathSciNet
Zurück zum Zitat Barber, D., & Bishop, C. M. (1998). Ensemble learning in Bayesian neural networks. Neural Networks and Machine Learning,168, 215–238.MATH Barber, D., & Bishop, C. M. (1998). Ensemble learning in Bayesian neural networks. Neural Networks and Machine Learning,168, 215–238.MATH
Zurück zum Zitat Bartlett, P., Harvey, N., Liaw, C., & Mehrabian, A. (2017a). Nearly-tight VC-dimension bounds for piecewise linear neural networks. CoRR,abs/1703.02930. Bartlett, P., Harvey, N., Liaw, C., & Mehrabian, A. (2017a). Nearly-tight VC-dimension bounds for piecewise linear neural networks. CoRR,abs/1703.02930.
Zurück zum Zitat Bartlett, P., Harvey, N., Liaw, C., & Mehrabian, A. (2017b). Nearly-tight VC-dimension bounds for piecewise linear neural networks. CoRR,abs/1703.02930. Bartlett, P., Harvey, N., Liaw, C., & Mehrabian, A. (2017b). Nearly-tight VC-dimension bounds for piecewise linear neural networks. CoRR,abs/1703.02930.
Zurück zum Zitat Bengio, Y., Roux, N. L., Vincent, P., Delalleau, O., & Marcotte, P. (2006). Convex neural networks. In Y. Weiss, Schölkopf, B., & Platt, J. C. (Eds.), Advances in neural information processing systems 18 (pp. 123–130). MIT Press. Bengio, Y., Roux, N. L., Vincent, P., Delalleau, O., & Marcotte, P. (2006). Convex neural networks. In Y. Weiss, Schölkopf, B., & Platt, J. C. (Eds.), Advances in neural information processing systems 18 (pp. 123–130). MIT Press.
Zurück zum Zitat Bishop, C. M. (2006). Pattern recognition and machine learning (information science and statistics). Berlin, Heidelberg: Springer-Verlag.MATH Bishop, C. M. (2006). Pattern recognition and machine learning (information science and statistics). Berlin, Heidelberg: Springer-Verlag.MATH
Zurück zum Zitat Blundell, C., Cornebise, J., Kavukcuoglu, K., & Wierstra, D. (2015a, May). Weight uncertainty in neural networks. arXiv:1505.05424 [cs, stat]. Blundell, C., Cornebise, J., Kavukcuoglu, K., & Wierstra, D. (2015a, May). Weight uncertainty in neural networks. arXiv:1505.05424 [cs, stat].
Zurück zum Zitat Blundell, C., Cornebise, J., Kavukcuoglu, K., & Wierstra, D. (2015b). Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424. Blundell, C., Cornebise, J., Kavukcuoglu, K., & Wierstra, D. (2015b). Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424.
Zurück zum Zitat Chataigner, Crepe, & Dixon. (2020). Deep local volatility. Chataigner, Crepe, & Dixon. (2020). Deep local volatility.
Zurück zum Zitat Chen, J., Flood, M. D., & Sowers, R. B. (2017). Measuring the unmeasurable: an application of uncertainty quantification to treasury bond portfolios. Quantitative Finance,17(10), 1491–1507.MathSciNetMATH Chen, J., Flood, M. D., & Sowers, R. B. (2017). Measuring the unmeasurable: an application of uncertainty quantification to treasury bond portfolios. Quantitative Finance,17(10), 1491–1507.MathSciNetMATH
Zurück zum Zitat Dean, J., Corrado, G., Monga, R., Chen, K., Devin, M., Mao, M., et al. (2012). Large scale distributed deep networks. In Advances in neural information processing systems (pp. 1223–1231). Dean, J., Corrado, G., Monga, R., Chen, K., Devin, M., Mao, M., et al. (2012). Large scale distributed deep networks. In Advances in neural information processing systems (pp. 1223–1231).
Zurück zum Zitat Dixon, M., Klabjan, D., & Bang, J. H. (2016). Classification-based financial markets prediction using deep neural networks. CoRR,abs/1603.08604. Dixon, M., Klabjan, D., & Bang, J. H. (2016). Classification-based financial markets prediction using deep neural networks. CoRR,abs/1603.08604.
Zurück zum Zitat Feng, G., He, J., & Polson, N. G. (2018, Apr). Deep learning for predicting asset returns. arXiv e-prints, arXiv:1804.09314. Feng, G., He, J., & Polson, N. G. (2018, Apr). Deep learning for predicting asset returns. arXiv e-prints, arXiv:1804.09314.
Zurück zum Zitat Frey, B. J., & Hinton, G. E. (1999). Variational learning in nonlinear Gaussian belief networks. Neural Computation,11(1), 193–213. Frey, B. J., & Hinton, G. E. (1999). Variational learning in nonlinear Gaussian belief networks. Neural Computation,11(1), 193–213.
Zurück zum Zitat Gal, Y. (2015). A theoretically grounded application of dropout in recurrent neural networks. arXiv:1512.05287. Gal, Y. (2015). A theoretically grounded application of dropout in recurrent neural networks. arXiv:1512.05287.
Zurück zum Zitat Gal, Y. (2016). Uncertainty in deep learning. Ph.D. thesis, University of Cambridge. Gal, Y. (2016). Uncertainty in deep learning. Ph.D. thesis, University of Cambridge.
Zurück zum Zitat Gal, Y., & Ghahramani, Z. (2016). Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In international Conference on Machine Learning (pp. 1050–1059). Gal, Y., & Ghahramani, Z. (2016). Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In international Conference on Machine Learning (pp. 1050–1059).
Zurück zum Zitat Gallant, A., & White, H. (1988, July). There exists a neural network that does not make avoidable mistakes. In IEEE 1988 International Conference on Neural Networks (vol.1 ,pp. 657–664). Gallant, A., & White, H. (1988, July). There exists a neural network that does not make avoidable mistakes. In IEEE 1988 International Conference on Neural Networks (vol.1 ,pp. 657–664).
Zurück zum Zitat Graves, A. (2011). Practical variational inference for neural networks. In Advances in Neural Information Processing Systems (pp. 2348–2356). Graves, A. (2011). Practical variational inference for neural networks. In Advances in Neural Information Processing Systems (pp. 2348–2356).
Zurück zum Zitat Hastie, T., Tibshirani, R., & Friedman, J. (2009). The elements of statistical learning: data mining, inference and prediction. Springer.MATH Hastie, T., Tibshirani, R., & Friedman, J. (2009). The elements of statistical learning: data mining, inference and prediction. Springer.MATH
Zurück zum Zitat Heaton, J. B., Polson, N. G., & Witte, J. H. (2017). Deep learning for finance: deep portfolios. Applied Stochastic Models in Business and Industry,33(1), 3–12.MathSciNetMATH Heaton, J. B., Polson, N. G., & Witte, J. H. (2017). Deep learning for finance: deep portfolios. Applied Stochastic Models in Business and Industry,33(1), 3–12.MathSciNetMATH
Zurück zum Zitat Hernández-Lobato, J. M., & Adams, R. (2015). Probabilistic backpropagation for scalable learning of Bayesian neural networks. In International Conference on Machine Learning (pp. 1861–1869). Hernández-Lobato, J. M., & Adams, R. (2015). Probabilistic backpropagation for scalable learning of Bayesian neural networks. In International Conference on Machine Learning (pp. 1861–1869).
Zurück zum Zitat Hinton, G. E., & Sejnowski, T. J. (1983). Optimal perceptual inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 448–453). IEEE New York. Hinton, G. E., & Sejnowski, T. J. (1983). Optimal perceptual inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 448–453). IEEE New York.
Zurück zum Zitat Hinton, G. E., & Van Camp, D. (1993). Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory (pp. 5–13). ACM. Hinton, G. E., & Van Camp, D. (1993). Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory (pp. 5–13). ACM.
Zurück zum Zitat Hornik, K., Stinchcombe, M., & White, H. (1989, July). Multilayer feedforward networks are universal approximators. Neural Netw.,2(5), 359–366.MATH Hornik, K., Stinchcombe, M., & White, H. (1989, July). Multilayer feedforward networks are universal approximators. Neural Netw.,2(5), 359–366.MATH
Zurück zum Zitat Horvath, B., Muguruza, A., & Tomas, M. (2019, Jan). Deep learning volatility. arXiv e-prints, arXiv:1901.09647. Horvath, B., Muguruza, A., & Tomas, M. (2019, Jan). Deep learning volatility. arXiv e-prints, arXiv:1901.09647.
Zurück zum Zitat Hutchinson, J. M., Lo, A. W., & Poggio, T. (1994). A nonparametric approach to pricing and hedging derivative securities via learning networks. The Journal of Finance,49(3), 851–889. Hutchinson, J. M., Lo, A. W., & Poggio, T. (1994). A nonparametric approach to pricing and hedging derivative securities via learning networks. The Journal of Finance,49(3), 851–889.
Zurück zum Zitat Kingma, D. P., & Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114. Kingma, D. P., & Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
Zurück zum Zitat Kuan, C.-M., & White, H. (1994). Artificial neural networks: an econometric perspective. Econometric Reviews,13(1), 1–91.MathSciNetMATH Kuan, C.-M., & White, H. (1994). Artificial neural networks: an econometric perspective. Econometric Reviews,13(1), 1–91.MathSciNetMATH
Zurück zum Zitat Lawrence, N. (2005). Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research,6(Nov), 1783–1816.MathSciNetMATH Lawrence, N. (2005). Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research,6(Nov), 1783–1816.MathSciNetMATH
Zurück zum Zitat Liang, S., & Srikant, R. (2016). Why deep neural networks? CoRRabs/1610.04161. Liang, S., & Srikant, R. (2016). Why deep neural networks? CoRRabs/1610.04161.
Zurück zum Zitat Lo, A. (1994). Neural networks and other nonparametric techniques in economics and finance. In AIMR Conference Proceedings, Number 9. Lo, A. (1994). Neural networks and other nonparametric techniques in economics and finance. In AIMR Conference Proceedings, Number 9.
Zurück zum Zitat MacKay, D. J. (1992a). A practical Bayesian framework for backpropagation networks. Neural Computation,4(3), 448–472. MacKay, D. J. (1992a). A practical Bayesian framework for backpropagation networks. Neural Computation,4(3), 448–472.
Zurück zum Zitat MacKay, D. J. C. (1992b, May). A practical Bayesian framework for backpropagation networks. Neural Computation,4(3), 448–472. MacKay, D. J. C. (1992b, May). A practical Bayesian framework for backpropagation networks. Neural Computation,4(3), 448–472.
Zurück zum Zitat Martin, C. H., & Mahoney, M. W. (2018). Implicit self-regularization in deep neural networks: Evidence from random matrix theory and implications for learning. CoRRabs/1810.01075. Martin, C. H., & Mahoney, M. W. (2018). Implicit self-regularization in deep neural networks: Evidence from random matrix theory and implications for learning. CoRRabs/1810.01075.
Zurück zum Zitat Mhaskar, H., Liao, Q., & Poggio, T. A. (2016). Learning real and Boolean functions: When is deep better than shallow. CoRRabs/1603.00988. Mhaskar, H., Liao, Q., & Poggio, T. A. (2016). Learning real and Boolean functions: When is deep better than shallow. CoRRabs/1603.00988.
Zurück zum Zitat Mnih, A., & Gregor, K. (2014). Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030. Mnih, A., & Gregor, K. (2014). Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030.
Zurück zum Zitat Montúfar, G., Pascanu, R., Cho, K., & Bengio, Y. (2014, Feb). On the number of linear regions of deep neural networks. arXiv e-prints, arXiv:1402.1869. Montúfar, G., Pascanu, R., Cho, K., & Bengio, Y. (2014, Feb). On the number of linear regions of deep neural networks. arXiv e-prints, arXiv:1402.1869.
Zurück zum Zitat Mullainathan, S., & Spiess, J. (2017). Machine learning: An applied econometric approach. Journal of Economic Perspectives,31(2), 87–106. Mullainathan, S., & Spiess, J. (2017). Machine learning: An applied econometric approach. Journal of Economic Perspectives,31(2), 87–106.
Zurück zum Zitat Neal, R. M. (1990). Learning stochastic feedforward networks, Vol. 64. Technical report, Department of Computer Science, University of Toronto. Neal, R. M. (1990). Learning stochastic feedforward networks, Vol. 64. Technical report, Department of Computer Science, University of Toronto.
Zurück zum Zitat Neal, R. M. (1992). Bayesian training of backpropagation networks by the hybrid Monte Carlo method. Technical report, CRG-TR-92-1, Dept. of Computer Science, University of Toronto. Neal, R. M. (1992). Bayesian training of backpropagation networks by the hybrid Monte Carlo method. Technical report, CRG-TR-92-1, Dept. of Computer Science, University of Toronto.
Zurück zum Zitat Neal, R. M. (2012). Bayesian learning for neural networks, Vol. 118. Springer Science & Business Media. bibtex: aneal2012bayesian. Neal, R. M. (2012). Bayesian learning for neural networks, Vol. 118. Springer Science & Business Media. bibtex: aneal2012bayesian.
Zurück zum Zitat Nesterov, Y. (2013). Introductory lectures on convex optimization: A basic course, Volume 87. Springer Science & Business Media. Nesterov, Y. (2013). Introductory lectures on convex optimization: A basic course, Volume 87. Springer Science & Business Media.
Zurück zum Zitat Poggio, T. (2016). Deep learning: mathematics and neuroscience. A sponsored supplement to sciencebrain-inspired intelligent robotics: The intersection of robotics and neuroscience, pp. 9–12. Poggio, T. (2016). Deep learning: mathematics and neuroscience. A sponsored supplement to sciencebrain-inspired intelligent robotics: The intersection of robotics and neuroscience, pp. 9–12.
Zurück zum Zitat Polson, N., & Rockova, V. (2018, Mar). Posterior concentration for sparse deep learning. arXiv e-prints, arXiv:1803.09138. Polson, N., & Rockova, V. (2018, Mar). Posterior concentration for sparse deep learning. arXiv e-prints, arXiv:1803.09138.
Zurück zum Zitat Polson, N. G., Willard, B. T., & Heidari, M. (2015). A statistical theory of deep learning via proximal splitting. arXiv:1509.06061. Polson, N. G., Willard, B. T., & Heidari, M. (2015). A statistical theory of deep learning via proximal splitting. arXiv:1509.06061.
Zurück zum Zitat Racine, J. (2001). On the nonlinear predictability of stock returns using financial and economic variables. Journal of Business & Economic Statistics,19(3), 380–382.MathSciNet Racine, J. (2001). On the nonlinear predictability of stock returns using financial and economic variables. Journal of Business & Economic Statistics,19(3), 380–382.MathSciNet
Zurück zum Zitat Rezende, D. J., Mohamed, S., & Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082. Rezende, D. J., Mohamed, S., & Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082.
Zurück zum Zitat Ruiz, F. R., Aueb, M. T. R., & Blei, D. (2016). The generalized reparameterization gradient. In Advances in Neural Information Processing Systems (pp. 460–468). Ruiz, F. R., Aueb, M. T. R., & Blei, D. (2016). The generalized reparameterization gradient. In Advances in Neural Information Processing Systems (pp. 460–468).
Zurück zum Zitat Salakhutdinov, R. (2008). Learning and evaluating Boltzmann machines. Tech. Rep., Technical Report UTML TR 2008-002, Department of Computer Science, University of Toronto. Salakhutdinov, R. (2008). Learning and evaluating Boltzmann machines. Tech. Rep., Technical Report UTML TR 2008-002, Department of Computer Science, University of Toronto.
Zurück zum Zitat Salakhutdinov, R., & Hinton, G. (2009). Deep Boltzmann machines. In Artificial Intelligence and Statistics (pp. 448–455). Salakhutdinov, R., & Hinton, G. (2009). Deep Boltzmann machines. In Artificial Intelligence and Statistics (pp. 448–455).
Zurück zum Zitat Saul, L. K., Jaakkola, T., & Jordan, M. I. (1996). Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research,4, 61–76.MATH Saul, L. K., Jaakkola, T., & Jordan, M. I. (1996). Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research,4, 61–76.MATH
Zurück zum Zitat Sirignano, J., Sadhwani, A., & Giesecke, K. (2016, July). Deep learning for mortgage risk. ArXiv e-prints. Sirignano, J., Sadhwani, A., & Giesecke, K. (2016, July). Deep learning for mortgage risk. ArXiv e-prints.
Zurück zum Zitat Smolensky, P. (1986). Parallel distributed processing: explorations in the microstructure of cognition (Vol. 1. pp. 194–281). Cambridge, MA, USA: MIT Press. Smolensky, P. (1986). Parallel distributed processing: explorations in the microstructure of cognition (Vol. 1. pp. 194–281). Cambridge, MA, USA: MIT Press.
Zurück zum Zitat Srivastava, N., Hinton, G. E., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research,15(1), 1929–1958.MathSciNetMATH Srivastava, N., Hinton, G. E., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research,15(1), 1929–1958.MathSciNetMATH
Zurück zum Zitat Swanson, N. R., & White, H. (1995). A model-selection approach to assessing the information in the term structure using linear models and artificial neural networks. Journal of Business & Economic Statistics,13(3), 265–275. Swanson, N. R., & White, H. (1995). A model-selection approach to assessing the information in the term structure using linear models and artificial neural networks. Journal of Business & Economic Statistics,13(3), 265–275.
Zurück zum Zitat Telgarsky, M. (2016). Benefits of depth in neural networks. CoRRabs/1602.04485. Telgarsky, M. (2016). Benefits of depth in neural networks. CoRRabs/1602.04485.
Zurück zum Zitat Tieleman, T. (2008). Training restricted Boltzmann machines using approximations to the likelihood gradient. In Proceedings of the 25th International Conference on Machine Learning (pp. 1064–1071). ACM. Tieleman, T. (2008). Training restricted Boltzmann machines using approximations to the likelihood gradient. In Proceedings of the 25th International Conference on Machine Learning (pp. 1064–1071). ACM.
Zurück zum Zitat Tishby, N., & Zaslavsky, N. (2015). Deep learning and the information bottleneck principle. CoRRabs/1503.02406. Tishby, N., & Zaslavsky, N. (2015). Deep learning and the information bottleneck principle. CoRRabs/1503.02406.
Zurück zum Zitat Tran, D., Hoffman, M. D., Saurous, R. A., Brevdo, E., Murphy, K., & Blei, D. M. (2017, January). Deep probabilistic programming. arXiv:1701.03757 [cs, stat]. Tran, D., Hoffman, M. D., Saurous, R. A., Brevdo, E., Murphy, K., & Blei, D. M. (2017, January). Deep probabilistic programming. arXiv:1701.03757 [cs, stat].
Zurück zum Zitat Vapnik, V. N. (1998). Statistical learning theory. Wiley-Interscience.MATH Vapnik, V. N. (1998). Statistical learning theory. Wiley-Interscience.MATH
Zurück zum Zitat Welling, M., Rosen-Zvi, M., & Hinton, G. E. (2005). Exponential family harmoniums with an application to information retrieval. In Advances in Neural Information Processing Systems (pp. 1481–1488). Welling, M., Rosen-Zvi, M., & Hinton, G. E. (2005). Exponential family harmoniums with an application to information retrieval. In Advances in Neural Information Processing Systems (pp. 1481–1488).
Zurück zum Zitat Williams, C. K. (1997). Computing with infinite networks. In Advances in Neural Information Processing systems (pp. 295–301). Williams, C. K. (1997). Computing with infinite networks. In Advances in Neural Information Processing systems (pp. 295–301).
Metadata
Title
Feedforward Neural Networks
Authors
Matthew F. Dixon
Igor Halperin
Paul Bilokon
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-41068-1_4