Published in: Journal of Computational Neuroscience 1-2/2010

01.08.2010

Efficient computation of the maximum a posteriori path and parameter estimation in integrate-and-fire and more general state-space models

Authors: Shinsuke Koyama, Liam Paninski

Abstract

A number of important data analysis problems in neuroscience can be solved using state-space models. In this article, we describe fast methods for computing the exact maximum a posteriori (MAP) path of the hidden state variable in these models, given spike train observations. If the state transition density is log-concave and the observation model satisfies certain standard assumptions, then the optimization problem is strictly concave and can be solved rapidly with Newton–Raphson methods, because the Hessian of the log-likelihood is block tridiagonal. We can further exploit this block-tridiagonal structure to develop efficient parameter estimation methods for these models. We describe applications of this approach to neural decoding problems, with a focus on the classic integrate-and-fire model as a key example.
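To illustrate the computational point of the abstract (not code from the paper itself), the following is a minimal sketch for a one-dimensional state-space model with Gaussian random-walk dynamics and Poisson spike-count observations. With this model, the negative Hessian of the log-posterior is the tridiagonal prior precision plus a diagonal likelihood term, so each Newton step reduces to a banded linear solve costing O(T) rather than O(T^3). All concrete choices here (the exponential link, `sigma2`, `dt`, the stopping tolerance) are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import solveh_banded

rng = np.random.default_rng(0)
T, dt, sigma2 = 200, 0.01, 0.05

# Simulate a random-walk state and spike counts y_t ~ Poisson(exp(x_t) * dt).
x_true = np.cumsum(rng.normal(0.0, np.sqrt(sigma2), T))
y = rng.poisson(np.exp(x_true) * dt)

# Tridiagonal prior precision Q of the random walk (Q = D'D / sigma2,
# where D is the first-difference matrix).
main = np.full(T, 2.0 / sigma2)
main[0] = main[-1] = 1.0 / sigma2
off = np.full(T - 1, -1.0 / sigma2)

# Newton-Raphson on the strictly concave log-posterior.
x = np.zeros(T)
for _ in range(50):
    lam = np.exp(x) * dt                 # conditional intensity per bin
    grad = y - lam                       # gradient of the Poisson log-likelihood
    grad -= main * x                     # gradient of the prior term: -Q x
    grad[:-1] -= off * x[1:]             # (tridiagonal matrix-vector product)
    grad[1:] -= off * x[:-1]
    # Negative Hessian = Q + diag(lam): symmetric positive-definite tridiagonal,
    # so the Newton step is an O(T) banded solve, not a dense one.
    ab = np.zeros((2, T))
    ab[0, 1:] = off                      # superdiagonal (upper banded storage)
    ab[1] = main + lam                   # main diagonal
    step = solveh_banded(ab, grad)
    x = x + step
    if np.max(np.abs(step)) < 1e-10:     # quadratic convergence near the MAP path
        break
```

The same structure is what the article exploits more generally: for a multivariate state the Hessian becomes block tridiagonal, and the banded solve is replaced by a block-tridiagonal (block-Thomas) solve with the same linear scaling in the number of time bins.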


Appendices
Accessible only with authorization
Metadata
Title
Efficient computation of the maximum a posteriori path and parameter estimation in integrate-and-fire and more general state-space models
Authors
Shinsuke Koyama
Liam Paninski
Publication date
01.08.2010
Publisher
Springer US
Published in
Journal of Computational Neuroscience / Issue 1-2/2010
Print ISSN: 0929-5313
Electronic ISSN: 1573-6873
DOI
https://doi.org/10.1007/s10827-009-0150-x
