A new look at state-space models for neural data

Liam Paninski, Yashar Ahmadian, Daniel Gil Ferreira, Shinsuke Koyama, Kamiar Rahnama Rad, Michael Vidne, Joshua Vogelstein, Wei Wu

Research output: Contribution to journal › Review article › peer-review

Abstract

State-space methods have proven indispensable in neural data analysis. However, common methods for performing inference in state-space models with non-Gaussian observations rely on certain approximations that are not always accurate. Here we review direct optimization methods that avoid these approximations, but that nonetheless retain the computational efficiency of the approximate methods. We discuss a variety of examples, applying these direct optimization techniques to problems in spike train smoothing, stimulus decoding, parameter estimation, and inference of synaptic properties. Along the way, we point out connections to some related standard statistical methods, including spline smoothing and isotonic regression. Finally, we note that the computational methods reviewed here do not in fact depend on the state-space setting at all; instead, the key property we are exploiting involves the bandedness of certain matrices. We close by discussing some applications of this more general point of view, including Markov chain Monte Carlo methods for neural decoding and efficient estimation of spatially varying firing rates.
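To make the bandedness point concrete, here is a minimal sketch (not the authors' code; the model parameters and simulated data are illustrative assumptions) of the kind of direct optimization the abstract describes: MAP smoothing of a latent state under an AR(1) Gaussian prior with Poisson spike-count observations. The prior precision contributes a tridiagonal term and the observation likelihood a diagonal term, so the Hessian of the log-posterior is tridiagonal and each Newton step can be solved in O(T) time with a banded solver such as scipy.linalg.solveh_banded.

```python
import numpy as np
from scipy.linalg import solveh_banded

def map_smooth(y, a=0.95, s2=0.1, n_iter=20):
    """MAP estimate of the latent x for y_t ~ Poisson(exp(x_t)),
    under the AR(1) prior x_1 ~ N(0, s2), x_t = a*x_{t-1} + N(0, s2)."""
    T = len(y)
    od = -a / s2                          # off-diagonal of the prior precision Q
    qd = np.full(T, (1.0 + a ** 2) / s2)  # diagonal of Q
    qd[-1] = 1.0 / s2                     # last state has no successor term
    x = np.zeros(T)
    for _ in range(n_iter):
        lam = np.exp(x)
        # Tridiagonal matrix-vector product Q @ x
        Qx = qd * x
        Qx[:-1] += od * x[1:]
        Qx[1:] += od * x[:-1]
        grad = lam - y + Qx               # gradient of the negative log-posterior
        # Hessian = diag(lam) + Q is tridiagonal and positive definite;
        # store it in the upper symmetric-banded form solveh_banded expects.
        ab = np.zeros((2, T))
        ab[0, 1:] = od                    # superdiagonal (ab[0, 0] is ignored)
        ab[1] = lam + qd                  # main diagonal
        x -= solveh_banded(ab, grad)      # O(T) Newton step
    return x

# Toy usage: smooth spike counts simulated from a latent random walk
rng = np.random.default_rng(0)
x_true = np.cumsum(rng.normal(0.0, 0.1, size=200))
y = rng.poisson(np.exp(x_true))
x_hat = map_smooth(y)
```

The negative log-posterior here is convex, so the plain Newton iteration typically converges quickly; in practice one would add a line search or damping to be safe. The same banded structure underlies the decoding, parameter estimation, and MCMC applications the abstract mentions.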

Original language: English (US)
Pages (from-to): 107-126
Number of pages: 20
Journal: Journal of Computational Neuroscience
Volume: 29
Issue number: 1-2
DOIs
State: Published - Aug 2010

Keywords

  • Hidden Markov model
  • Neural coding
  • State-space models
  • Tridiagonal matrix

ASJC Scopus subject areas

  • Sensory Systems
  • Cognitive Neuroscience
  • Cellular and Molecular Neuroscience
