TY - JOUR
T1 - A new look at state-space models for neural data
AU - Paninski, Liam
AU - Ahmadian, Yashar
AU - Ferreira, Daniel Gil
AU - Koyama, Shinsuke
AU - Rahnama Rad, Kamiar
AU - Vidne, Michael
AU - Vogelstein, Joshua
AU - Wu, Wei
N1 - Funding Information:
Acknowledgements We thank J. Pillow for sharing the data used in Figs. 2 and 5, G. Czanner for sharing the data used in Fig. 6, and B. Babadi and Q. Huys for many helpful discussions. LP is supported by NIH grant R01 EY018003, an NSF CAREER award, and a McKnight Scholar award; YA by a Patterson Trust Postdoctoral Fellowship; DGF by the Gulbenkian PhD Program in Computational Biology, Fundação para a Ciência e a Tecnologia PhD Grant ref. SFRH/BD/33202/2007; SK by NIH grants R01 MH064537, R01 EB005847 and R01 NS050256; JV by NIDCD DC00109.
PY - 2010/8
Y1 - 2010/8
N2 - State space methods have proven indispensable in neural data analysis. However, common methods for performing inference in state-space models with non-Gaussian observations rely on certain approximations which are not always accurate. Here we review direct optimization methods that avoid these approximations, but that nonetheless retain the computational efficiency of the approximate methods. We discuss a variety of examples, applying these direct optimization techniques to problems in spike train smoothing, stimulus decoding, parameter estimation, and inference of synaptic properties. Along the way, we point out connections to some related standard statistical methods, including spline smoothing and isotonic regression. Finally, we note that the computational methods reviewed here do not in fact depend on the state-space setting at all; instead, the key property we are exploiting involves the bandedness of certain matrices. We close by discussing some applications of this more general point of view, including Markov chain Monte Carlo methods for neural decoding and efficient estimation of spatially-varying firing rates.
AB - State space methods have proven indispensable in neural data analysis. However, common methods for performing inference in state-space models with non-Gaussian observations rely on certain approximations which are not always accurate. Here we review direct optimization methods that avoid these approximations, but that nonetheless retain the computational efficiency of the approximate methods. We discuss a variety of examples, applying these direct optimization techniques to problems in spike train smoothing, stimulus decoding, parameter estimation, and inference of synaptic properties. Along the way, we point out connections to some related standard statistical methods, including spline smoothing and isotonic regression. Finally, we note that the computational methods reviewed here do not in fact depend on the state-space setting at all; instead, the key property we are exploiting involves the bandedness of certain matrices. We close by discussing some applications of this more general point of view, including Markov chain Monte Carlo methods for neural decoding and efficient estimation of spatially-varying firing rates.
KW - Hidden Markov model
KW - Neural coding
KW - State-space models
KW - Tridiagonal matrix
UR - http://www.scopus.com/inward/record.url?scp=77956895434&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=77956895434&partnerID=8YFLogxK
U2 - 10.1007/s10827-009-0179-x
DO - 10.1007/s10827-009-0179-x
M3 - Review article
C2 - 19649698
AN - SCOPUS:77956895434
SN - 0929-5313
VL - 29
SP - 107
EP - 126
JO - Journal of Computational Neuroscience
JF - Journal of Computational Neuroscience
IS - 1-2
ER -