Abstract
The sparse coding hypothesis has generated significant interest in the computational and theoretical neuroscience communities, but there remain open questions about the exact quantitative form of the sparsity penalty and the implementation of such a coding rule in neurally plausible architectures. The main contribution of this work is to show that a wide variety of sparsity-based probabilistic inference problems proposed in the signal processing and statistics literatures can be implemented exactly in the common network architecture known as the locally competitive algorithm (LCA). Among the cost functions we examine are approximate ℓp norms (0 ≤ p ≤ 2), modified ℓp norms, block-ℓ1 norms, and reweighted algorithms. Of particular interest is that we show significantly increased performance in reweighted ℓ1 algorithms by inferring all parameters jointly in a dynamical system rather than using an iterative approach native to digital computational architectures.
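As context for the LCA architecture discussed in the abstract, the sketch below shows the commonly used ℓ1 variant of the LCA dynamics: an internal state is driven by the feedforward input, inhibited laterally through dictionary correlations, and mapped to sparse coefficients by a soft-threshold nonlinearity. This is a minimal illustrative implementation under standard assumptions (Euler integration, unit-norm dictionary); the variable names and parameter values are not taken from the paper.

```python
import numpy as np

def lca_l1(Phi, x, lam=0.1, tau=0.01, n_steps=500):
    """Illustrative LCA sketch for min_a 0.5*||x - Phi a||^2 + lam*||a||_1,
    integrated with a simple Euler step (parameters are illustrative)."""
    n_atoms = Phi.shape[1]
    u = np.zeros(n_atoms)                 # internal (membrane) state
    b = Phi.T @ x                         # feedforward drive
    G = Phi.T @ Phi - np.eye(n_atoms)     # lateral inhibition weights
    soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    for _ in range(n_steps):
        a = soft(u)                       # sparse coefficients (soft threshold)
        u += tau * (b - u - G @ a)        # Euler step of the LCA dynamics
    return soft(u)

# Usage example: recover a sparse code from a synthetic measurement
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 256))
Phi /= np.linalg.norm(Phi, axis=0)        # unit-norm dictionary atoms
a_true = np.zeros(256)
a_true[rng.choice(256, 5, replace=False)] = 1.0
x = Phi @ a_true
a_hat = lca_l1(Phi, x, lam=0.05)
```

Changing the thresholding function is what selects among the different sparsity penalties the abstract refers to; the soft threshold above corresponds to the ℓ1 case.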
Original language | English (US)
---|---
Pages (from-to) | 3317-3339
Number of pages | 23
Journal | Neural Computation
Volume | 24
Issue number | 12
DOIs |
State | Published - 2012
Externally published | Yes
ASJC Scopus subject areas
- Arts and Humanities (miscellaneous)
- Cognitive Neuroscience