by S. Shapero, M. Zhu, P. Hasler and C.J. Rozell
Abstract:
Sparse approximation is a hypothesized coding strategy in which a population of sensory neurons (e.g., in V1) encodes a stimulus using as few active neurons as possible. We present the Spiking LCA, a Spiking Neural Network (SNN) of integrate-and-fire neurons that computes sparse approximations. We demonstrate that the firing intensity of the Spiking LCA is formally equivalent to the Locally Competitive Algorithm (LCA), an analog dynamical system that converges on a sparse approximation in finite time. We further show that the spike rate is an unbiased estimate of that intensity, and we provide limits on the maximum variance of the estimation error. We simulate in NEURON a network of 128 neurons that encodes 8x8-pixel image patches, demonstrating that the network converges to nearly optimal encodings within 20 ms of biological time. We also show that when more biophysically realistic neuron parameters are used, the gain function encourages additional L0-norm sparsity in the encoding, relative both to ideal neurons and to digital solvers.
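For readers unfamiliar with the (non-spiking) LCA to which the Spiking LCA's firing intensity is shown to be equivalent, here is a minimal sketch of the standard LCA dynamics with a soft-threshold activation. The parameter values and the random-dictionary setup are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def lca(phi, s, lam=0.1, tau=0.01, dt=1e-3, steps=500):
    """Euler-integrate the LCA dynamics
        du/dt = (1/tau) * (b - u - (Phi^T Phi - I) a),   a = T_lam(u),
    where b = Phi^T s is the feedforward drive and T_lam is the
    soft-threshold function. Returns the sparse coefficient vector a."""
    b = phi.T @ s                               # feedforward input to each neuron
    G = phi.T @ phi - np.eye(phi.shape[1])      # lateral inhibition (competition)
    u = np.zeros(phi.shape[1])                  # internal state of each node
    soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    for _ in range(steps):
        a = soft(u)                             # thresholded activations
        u += (dt / tau) * (b - u - G @ a)       # forward-Euler state update
    return soft(u)
```

Nodes whose state stays below the threshold `lam` contribute exactly zero, which is what yields a sparse code rather than merely a small-coefficient one.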
Reference:
Optimal Sparse Approximation With Integrate and Fire Neurons. S. Shapero, M. Zhu, P. Hasler and C.J. Rozell. International Journal of Neural Systems, 24(05), p. 1440001, August 2014.
Bibtex Entry:
@Article{shapero.13,
author = {Shapero, S. and Zhu, M. and Hasler, P. and Rozell, C.J.},
title = {Optimal Sparse Approximation With Integrate and Fire Neurons},
abstract = {Sparse approximation is a hypothesized coding strategy in which a population of sensory neurons (e.g., in V1) encodes a stimulus using as few active neurons as possible. We present the Spiking LCA, a Spiking Neural Network (SNN) of integrate-and-fire neurons that computes sparse approximations. We demonstrate that the firing intensity of the Spiking LCA is formally equivalent to the Locally Competitive Algorithm (LCA), an analog dynamical system that converges on a sparse approximation in finite time. We further show that the spike rate is an unbiased estimate of that intensity, and we provide limits on the maximum variance of the estimation error. We simulate in NEURON a network of 128 neurons that encodes 8x8-pixel image patches, demonstrating that the network converges to nearly optimal encodings within 20 ms of biological time. We also show that when more biophysically realistic neuron parameters are used, the gain function encourages additional L0-norm sparsity in the encoding, relative both to ideal neurons and to digital solvers.},
year = 2014,
journal = {International Journal of Neural Systems},
volume = {24},
number = {05},
pages = {1440001},
month = aug,
URL = {http://www.worldscientific.com/doi/abs/10.1142/S0129065714400012}
}