A Common Network Architecture Efficiently Implements a Variety of Sparsity-based Inference Problems
by A.S. Charles, P. Garrigues and C.J. Rozell
Abstract:
The sparse coding hypothesis has generated significant interest in the computational and theoretical neuroscience communities, but there remain open questions about the exact quantitative form of the sparsity penalty and the implementation of such a coding rule in neurally plausible architectures. The main contribution of this work is to show that a wide variety of sparsity-based probabilistic inference problems proposed in the signal processing and statistics literatures can be implemented exactly in the common network architecture known as the Locally Competitive Algorithm (LCA). Among the cost functions we examine are approximate ℓp norms (0 ≤ p ≤ 2), modified ℓp norms, block-ℓ1 norms, and re-weighted algorithms. Of particular interest is that we show significantly increased performance in re-weighted ℓ1 algorithms by inferring all parameters jointly in a dynamical system rather than using an iterative approach native to digital computational architectures.
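For context, the LCA referenced above is a dynamical system whose steady state solves a sparse coding problem of the form min_a ½‖s − Φa‖² + λC(a); with a soft-threshold activation the cost C is the standard ℓ1 norm. The NumPy sketch below is a minimal illustration of those standard ℓ1 dynamics only, under assumed illustrative names and parameters (Phi, s, lam, tau, dt, n_steps); it is not the paper's code and omits the generalized cost functions and joint re-weighting studied in the paper.

import numpy as np

def soft_threshold(u, lam):
    # Soft-thresholding activation, the thresholding function for the l1 cost.
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca_l1(Phi, s, lam=0.1, tau=0.01, dt=0.001, n_steps=2000):
    # Euler simulation of the LCA dynamics:
    #   tau * du/dt = -u + Phi^T s - (Phi^T Phi - I) a,   a = T_lam(u)
    n_atoms = Phi.shape[1]
    u = np.zeros(n_atoms)                      # internal (membrane) states
    drive = Phi.T @ s                          # feed-forward input
    inhibit = Phi.T @ Phi - np.eye(n_atoms)    # lateral inhibition weights
    for _ in range(n_steps):
        a = soft_threshold(u, lam)             # active coefficients
        u += (dt / tau) * (-u + drive - inhibit @ a)
    return soft_threshold(u, lam)

# Toy usage: recover a sparse vector from a random unit-norm dictionary.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 256))
Phi /= np.linalg.norm(Phi, axis=0)
a_true = np.zeros(256)
a_true[rng.choice(256, 5, replace=False)] = 1.0
s = Phi @ a_true
a_hat = lca_l1(Phi, s, lam=0.05)
print("nonzeros recovered:", np.count_nonzero(a_hat))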
Reference:
A Common Network Architecture Efficiently Implements a Variety of Sparsity-based Inference Problems. A.S. Charles, P. Garrigues and C.J. Rozell. Neural Computation, 24(12), pp. 3317–3339, December 2012.
Bibtex Entry:
@Article{charles.12e,
  author   = {Charles, A.S. and Garrigues, P. and Rozell, C.J.},
  title    = {A Common Network Architecture Efficiently Implements a Variety of Sparsity-based Inference Problems},
  abstract = {The sparse coding hypothesis has generated significant interest in the computational and theoretical neuroscience communities, but there remain open questions about the exact quantitative form of the sparsity penalty and the implementation of such a coding rule in neurally plausible architectures. The main contribution of this work is to show that a wide variety of sparsity-based probabilistic inference problems proposed in the signal processing and statistics literatures can be implemented exactly in the common network architecture known as the Locally Competitive Algorithm (LCA). Among the cost functions we examine are approximate $\ell_p$ norms ($0 \leq p \leq 2$), modified $\ell_p$ norms, block-$\ell_1$ norms, and re-weighted algorithms. Of particular interest is that we show significantly increased performance in re-weighted $\ell_1$ algorithms by inferring all parameters jointly in a dynamical system rather than using an iterative approach native to digital computational architectures.},
  year     = 2012,
  month    = dec,
  volume   = {24},
  number   = {12},
  pages    = {3317--3339},
  journal  = {Neural Computation},
  url      = {http://siplab.gatech.edu/pubs/charlesNECO2012.pdf}
}