Configurable Hardware Integrate and Fire Neurons for Sparse Approximation (bibtex)
by S. Shapero, C.J. Rozell and P. Hasler
Abstract:
Sparse approximation is an important optimization problem in signal and image processing applications. A Hopfield-Network-like system of integrate and fire (IF) neurons is proposed as a solution, using the Locally Competitive Algorithm (LCA) to solve an overcomplete L1 sparse approximation problem. A scalable system architecture is described, including IF neurons with a nonlinear firing function, and current-based synapses to provide linear computation. A network of 18 neurons with 12 inputs is implemented on the RASP 2.9v chip, a Field Programmable Analog Array (FPAA) with directly programmable floating gate elements. Said system uses over 1400 floating gates, the largest system programmed on a FPAA to date. The circuit successfully reproduced the outputs of a digital optimization program, converging to within 4.8% RMS, and an objective cost only 1.7% higher on average. The active circuit consumed 559 microamps of current at 2.4V, and converges on solutions in 25 microseconds, with measurement of the converged spike rate taking an additional 1ms. Extrapolating the scaling trends to a N=1000 node system, the Analog LCA compares favorably with State-of-the-Art digital solutions, and analog solutions using a non-spiking approach.
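
The paper's analog IF network realizes the Locally Competitive Algorithm in hardware. As a rough software point of reference, below is a minimal sketch of the standard (non-spiking) LCA dynamics for the L1 sparse approximation problem, integrated with forward Euler. The function name lca_l1, the step sizes, and the toy 12-input/18-node dictionary are illustrative assumptions chosen to match the network size quoted above; this is not the paper's circuit implementation.

import numpy as np

def lca_l1(Phi, y, lam=0.1, tau=1e-2, dt=1e-3, steps=2000):
    """Minimal LCA sketch: minimize 0.5*||y - Phi a||^2 + lam*||a||_1.

    Node dynamics (continuous time):
        tau * du/dt = -u + Phi^T y - (Phi^T Phi - I) a,
        a = soft_threshold(u, lam).
    """
    M = Phi.shape[1]
    b = Phi.T @ y                      # feed-forward drive
    G = Phi.T @ Phi - np.eye(M)        # lateral inhibition weights
    u = np.zeros(M)                    # internal node states
    for _ in range(steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
        u += (dt / tau) * (-u + b - G @ a)                  # Euler step
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

# Toy usage: 12-dimensional input, 18-element overcomplete dictionary.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((12, 18))
Phi /= np.linalg.norm(Phi, axis=0)     # unit-norm dictionary columns
y = Phi[:, [2, 7]] @ np.array([1.0, -0.5])
a_hat = lca_l1(Phi, y, lam=0.05)
print(np.round(a_hat, 3))
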
Reference:
Configurable Hardware Integrate and Fire Neurons for Sparse Approximation. S. Shapero, C.J. Rozell and P. Hasler. Neural Networks, vol. 45, pp. 134–143, September 2013. Special issue on Neuromorphic Engineering: from Neural Systems to Brain-Like Engineered Systems.
Bibtex Entry:
@Article{shapero.12,
  author = 	 {Shapero, S. and Rozell, C.J. and Hasler, P.},
  title = 	 {Configurable Hardware Integrate and Fire Neurons for Sparse Approximation},
  abstract =     {Sparse approximation is an important optimization problem in signal and image processing 
applications. A Hopfield-Network-like system of integrate and fire (IF) neurons is proposed as a 
solution, using the Locally Competitive Algorithm (LCA) to solve an overcomplete L1 sparse 
approximation problem. A scalable system architecture is described, including IF neurons with a non-
linear firing function, and current-based synapses to provide linear computation. A network of 18 
neurons with 12 inputs is implemented on the RASP 2.9v chip, a Field Programmable Analog Array 
(FPAA) with directly programmable floating gate elements. Said system uses over 1400 floating gates, 
the largest system programmed on a FPAA to date. The circuit successfully reproduced the outputs of a 
digital optimization program, converging to within 4.8% RMS, and an objective cost only 1.7% higher 
on average. The active circuit consumed 559 microamps of current at 2.4V, and converges on solutions in 25 microseconds, 
with measurement of the converged spike rate taking an additional 1ms. Extrapolating the scaling 
trends to a N=1000 node system, the Analog LCA compares favorably with State-of-the-Art digital 
solutions, and analog solutions using a non-spiking approach.},
year = 2013,
volume = 45,
month = sep,
pages = {134--143},
journal = {Neural Networks},
note = {Special issue on Neuromorphic Engineering: from Neural Systems to Brain-Like Engineered
Systems.},
url = {http://www.sciencedirect.com/science/article/pii/S089360801300097X}
}