Convergence and Rate Analysis of Neural Networks for Sparse Approximation
by A. Balavoine, J. Romberg and C.J. Rozell
Abstract:
We present an analysis of the Locally Competitive Algorithm (LCA), a Hopfield-style neural network that efficiently solves sparse approximation problems (e.g., approximating a vector from a dictionary using just a few non-zero coefficients). This class of problems plays a significant role in both theories of neural coding and applications in signal processing. However, the LCA lacks an analysis of its convergence properties, and previous results on neural networks for nonsmooth optimization do not apply to the specifics of the LCA architecture. We show that the LCA has desirable convergence properties, such as stability and global convergence to the optimum of the objective function when it is unique. Under some mild conditions, the support of the solution is also proven to be reached in finite time. Furthermore, some restrictions on the problem specifics allow us to characterize the convergence rate of the system by showing that the LCA converges exponentially fast with an analytically bounded convergence rate. We support our analysis with several illustrative simulations.
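For concreteness, the following minimal Python sketch simulates the LCA dynamics tau * du/dt = -u + Phi^T y - (Phi^T Phi - I) a, with output a = T_lambda(u), using a forward-Euler step and a soft-thresholding activation (the activation corresponding to an L1 sparsity penalty). The dictionary size, threshold, time constant, and step size below are illustrative assumptions, not values taken from the paper.

import numpy as np

def soft_threshold(u, lam):
    # Soft-thresholding activation: a = sign(u) * max(|u| - lam, 0)
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(y, Phi, lam=0.1, tau=0.01, dt=0.001, n_steps=5000):
    # Forward-Euler simulation of the LCA ODE:
    #   tau * du/dt = -u + Phi^T y - (Phi^T Phi - I) a,  a = T_lam(u)
    # Parameter values here are illustrative assumptions, not from the paper.
    n = Phi.shape[1]
    u = np.zeros(n)                  # internal (membrane) states
    b = Phi.T @ y                    # feed-forward driving input
    G = Phi.T @ Phi - np.eye(n)      # lateral inhibition (Gram matrix minus identity)
    for _ in range(n_steps):
        a = soft_threshold(u, lam)   # active coefficients
        u += (dt / tau) * (-u + b - G @ a)
    return soft_threshold(u, lam)

# Usage: recover a sparse code of y in a random overcomplete dictionary.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 256))
Phi /= np.linalg.norm(Phi, axis=0)   # unit-norm dictionary columns
a_true = np.zeros(256)
a_true[rng.choice(256, 5, replace=False)] = 1.0
y = Phi @ a_true
a_hat = lca(y, Phi)
print("nonzeros recovered:", np.count_nonzero(a_hat))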
Reference:
Convergence and Rate Analysis of Neural Networks for Sparse Approximation. A. Balavoine, J. Romberg and C.J. Rozell. IEEE Transactions on Neural Networks and Learning Systems, 23(9), pp. 1377–1389, September 2012.
Bibtex Entry:
@Article{balavoine.11b,
  author   = {Balavoine, A. and Romberg, J. and Rozell, C.J.},
  title    = {Convergence and Rate Analysis of Neural Networks for Sparse Approximation},
  journal  = {IEEE Transactions on Neural Networks and Learning Systems},
  year     = 2012,
  month    = sep,
  volume   = {23},
  number   = {9},
  pages    = {1377--1389},
  url      = {http://siplab.gatech.edu/pubs/balavoineTNNLS2012.pdf},
  abstract = {We present an analysis of the Locally Competitive Algorithm (LCA), a Hopfield-style neural network that efficiently solves sparse approximation problems (e.g., approximating a vector from a dictionary using just a few non-zero coefficients). This class of problems plays a significant role in both theories of neural coding and applications in signal processing. However, the LCA lacks an analysis of its convergence properties, and previous results on neural networks for nonsmooth optimization do not apply to the specifics of the LCA architecture. We show that the LCA has desirable convergence properties, such as stability and global convergence to the optimum of the objective function when it is unique. Under some mild conditions, the support of the solution is also proven to be reached in finite time. Furthermore, some restrictions on the problem specifics allow us to characterize the convergence rate of the system by showing that the LCA converges exponentially fast with an analytically bounded convergence rate. We support our analysis with several illustrative simulations.}
}