Convergence of a Neural Network for Sparse Approximation using the Nonsmooth Łojasiewicz Inequality (bibtex)
by A. Balavoine, C.J. Rozell and J.K. Romberg
Abstract:
Sparse approximation is an optimization program that produces state-of-the-art results in many applications in signal processing and engineering. To deploy this approach in real time, it is necessary to develop faster solvers than are currently available in digital hardware. The Locally Competitive Algorithm (LCA) is a dynamical system designed to solve the class of sparse approximation problems in continuous time. But before implementing this network in analog VLSI, it is essential to provide performance guarantees. This paper presents new results on the convergence of the LCA neural network. Using recently developed methods that make use of the Łojasiewicz inequality for nonsmooth functions, we prove that the output and state trajectories converge to a single fixed point. This improves on previous results by guaranteeing convergence to a singleton even when the optimization program has infinitely many, non-isolated solutions.
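As context for the abstract, here is a minimal discrete-time sketch of LCA-style dynamics for the sparse approximation (lasso) problem, using a simple Euler discretization. The soft-threshold activation is the standard choice for the L1 penalty; the step sizes, thresholds, and problem dimensions below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def lca(Phi, y, lam=0.05, tau=0.01, dt=0.001, steps=5000):
    """Euler simulation of LCA-style dynamics for min_a 0.5||y - Phi a||^2 + lam||a||_1.

    Internal states u evolve as tau * du/dt = Phi^T y - u - (Phi^T Phi - I) a,
    where the output a is a soft-thresholded version of u.
    """
    n = Phi.shape[1]
    u = np.zeros(n)
    G = Phi.T @ Phi - np.eye(n)   # lateral inhibition weights
    b = Phi.T @ y                 # feedforward drive
    for _ in range(steps):
        # soft-threshold activation: the network's output nodes
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
        u = u + (dt / tau) * (b - u - G @ a)
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
```

In this sketch the fixed points of the dynamics coincide with critical points of the lasso objective; the paper's contribution is showing that the continuous-time trajectories converge to a single such point even when the solution set is non-isolated.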
Reference:
Convergence of a Neural Network for Sparse Approximation using the Nonsmooth Łojasiewicz Inequality. A. Balavoine, C.J. Rozell and J.K. Romberg. In Proceedings of the International Joint Conference on Neural Networks, August 2013.
Bibtex Entry:
@InProceedings{balavoine.13,
  author = 	 {Balavoine, A. and Rozell, C.J. and Romberg, J.K.},
  title = 	 {Convergence of a Neural Network for Sparse Approximation using the Nonsmooth {{\L}ojasiewicz} Inequality},
  booktitle =	 {{Proceedings of the International Joint Conference on Neural Networks}},
  year =	 2013,
  month = {August},
  address =	 {Dallas, TX},
abstract = {Sparse approximation is an optimization program that produces state-of-the-art
results in many applications in signal processing and engineering. To deploy
this approach in real-time, it is necessary to develop faster solvers than are
currently available in digital. The Locally Competitive Algorithm (LCA) is a
dynamical system designed to solve the class of sparse approximation problems in
continuous time. But before implementing this network in analog VLSI, it is
essential to provide performance guarantees. This paper presents new results on
the convergence of the LCA neural network. Using recently-developed methods that
make use of the {\L}ojasiewicz inequality for nonsmooth functions, we prove that
the output and state trajectories converge to a single fixed point. This
improves on previous results by guaranteeing convergence to a singleton even
when the optimization program has infinitely many and non-isolated solution points.},
url = {http://siplab.gatech.edu/pubs/balavoineIJCNN2013.pdf}
}