Short Term Memory Capacity in Networks via the Restricted Isometry Property
by A.S. Charles, H.L. Yap and C.J. Rozell
Abstract:
Cortical networks are hypothesized to rely on transient activity to support short term memory (STM). In this paper we study the capacity of randomly connected recurrent linear networks for performing STM when the input signals are approximately sparse in some basis. We leverage results from compressed sensing to provide rigorous non-asymptotic recovery guarantees, quantifying the impact of the input sparsity level, the input sparsity basis, and the network characteristics on the system capacity. Our analysis demonstrates that network memory capacities can scale superlinearly with the number of nodes, and in some situations can achieve STM capacities that are much larger than the network size. We provide perfect recovery guarantees for finite sequences and recovery bounds for infinite sequences. The latter analysis predicts that network STM systems may have an optimal recovery length that balances errors due to omission and recall mistakes. Furthermore, we show that the conditions yielding optimal STM capacity can be embodied in several network topologies, including networks with sparse or dense connectivities.
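As a minimal numerical illustration of the setup the abstract describes (a sketch, not the paper's construction), the Python snippet below simulates a linear recurrent network x[t] = W x[t-1] + z s[t] whose final state is a linear measurement of the entire input sequence; a sparse sequence much longer than the network is then recovered from that single state. The random orthogonal connectivity W, the feed-forward vector z, and the use of orthogonal matching pursuit in place of the paper's RIP-based analysis are all illustrative assumptions.

import numpy as np

# Sketch: a linear recurrent network x[t] = W x[t-1] + z * s[t] stores a
# scalar input stream s in its N-dimensional state.  After T steps the
# state is x[T] = A s, where column t of A is W^(T-1-t) z, so recovering
# s from x[T] is a compressed sensing problem when s is sparse and T > N.

def simulate_network(W, z, s):
    # Drive the network with the input sequence s; return the final state.
    x = np.zeros(W.shape[0])
    for s_t in s:
        x = W @ x + z * s_t
    return x

def effective_matrix(W, z, T):
    # Build A with A[:, t] = W^(T-1-t) z, so that x[T] == A @ s.
    A = np.empty((W.shape[0], T))
    col = z.copy()
    for t in range(T - 1, -1, -1):
        A[:, t] = col
        col = W @ col
    return A

def omp(A, y, k):
    # Orthogonal matching pursuit: greedy recovery of a k-sparse s from
    # y = A s.  (A simple stand-in for the paper's RIP-based guarantees.)
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    s_hat = np.zeros(A.shape[1])
    s_hat[support] = coef
    return s_hat

rng = np.random.default_rng(0)
N, T, k = 64, 256, 4                               # sequence length T far exceeds N
W, _ = np.linalg.qr(rng.standard_normal((N, N)))   # random orthogonal connectivity
z = rng.standard_normal(N) / np.sqrt(N)            # feed-forward input weights

s = np.zeros(T)
s[rng.choice(T, size=k, replace=False)] = rng.standard_normal(k)  # k-sparse input

x_T = simulate_network(W, z, s)
A = effective_matrix(W, z, T)
s_hat = omp(A, x_T, k)
print("relative recovery error:", np.linalg.norm(s - s_hat) / np.linalg.norm(s))

Note that T = 256 inputs are recovered from only N = 64 state variables, the "larger than the network size" regime the abstract refers to; this works here because the input is k-sparse and the orthogonal W keeps older inputs from decaying away.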
Reference:
Short Term Memory Capacity in Networks via the Restricted Isometry Property. A.S. Charles, H.L. Yap and C.J. Rozell. Neural Computation, 26(6), pp. 1198–1235, June 2014.
Bibtex Entry:
@Article{charles.12g,
  author   = {Charles, A.S. and Yap, H.L. and Rozell, C.J.},
  title    = {Short Term Memory Capacity in Networks via the Restricted Isometry Property},
  abstract = {Cortical networks are hypothesized to rely on transient activity to support short term memory (STM). In this paper we study the capacity of randomly connected recurrent linear networks for performing STM when the input signals are approximately sparse in some basis. We leverage results from compressed sensing to provide rigorous non-asymptotic recovery guarantees, quantifying the impact of the input sparsity level, the input sparsity basis, and the network characteristics on the system capacity. Our analysis demonstrates that network memory capacities can scale superlinearly with the number of nodes, and in some situations can achieve STM capacities that are much larger than the network size. We provide perfect recovery guarantees for finite sequences and recovery bounds for infinite sequences. The latter analysis predicts that network STM systems may have an optimal recovery length that balances errors due to omission and recall mistakes. Furthermore, we show that the conditions yielding optimal STM capacity can be embodied in several network topologies, including networks with sparse or dense connectivities.},
  year     = 2014,
  month    = jun,
  volume   = 26,
  number   = 6,
  pages    = {1198--1235},
  journal  = {Neural Computation},
  url      = {http://arxiv.org/pdf/1307.7970v4.pdf}
}