Representing Closed Transformation Paths in Encoded Network Latent Space (bibtex)
by M. Connor and C. Rozell
Abstract:
Deep generative networks have been widely used for learning mappings from a low-dimensional latent space to a high-dimensional data space. In many cases, data transformations are defined by linear paths in this latent space. However, the Euclidean structure of the latent space may be a poor match for the underlying latent structure in the data. In this work, we incorporate a generative manifold model into the latent space of an autoencoder in order to learn the low-dimensional manifold structure from the data and adapt the latent space to accommodate this structure. In particular, we focus on applications in which the data has closed transformation paths which extend from a starting point and return to nearly the same point. Through experiments on data with natural closed transformation paths, we show that this model introduces the ability to learn the latent dynamics of complex systems, generate transformation paths, and classify samples that belong on the same transformation path.
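To give a concrete flavor of the construction described above, here is a minimal, illustrative sketch, not the authors' implementation: a plain PyTorch autoencoder whose latent paths are generated by a skew-symmetric matrix, so each path rotates around the origin and closes back on its starting point. All names (Autoencoder, path_points, raw) and dimensions are assumptions made for this example; the paper's generative manifold model is more involved.

import math
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Plain autoencoder; the latent space will host closed paths."""
    def __init__(self, data_dim=784, latent_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# In 2-D, a skew-symmetric generator A yields rotations exp(tA), so the
# latent orbit z(t) = exp(tA) z(0) traces a circle: a closed transformation
# path that returns to its starting point after one period.
raw = nn.Parameter(torch.randn(2, 2))

def path_points(z0, steps=8):
    A = raw - raw.T                      # skew-symmetric by construction
    omega = A[0, 1].abs() + 1e-8         # angular speed of the rotation
    ts = torch.linspace(0.0, 2.0 * math.pi / omega, steps)  # one full period
    return torch.stack([torch.matrix_exp(t * A) @ z0 for t in ts])

model = Autoencoder()
x = torch.randn(1, 784)
x_hat, z = model(x)
recon_loss = nn.functional.mse_loss(x_hat, x)
loop = path_points(z.squeeze(0))         # latent points along the closed path
samples = model.decoder(loop)            # decoded transformation path

In practice one would add a path-consistency term to the reconstruction loss so that decoded points along the orbit match observed transformed data; that term is omitted here because it depends on the dataset.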
Reference:
M. Connor and C. Rozell. Representing Closed Transformation Paths in Encoded Network Latent Space. In AAAI Conference on Artificial Intelligence (AAAI), February 2020. Selected for spotlight presentation. (Acceptance rate 20%).
BibTeX Entry:
@InProceedings{connor.19b,
  author    = {Connor, M. and Rozell, C.},
  title     = {Representing Closed Transformation Paths in Encoded Network Latent Space},
  booktitle = {AAAI Conference on Artificial Intelligence (AAAI)},
  year      = 2020,
  month     = feb,
  address   = {New York, NY},
  abstract  = {Deep generative networks have been widely used for learning mappings from a low-dimensional latent space to a high-dimensional data space. In many cases, data transformations are defined by linear paths in this latent space. However, the Euclidean structure of the latent space may be a poor match for the underlying latent structure in the data. In this work, we incorporate a generative manifold model into the latent space of an autoencoder in order to learn the low-dimensional manifold structure from the data and adapt the latent space to accommodate this structure. In particular, we focus on applications in which the data has closed transformation paths which extend from a starting point and return to nearly the same point. Through experiments on data with natural closed transformation paths, we show that this model introduces the ability to learn the latent dynamics of complex systems, generate transformation paths, and classify samples that belong on the same transformation path.},
  note      = {\textbf{Selected for spotlight presentation.} (Acceptance rate 20\%).},
  url       = {https://arxiv.org/abs/1912.02644}
}