Learning Internal Representations of 3D Transformations from 2D Projected Inputs
by M. Connor, B. Olshausen and C. Rozell
Abstract:
When interacting in a three-dimensional world, humans must estimate 3D structure from visual inputs projected down to two-dimensional retinal images. It has been shown that humans use the persistence of object shape over motion-induced transformations as a cue to resolve depth ambiguity when solving this underconstrained problem. With the aim of understanding how biological vision systems may internally represent 3D transformations, we propose a computational model, based on a generative manifold model, which can be used to infer 3D structure from the motion of 2D points. Our model can also learn representations of the transformations with minimal supervision, providing a proof of concept for how humans may develop internal representations on a developmental or evolutionary time scale. Focusing on rotational motion, we show how our model infers depth from moving 2D projected points and learns 3D rotational transformations from 2D training stimuli, and we compare its behavior to human performance on psychophysical structure-from-motion experiments.
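The depth-from-motion setting the abstract describes can be made concrete with a toy example. The following is a minimal sketch, not the paper's generative manifold model: it assumes orthographic projection, a known rotation about the y-axis, and noiseless 2D observations, under which each point's depth is recoverable in closed form from its x-motion between two frames. All names (n_points, theta, etc.) are illustrative.

    # Minimal sketch (not the authors' model): depth from motion for 2D
    # orthographic projections of a point cloud rotating about the y-axis.
    import numpy as np

    rng = np.random.default_rng(0)
    n_points, theta = 50, 0.1           # assumed point count; rotation per frame (rad)

    # Ground-truth 3D structure: random points in a cube (columns: x, y, z).
    P = rng.uniform(-1, 1, size=(n_points, 3))

    # Rotation about the y-axis by theta.
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[ c, 0, s],
                  [ 0, 1, 0],
                  [-s, 0, c]])

    # Two consecutive frames, orthographically projected to 2D (drop z).
    F0 = P[:, :2]                       # frame t
    F1 = (P @ R.T)[:, :2]               # frame t+1

    # Under rotation about y: x' = x*cos(theta) + z*sin(theta), so each
    # point's depth follows in closed form from its observed x-motion.
    z_hat = (F1[:, 0] - F0[:, 0] * c) / s

    print("max depth error:", np.max(np.abs(z_hat - P[:, 2])))

If the rotation direction were unknown, the same 2D observations would also be consistent with the depth-reversed solution (z mapped to -z with the opposite rotation), which is the kind of depth ambiguity the psychophysical structure-from-motion experiments probe.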
Reference:
Learning Internal Representations of 3D Transformations from 2D Projected Inputs. M. Connor, B. Olshausen and C. Rozell. July 2023. Under review.
Bibtex Entry:
@article{connor.23b,
	title={Learning Internal Representations of {3D} Transformations from {2D} Projected Inputs},
	author={Connor, M. and Olshausen, B. and Rozell, C.},
	year = 2023,
	month = jul,
	abstract = {When interacting in a three-dimensional world, humans must estimate 3D structure from visual inputs projected down to two-dimensional retinal images. It has been shown that humans use the persistence of object shape over motion-induced transformations as a cue to resolve depth ambiguity when solving this underconstrained problem. With the aim of understanding how biological vision systems may internally represent 3D transformations, we propose a computational model, based on a generative manifold model, which can be used to infer 3D structure from the motion of 2D points. Our model can also learn representations of the transformations with minimal supervision, providing a proof of concept for how humans may develop internal representations on a developmental or evolutionary time scale. Focusing on rotational motion, we show how our model infers depth from moving 2D projected points and learns 3D rotational transformations from 2D training stimuli, and we compare its behavior to human performance on psychophysical structure-from-motion experiments.},
	note={Under review.},
	url = {http://arxiv.org/abs/2303.17776}
}