Sensory Information Processing Lab

What we do

The activity in the brain gives rise to the perceptions, behaviors, emotions and cognition that make us intelligent. Many of today's grand challenges involve understanding how the brain produces this intelligence and how to interface it with artificially intelligent systems for the benefit of society.

Recent technical advances are providing unprecedented opportunities for dynamic interactions between our engineered systems and the brain, including new techniques for recording and manipulating neural circuit activity as well as new machine learning systems that must integrate with human intelligence. While these advances provide new platforms for interfacing with the brain, we often lack a principled approach for what to do with this technology to be most effective.

The primary goal of the SIPLab is to develop the algorithmic foundations for interfacing, understanding and exploiting neural systems.

We aim to advance basic science, clinical medicine and the engineering of intelligent systems.

We pursue these objectives using direct and indirect (invasive and non-invasive) methods of integrating engineered systems in closed-loop interactions with biology, across scales from single cells to human intelligence. We broadly describe this research as computational neuroengineering: combining the rigorous mathematical approaches of engineering (e.g., machine learning, signal processing) with a deep grounding in the sciences to improve interactions between mind and machine.

These interactions lead us to new ways to conduct experiments to learn about neural systems, new ways to treat diseases of these systems, and new ways to build machines that learn from the intelligence embodied in these systems. This work also often leads to advances in data science algorithms with applications far beyond the original inspiration.

Focus areas are always evolving, but currently include algorithms and analysis for:

  • open-loop and closed-loop stimulation (electrical and optogenetic) of neural circuits;
  • interactive machine learning and interactive artificial intelligence (including similarity learning, transfer learning and cooperative search, etc.);
  • discovering and interpreting the structure of complex dynamical systems;
  • dimensionality reduction (including compressed sensing), manifold learning and dynamic filtering;
  • computational imaging in microscopy and remote sensing;
  • state estimation and other analysis algorithms for electrophysiology data;
  • fast numerical optimization algorithms for machine learning, including optimal transport regularization for inverse problems;
  • brain-computer interfaces; and
  • normative models of sensory information processing and neural computing, including novel neuromimetic computing approaches.

Technically, we approach this research with the core philosophy that high-dimensional data typically contains information of interest that is low-dimensional and can be represented geometrically (e.g., linear subspaces, sparsity models, manifolds, and dynamical system attractors). Please see our publications for more information on our current research outcomes. SIPLab members are affiliated with the Neural Engineering Center and the Center for Signal and Information Processing at Georgia Tech.
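As a toy illustration of this philosophy (a minimal sketch, not lab code: all names and dimensions here are hypothetical), the snippet below embeds a 2-dimensional latent signal in a 50-dimensional ambient space, adds a small amount of noise, and uses PCA (via the SVD) to show that nearly all of the variance is captured by a 2-dimensional linear subspace:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 500 samples of 50-D data with hidden 2-D structure.
n_samples, ambient_dim, latent_dim = 500, 50, 2
latent = rng.standard_normal((n_samples, latent_dim))    # hidden low-D signal
mixing = rng.standard_normal((latent_dim, ambient_dim))  # random linear embedding
data = latent @ mixing + 0.01 * rng.standard_normal((n_samples, ambient_dim))

# PCA via the SVD of the centered data matrix.
centered = data - data.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
variance = singular_values**2 / n_samples

# Fraction of total variance captured by the top latent_dim components.
explained = variance[:latent_dim].sum() / variance.sum()
print(f"variance explained by {latent_dim} components: {explained:.4f}")
```

Here the geometric structure is a linear subspace; the same principle motivates sparsity models, manifolds, and dynamical attractors when the low-dimensional structure is nonlinear.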


Hiring postdoc

The SIPLab is now hiring for a postdoc position at the intersection of computational neuroscience and closed-loop optogenetic stimulation. Click "read more" to see the full ad.

Upcoming research at AAAI

Stefano Fenu, Marissa Connor and Greg Canal will present their work at AAAI this week. "Active Ordinal Querying for Tuplewise Similarity Learning" is an oral presentation in the Sunday 11:15am session in Concourse A (poster 217, Sunday at 7:20pm). "Representing Closed Transformation Paths in Encoded Network Latent Space" is poster 212 on Monday at 7:20pm. Come check it out!

Upcoming research at NeurIPS

John Lee will present his work on "Hierarchical Optimal Transport for Multimodal Distribution Alignment" on Thursday Dec 12 (10:45am-12:45pm) in East Exhibition Hall B + C #56. John will also present three posters Friday at the Workshop on Optimal Transport for Machine Learning (East Ballroom C). Come check out the latest work from the lab!

Bertrand defends Ph.D.

Congratulations to Dr. Nicholas Bertrand, who defended his Ph.D. thesis titled "Exploiting Structure in Dynamical Systems for Tracking and Dimensionality Reduction". The thesis develops algorithms for multiple stages of the data pipeline, with applications in electrophysiology and infrared imaging. Best of luck!

New NIH grant on closed-loop neural stimulation

A new NIH R01 grant has been awarded to the lab (in collaboration with Garrett Stanley) to develop new algorithms and hardware technologies for closed-loop optogenetic stimulation in neural circuits. Excited about the future of this project for precisely controlling neural circuits!

Lee defends Ph.D.

Congratulations to Dr. John Lee, who defended his Ph.D. thesis titled "Exploiting Low-dimensional Structure and Optimal Transport for Tracking and Alignment". The thesis develops state-of-the-art algorithms with applications in many data domains, including electrophysiology. Best of luck!

Paper accepted at the International Conference on Machine Learning (ICML)

The paper "Active Embedding Search via Noisy Paired Comparisons" by Canal et al. presents new active learning methods for finding user preferences from relational queries ("do you prefer A or B?"). Congrats Greg!

Rozell receives teaching award

Dr. Rozell was selected as the recipient of the Class of 1940 W. Howard Ector Outstanding Teacher Award, the top teaching award at Georgia Tech.