Welcome to SwapVAE (NeurIPS 2021 oral)!
The paper preprint can be found here.
If you use our code, please cite our paper as below.
[cite] Ran Liu, Mehdi Azabou, Max Dabagia, Chi-Heng Lin, Mohammad Gheshlaghi Azar,
Keith B. Hengen, Michal Valko, and Eva L. Dyer. "Drop, Swap, and Generate: A
Self-Supervised Approach for Generating Neural Activity." NeurIPS (2021).
Abstract
Meaningful and simplified representations of neural activity can yield insights into how and what information is being processed within a neural circuit. However, without labels, finding representations that reveal the link between the brain and behavior can be challenging. Here, we introduce a novel unsupervised approach for learning disentangled representations of neural activity called SwapVAE. Our approach combines a generative modeling framework with an instance-specific alignment loss that tries to maximize the representational similarity between transformed views of the input (brain state). These transformed (or augmented) views are created by dropping out neurons and jittering samples in time, which intuitively should lead the network to a representation that maintains both temporal consistency and invariance to the specific neurons used to represent the neural state. Through evaluations on both synthetic data and neural recordings from hundreds of neurons in different primate brains, we show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
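The two augmentations described above (neuron dropout and temporal jitter) can be sketched roughly as follows. This is a minimal illustration on a `(time, neurons)` array of spike counts; the function name and the default `drop_p` / `max_jitter` values are illustrative, not the values used in the paper.

```python
import numpy as np

def augment(x, drop_p=0.2, max_jitter=2, rng=None):
    """Create one augmented view of a neural activity window.

    x: array of shape (time, neurons) of spike counts.
    drop_p and max_jitter are illustrative defaults, not the paper's values.
    """
    rng = np.random.default_rng(rng)
    # Neuron dropout: zero out a random subset of neurons across the window.
    keep = rng.random(x.shape[1]) >= drop_p
    view = x * keep
    # Temporal jitter: shift the window in time by a small random offset.
    shift = int(rng.integers(-max_jitter, max_jitter + 1))
    view = np.roll(view, shift, axis=0)
    return view

x = np.arange(12, dtype=float).reshape(4, 3)  # 4 time bins, 3 neurons
v = augment(x, drop_p=0.5, max_jitter=1, rng=0)
```

Both views of the same window are then encoded, and the alignment loss pulls their representations together.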
Methods
SwapVAE provides a new approach to decompose the latent factors that drive neural population activity. Our method aims to learn latent spaces that reveal and disentangle different sources of variability without using labels.
SwapVAE is loosely inspired by methods used in computer vision that aim to decompose images into their content and style. To decompose brain states, we consider the execution of movements and their representation within the brain. The content in this case is knowing where to go (the target location), and the style is the exact execution of the movement (the movement dynamics). SwapVAE disentangles the neural representation of movement into these components.
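Concretely, the latent vector is split into a content block and a style block, and an alignment term encourages the content blocks of two views of the same brain state to agree. The sketch below is a simplified illustration, assuming a fixed `content_dim` and using a mean-squared distance as a stand-in for the paper's alignment term:

```python
import numpy as np

def split_latent(z, content_dim):
    # Split a latent vector into its content and style blocks.
    return z[..., :content_dim], z[..., content_dim:]

def alignment_loss(z1, z2, content_dim):
    # Pull the content blocks of two views together; the style blocks are
    # left free to capture view-specific variability.
    # (Mean-squared distance here is illustrative, not the paper's exact term.)
    c1, _ = split_latent(z1, content_dim)
    c2, _ = split_latent(z2, content_dim)
    return float(np.mean((c1 - c2) ** 2))
```

Identical content blocks give zero loss, regardless of how the style blocks differ.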
Contributions
Our contributions are as follows:
- We propose a generative method, SwapVAE, that can both (i) learn a representation of neural activity that reveals meaningful and interpretable latent factors and (ii) generate realistic neural activity.
- To further encourage disentanglement, we introduce a novel latent space augmentation called BlockSwap, where we swap the content blocks of two views' latent representations and ask the network to reconstruct each original view from the other view's content.
- We introduce metrics to quantify the disentanglement of our representations of behavior and apply them to neural datasets from different non-human primates to gain insights into the link between neural activity and behavior.
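The BlockSwap operation above can be sketched as a simple exchange of the content blocks of two latent vectors; the decoder is then asked to reconstruct each original view from the swapped latent. This is a minimal sketch assuming flat latent vectors and a fixed `content_dim`:

```python
import numpy as np

def block_swap(z1, z2, content_dim):
    """BlockSwap: exchange the content blocks of two latent vectors.

    Each latent is laid out as [content | style]; the style blocks stay
    with their original views while the content blocks are traded.
    """
    z1_swapped = np.concatenate(
        [z2[..., :content_dim], z1[..., content_dim:]], axis=-1)
    z2_swapped = np.concatenate(
        [z1[..., :content_dim], z2[..., content_dim:]], axis=-1)
    return z1_swapped, z2_swapped

z1 = np.array([1.0, 1.0, 1.0, 0.0, 0.0])  # view 1: content 1s, style 0s
z2 = np.array([2.0, 2.0, 2.0, 9.0, 9.0])  # view 2: content 2s, style 9s
s1, s2 = block_swap(z1, z2, content_dim=3)
```

Since both views come from the same brain state, a well-disentangled content block should let the decoder reconstruct either view after the swap.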