Welcome to BrainSynth!
Our paper has been accepted at MICCAI 2020. The preprint can be found here.
If you use our code, please cite our paper as follows:
Ran Liu, Cem Subakan, Aishwarya H. Balwani, Jennifer D. Whitesell, Julie A. Harris, Sanmi Koyejo, and Eva Dyer. "A generative modeling approach for interpreting population-level variability in brain structure." bioRxiv (2020).
Methods
Understanding how neural structure varies across individuals is critical for characterizing the effects of disease, learning, and aging on the brain. In our work BrainSynth, we introduce a deep generative modeling approach to find different modes of variation across many individuals.
Our results demonstrate that by coupling generative modeling frameworks with structured perturbations, it is possible to probe the latent space and gain insight into the representations of brain structure formed in deep neural networks.
Overview of our bi-directional technique.
We trained a variational autoencoder (VAE) on a collection of auto-fluorescence images from over 1,700 mouse brains at 25 micron resolution. To tap into the learned factors and validate the model's expressiveness, we developed a novel bi-directional technique for interpreting the latent space: we make structured perturbations both to the high-dimensional inputs of the network and to the low-dimensional latent variables at its bottleneck.
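Below is a minimal sketch of what such a setup might look like. It is not the architecture from our codebase; the layer sizes, latent dimensionality, and assumed 64x64 single-channel inputs are illustrative only, and `perturb_latent` stands in for the latent-space side of the bi-directional probe (shifting one latent coordinate and decoding the result).

```python
# Minimal sketch (not the released implementation): a convolutional VAE for
# single-channel image slices, plus a latent-perturbation probe.
# All sizes here are illustrative assumptions, not the paper's actual model.
import torch
import torch.nn as nn


class ConvVAE(nn.Module):
    def __init__(self, latent_dim=8):
        super().__init__()
        # Encoder: 1x64x64 image -> features -> (mu, logvar)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),    # 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 8x8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)
        # Decoder: latent vector -> reconstructed image
        self.fc_dec = nn.Linear(latent_dim, 128 * 8 * 8)
        self.decoder = nn.Sequential(
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def encode(self, x):
        h = self.encoder(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        return self.decoder(self.fc_dec(z))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar


def perturb_latent(model, x, dim, deltas):
    """Latent-space probe: decode copies of x's latent code with a single
    coordinate `dim` shifted by each value in `deltas`."""
    with torch.no_grad():
        mu, _ = model.encode(x)
        outputs = []
        for d in deltas:
            z = mu.clone()
            z[:, dim] += d
            outputs.append(model.decode(z))
    return outputs
```

The complementary input-space direction of the probe would instead apply structured perturbations to the image itself (for example, altering a localized region of the input) and track how the inferred latent code moves in response.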
Team
- Ran Liu (RanL)
- Cem Subakan
- Aishwarya H. Balwani (AishwaryaHB)
- Jennifer Whitesell
- Julie Harris
- Sanmi Koyejo
- Eva Dyer (evadyer)