Mine Your Own vieW: Self-Supervised Learning Through Across-Sample Prediction

Mehdi Azabou¹, Mohammad Gheshlaghi Azar², Ran Liu¹, Chi-Heng Lin¹, Erik C. Johnson³, Kiran Bhaskaran-Nair⁴, Max Dabagia¹, Keith Hengen⁴, William Gray-Roncal³, Michal Valko⁵, Eva Dyer¹,⁶
¹Georgia Tech, ²DeepMind London, UK, ³Johns Hopkins University Applied Physics Laboratory, ⁴Washington University in St. Louis, ⁵DeepMind Paris, ⁶Emory University

Paper · Code · Docs · arXiv

Abstract
State-of-the-art methods for self-supervised learning (SSL) build representations by maximizing the similarity between different augmented “views” of a sample. Because these approaches try to match views of the same sample, they can be too myopic and fail to produce meaningful results when augmentations are not sufficiently rich. This motivates the use of the dataset itself to find similar, yet distinct, samples to serve as views for one another. In this paper, we introduce Mine Your Own vieW (MYOW), a new approach for building across-sample prediction into SSL. The idea behind our approach is to actively mine views, finding samples that are close in the representation space of the network, and then predict, from one sample's latent representation, the representation of a nearby sample. We apply MYOW to benchmark image datasets and to a new application in neuroscience, where we show that MYOW is competitive with state-of-the-art methods on downstream tasks. When applied to neural datasets, MYOW outperforms other self-supervised approaches in all examples (in some cases by more than 10%), and surpasses the supervised baseline on most datasets. By learning to predict the latent representation of nearby samples, we show that it is possible to learn good representations in new domains where augmentations are still limited.
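
To make the across-sample prediction idea concrete, here is a minimal PyTorch sketch of the two steps the abstract describes: mining a view by nearest-neighbor search in representation space, then predicting the mined neighbor's (stop-gradient) representation. The function names, the linear predictor, and the 128-dimensional representations are illustrative assumptions, not the reference implementation (see the Code link for that); details such as the target network and the exclusion of self-matches are omitted here for brevity.

```python
# Minimal sketch of MYOW's two steps (illustrative, not the reference code).
import torch
import torch.nn.functional as F

def mine_views(online_reps: torch.Tensor, pool_reps: torch.Tensor) -> torch.Tensor:
    """For each sample, mine a view: the closest candidate in representation space.

    (The full method mines among top-k neighbors of a candidate pool and
    excludes trivial self-matches; both are omitted here for brevity.)
    """
    # Cosine similarity between each online representation and every candidate.
    sims = F.normalize(online_reps, dim=-1) @ F.normalize(pool_reps, dim=-1).T
    nn_idx = sims.argmax(dim=-1)  # index of the closest candidate per sample
    return pool_reps[nn_idx]      # mined views, one per sample

def across_sample_loss(predictor: torch.nn.Module,
                       online_reps: torch.Tensor,
                       mined_reps: torch.Tensor) -> torch.Tensor:
    """Predict the mined neighbor's representation from the online one.

    The target is detached (stop-gradient), as in BYOL-style objectives;
    the loss is negative cosine similarity.
    """
    preds = F.normalize(predictor(online_reps), dim=-1)
    targets = F.normalize(mined_reps.detach(), dim=-1)
    return -(preds * targets).sum(dim=-1).mean()

# Toy usage: 32 samples, a pool of 256 candidates, 128-d representations.
predictor = torch.nn.Linear(128, 128)
online = torch.randn(32, 128)
pool = torch.randn(256, 128)
loss = across_sample_loss(predictor, online, mine_views(online, pool))
```

In this sketch the mined neighbor plays the role that a second augmented view plays in standard SSL objectives, which is exactly what lets the method work when augmentations alone are not rich enough.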