
Contrastive Learning of Auditory Representations
Abstract
Learning rich visual representations using contrastive self-supervised learning has been extremely successful. However, whether a similar approach can be used to learn efficient auditory and audio-visual representations remains an open question. In this thesis, we extend prior self-supervised methods to learn better auditory and audio-visual representations. We introduce several data augmentations suitable for auditory and audio-visual data and evaluate their impact on predictive performance, and we demonstrate that training with supervised and contrastive losses simultaneously yields better representations than self-supervised pre-training followed by supervised fine-tuning. By combining these methods, our framework achieves a significant improvement in predictive performance over the supervised approach while using substantially less labeled data. Moreover, compared to the self-supervised approach, our framework converges faster and learns significantly better representations.