Electronic Thesis and Dissertation Repository

Thesis Format

Integrated Article

Degree

Master of Science

Program

Neuroscience

Supervisor

Mur, Marieke

2nd Supervisor

Daley, Mark

Abstract

Humans have the ability to learn visual representations of the surrounding environment with limited supervision. A major challenge in cognitive neuroscience is to understand the neural computations that give rise to this ability. Recent work has started modelling the neural computations implemented by the ventral visual system using deep convolutional neural networks (DCNNs). Despite their successes, DCNNs leave substantial amounts of variance in brain representations unexplained. We hypothesize that this may in part be due to DCNNs' sole reliance on supervision during representation learning. In this thesis, we investigate the role of the training algorithm (supervised versus unsupervised) in shaping the representational similarity between computational models and brain data from human inferior temporal cortex. We show that one implementation of unsupervised contrastive learning yields more brain-like representations than the selected supervised learning method. Our findings suggest that human visual representations may in part arise from unsupervised learning during development.

Summary for Lay Audience

When we open our eyes, we instantly recognize the visual world around us. How does the brain make sense of the outside visual world so quickly? To address this question, we need to build computational models of the human visual system. Recent advances in deep learning have enabled the development of computational models that can perform real-life tasks such as object recognition. Like humans, these models need to 'develop' over a period of extensive learning. In this thesis, we examine the impact of learning goals on how computational models learn to represent the outside visual world. We test whether certain learning goals give rise to more human-like object representations than others. We focus on one implementation of unsupervised learning, like a child discovering the world on their own, and one implementation of supervised learning, like a parent pointing at objects and naming them. We show that unsupervised learning gives rise to object representations that emphasize categories of behavioural relevance, including faces and animals. Furthermore, object representations learned through unsupervised learning show a closer match to human object representations than those learned through supervised learning. Our findings are consistent with the idea that unsupervised learning plays a role in object learning during human development.
