Degree: Doctor of Philosophy
Program: Neuroscience and Computer Science
Supervisors: Daley, Mark J.; Goodale, Melvyn A.
In biological neural networks (BNNs), structure provides a set of guard rails by which function is constrained to solve tasks effectively, handle multiple stimuli simultaneously, adapt to noise and input variations, and conserve energy. Such features are desirable for artificial neural networks (ANNs), which, unlike their organic counterparts, are practically unbounded and, in many cases, initialized with random weights or arbitrary structural elements. In this dissertation, we consider an inductive base case for imposing BNN constraints onto ANNs. We select explicit connectome topologies from the fruit fly (one of the smallest BNNs) and impose these onto a multilayer perceptron (MLP) and a reservoir computer (RC), in order to craft “fruit fly neural networks” (FFNNs). We study the impact on performance, variance, and prediction dynamics of using FFNNs compared to non-FFNN models on odour classification, chaotic time-series prediction, and multifunctionality tasks. From a series of four experimental studies, we observe that the fly olfactory brain is well suited to recalling and making predictions from chaotic input data, with a capacity for executing two mutually exclusive tasks from distinct initial conditions, and with low sensitivity to hyperparameter fluctuations that can lead to chaotic behaviour. We also observe that the clustering coefficient of the fly network, and the particular positions of its non-zero weights, are important for reducing model variance. These findings suggest that BNNs have distinct advantages over arbitrarily weighted ANNs, notably advantages arising from their structure alone. Further work with connectomes drawn from across species will be useful in finding shared topological features that can further enhance ANNs, and machine learning overall.
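The central construction described above, imposing a connectome topology onto an ANN, can be sketched in a few lines. The dissertation's actual FFNNs use the fruit fly olfactory connectome; in this illustrative sketch a sparse random binary matrix (`connectome_mask`) stands in for that adjacency matrix, and the reservoir size `n`, connection density, and target spectral radius are arbitrary choices, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # reservoir size; a real connectome would fix this

# Hypothetical stand-in for a connectome adjacency matrix: a sparse
# binary matrix marking which neuron-to-neuron connections exist.
connectome_mask = (rng.random((n, n)) < 0.1).astype(float)

# Reservoir weights constrained to the connectome topology:
# zero wherever the connectome has no edge.
W = rng.normal(0.0, 1.0, (n, n)) * connectome_mask

# Rescale to a target spectral radius below 1, a common heuristic
# for echo-state stability in reservoir computing.
radius = max(abs(np.linalg.eigvals(W)))
W *= 0.9 / radius

W_in = rng.normal(0.0, 1.0, (n, 1))  # input weights

def step(x, u):
    """One reservoir update: x' = tanh(W x + W_in u)."""
    return np.tanh(W @ x + W_in @ u)

# Drive the connectome-constrained reservoir with a toy input signal.
x = np.zeros((n, 1))
for u in np.sin(np.linspace(0, 2 * np.pi, 20)):
    x = step(x, np.array([[u]]))
```

The key property is that the elementwise mask survives training and rescaling: every absent connectome edge stays a zero weight, so the biological topology, rather than a random one, shapes the reservoir dynamics.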
Summary for Lay Audience
Biologically motivated brain structure provides a unique set of constraints, sharpened through evolutionary pressures to achieve efficiency, versatility, and the capacity to perform multiple tasks simultaneously. Such outcomes are desirable for artificial neural networks (ANNs), which are often initialized with randomized connections or network weights. Herein, we conduct a set of four studies to understand the benefits, disadvantages, and dynamics of model behaviour that result from using explicit brain structure – via a map of brain connectivity – to inform the structure of numerous ANNs. As an initial step into this line of investigation, we start small, using a brain map from the common fruit fly. We determine how well a fly-based ANN classifies odours, how well it makes predictions across time, and the extent to which it can perform two tasks simultaneously. We find that the fly network topology translates well into a machine learning architecture for time-series prediction and multi-tasking, and that it resists parameter changes which typically lead to model behaviours characterized by high sensitivity to initial conditions. Moreover, we report that the position of neurons (in relation to the others they connect to) and the way that neurons cluster together are important, in a machine learning context, for reducing performance variability. Our findings suggest that the fly brain is wired in a way that is beneficial for learning from time-series data and for completing multiple tasks concurrently. A follow-up idea from this work is that such behavioural advantages are not unique to the fly, but common across brain networks. To investigate this, we will run similar experiments on other animal brain maps.
Morra, Jacob, "Connectome-Constrained Artificial Neural Networks" (2023). Electronic Thesis and Dissertation Repository. 9518.
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.