Doctor of Philosophy
Evolutionary algorithms have recently re-emerged as powerful tools for machine learning and artificial intelligence, especially when combined with advances in deep learning developed over the last decade. In contrast to fixed architectures and rigid learning algorithms, we leverage the open-endedness of evolutionary algorithms to make both theoretical and methodological contributions to deep reinforcement learning. This thesis explores and develops two major areas at the intersection of evolutionary algorithms and deep reinforcement learning: generative network architectures and behaviour-based optimization. Across three distinct contributions, both theoretical and experimental methods are applied to deliver a novel mathematical framework and experimental method for generative, modular neural network architecture search for reinforcement learning, as well as a generalized formulation of a behaviour-based optimization framework for reinforcement learning called novelty search. Experimental results indicate that behaviour-based optimization and neural architecture search can each be used to improve learning on the popular Atari 2600 benchmark compared to DQN, a popular gradient-based method. These results are in line with related work demonstrating that strictly gradient-free methods are competitive with gradient-based reinforcement learning. These contributions, together with other successful combinations of evolutionary algorithms and deep learning, demonstrate that alternatives to the architectures and learning algorithms conventionally used in deep learning should be seriously investigated in an effort to drive progress in artificial intelligence.
Summary for Lay Audience
Artificial neural networks (ANNs) have become popular tools for implementing many kinds of machine learning and artificially intelligent systems. While popular, there are many outstanding questions about how ANNs should be structured and how they should be trained. Of particular interest is the branch of machine learning called reinforcement learning, which focuses on training artificial agents to perform complex, sequential tasks, like playing video games or navigating a maze. In this thesis, three contributions to research at the intersection of ANNs and reinforcement learning are presented. The first is a mathematical language that generalizes multiple contemporary ways of describing neural network organization. The second is an evolutionary algorithm that uses this mathematical language to define a machine learning algorithm for ANNs in which the network's architecture can be modified by the algorithm during training. The third is a related algorithm that experiments with an alternative method for training ANNs in reinforcement learning called novelty search, which promotes behavioural diversity over greedy, reward-seeking behaviour. Experimental results indicate that evolutionary algorithms – a form of random search guided by evolutionary principles of selection pressure – are competitive alternatives to conventional deep learning algorithms such as error backpropagation. Results also show that architectural mutability – the ability for network architectures to change automatically during training – can dramatically improve learning performance in games over contemporary methods.
Jackson, Ethan C., "Algebraic Neural Architecture Representation, Evolutionary Neural Architecture Search, and Novelty Search in Deep Reinforcement Learning" (2019). Electronic Thesis and Dissertation Repository. 6510.
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.