Electronic Thesis and Dissertation Repository

Thesis Format

Monograph

Degree

Doctor of Philosophy

Program

Electrical and Computer Engineering

Supervisor

Samarabandu, Jagath

Abstract

While deep neural networks have demonstrated excellent learning capabilities, the explainability of model predictions remains a challenge due to their black-box nature. Attribution, or feature significance, methods are tools for explaining model predictions; they facilitate model debugging, human-machine collaborative decision making, and the establishment of trust and compliance in critical applications. Recent work has shown that the attributions of neural networks can be distorted by imperceptible adversarial input perturbations, which undermines their reliability as explanations. This thesis addresses the research problem of attribution robustness of neural networks and introduces novel techniques that enable robust training at scale.
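To make the setting concrete, the following is a minimal sketch (in PyTorch, which the abstract does not specify) of a gradient attribution map and the top-k intersection score used here to measure attribution robustness; `model` and the tensor shapes are illustrative assumptions, not the thesis's implementation.

```python
import torch

def gradient_attribution(model, x, target):
    """Saliency attribution: gradient of the target logit w.r.t. the input x of shape (1, C, H, W)."""
    x = x.clone().requires_grad_(True)
    model(x)[0, target].backward()
    return x.grad.detach().abs().sum(dim=1)  # aggregate over colour channels -> (1, H, W)

def topk_intersection(attr_a, attr_b, k):
    """Fraction of the k most salient pixels shared by two attribution maps."""
    top_a = set(attr_a.flatten().topk(k).indices.tolist())
    top_b = set(attr_b.flatten().topk(k).indices.tolist())
    return len(top_a & top_b) / k
```

An attribution attack seeks a small input perturbation that leaves the prediction intact but drives this top-k intersection down; robust training aims to keep it high.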

Firstly, a novel generic framework of loss functions for robust neural net training is introduced, addressing the restrictive nature of existing frameworks. Secondly, the bottleneck issue of high computational cost of existing robust objectives is addressed by deriving a new, simple and efficient robust training objective termed “cross entropy of attacks”. It is 2 to 10 times faster than existing regularization-based robust objectives for training neural nets on image data while achieving higher attribution robustness (3.5% to 6.2% higher top-k intersection).
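The exact form of "cross entropy of attacks" is defined in the thesis itself; as a loudly hedged illustration only, the sketch below evaluates ordinary cross entropy on inputs perturbed by a generic one-step attack, capturing the idea of folding the attack into the classification loss rather than adding a separate attribution regularizer. The FGSM-style attack and all names here are stand-ins, not the thesis's method.

```python
import torch
import torch.nn.functional as F

def one_step_attack(model, x, y, eps=8 / 255):
    """Generic one-step (FGSM-style) perturbation; a stand-in for an attribution attack."""
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    (grad,) = torch.autograd.grad(loss, x_adv)
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

def cross_entropy_of_attacks_loss(model, x, y):
    """Hypothetical reading: plain cross entropy evaluated on attacked inputs."""
    return F.cross_entropy(model(one_step_attack(model, x, y)), y)
```

Because the objective reuses the standard classification loss on attacked inputs, it avoids the extra attribution-matching terms that make regularization-based objectives expensive.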

Thirdly, this thesis presents a comprehensive analysis of three key challenges in attribution-robust neural net training: the high computational cost, the trade-off between robustness and accuracy, and the difficulty of hyperparameter tuning. Empirical evidence and guidelines are provided to help researchers navigate these challenges. Techniques to improve robust training efficiency are proposed, including hybrid standard and robust training, using a fast one-step attack, and optimized computation of integrated gradients, yielding 2x to 6x speed gains; a sketch of the last of these follows.
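As one example of the efficiency levers mentioned above, integrated gradients can be computed by batching all interpolation steps through the network in a single forward/backward pass and lowering the step count. This sketch assumes a PyTorch classifier and a single image of shape (C, H, W); it is illustrative rather than the thesis's exact optimization.

```python
import torch

def integrated_gradients(model, x, target, baseline=None, steps=16):
    """Riemann approximation of integrated gradients along the straight-line path."""
    baseline = torch.zeros_like(x) if baseline is None else baseline
    alphas = torch.linspace(0, 1, steps).view(-1, 1, 1, 1)
    path = baseline + alphas * (x - baseline)   # all steps as one batch: (steps, C, H, W)
    path.requires_grad_(True)
    model(path)[:, target].sum().backward()     # one backward pass for every step
    avg_grad = path.grad.mean(dim=0)            # average gradient along the path
    return (x - baseline) * avg_grad
```

Halving `steps` roughly halves the attribution cost, which is where speed gains of this kind come from, at some cost in approximation accuracy.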

Finally, an investigation of two properties of attribution-robust neural networks is conducted. It is shown that attribution-robust neural nets are also robust against image corruptions, achieving accuracy gains of 3.58% to 11.94% over standard models. Empirical results suggest, however, that robust models do not exhibit resilience against spurious correlations.
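A corruption-robustness comparison of the kind reported above can be run as follows; the Gaussian-blur corruption, `loader`, and `model` are illustrative assumptions, and the quoted accuracy gains come from the thesis, not from this sketch.

```python
import torch
import torchvision.transforms.functional as TF

@torch.no_grad()
def corrupted_accuracy(model, loader, sigma=2.0):
    """Accuracy on test images after applying one example corruption (Gaussian blur)."""
    correct = total = 0
    for x, y in loader:
        x = TF.gaussian_blur(x, kernel_size=9, sigma=sigma)
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```

Running this for a standard and an attribution-robust model on the same loader gives the kind of accuracy gap the paragraph above describes.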

This thesis also presents work on utilizing deep learning classifiers in multiple application domains: an empirical benchmark of deep learning in intrusion detection, an LSTM-based pipeline for detecting structural damage in physical structures, and a self-supervised learning pipeline for classifying industrial time series in a label-efficient manner.

Summary for Lay Audience

Deep learning, a branch of artificial intelligence, has transformed how machines learn from data, making significant advancements in technology. However, understanding how these systems make their decisions has been a challenge, mainly because their processes are not always transparent. This issue of transparency is crucial, especially when these systems are used in areas where trust and reliability are paramount, like in healthcare or self-driving cars.

My research focuses on improving the transparency and reliability of deep learning systems. Specifically, I developed new methods to ensure that the explanations provided by these systems about their decisions remain accurate, even when faced with misleading information that could cause errors. This is important because, just like in human decision-making, understanding why a decision was made is often as critical as the decision itself.

One of the main achievements of my work is a novel approach to training these artificial intelligence systems that is both faster and produces more reliable explanations than previous methods. This means that we can now make large AI systems understandable and trustworthy more efficiently. Additionally, my research has shown that making these systems' decision-making processes more transparent not only helps in understanding their choices but also improves their performance on poor-quality data, such as blurry images.

Beyond enhancing the transparency and reliability of AI decisions, my thesis applies these advanced deep learning techniques to solve practical problems in various fields, from detecting security breaches in computer networks to assessing structural damage in buildings and efficiently analyzing industrial data.

In summary, my thesis aims to make deep learning AI systems more transparent, understandable, and trustworthy, ensuring that as these technologies become more integrated into our lives, they can be relied upon to make decisions in a way that we can understand and trust.

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
