
Understanding Protein Deep Learning Models through Explainability
Abstract
This thesis investigates the application of Explainable Artificial Intelligence (XAI) techniques to deep learning models for protein analysis, specifically protein language models and protein interaction site prediction models. Despite the increasing adoption of these models in bioinformatics, their intrinsic complexity makes them black boxes, obscuring their decision-making processes.
This research integrates explanation methods into this context and analyzes the resulting interpretations with biology-specific statistical tests to enhance the transparency and interpretability of the models. Through a comprehensive set of experiments, this work evaluates the efficacy of current XAI methods applied to protein analysis.