Electronic Thesis and Dissertation Repository

Thesis Format

Integrated Article

Degree

Doctor of Philosophy

Program

Computer Science

Supervisor

Sedig, Kamran

2nd Supervisor

Lizotte, Daniel J.

Abstract

Artificial Intelligence (AI) systems show considerable potential for providing decision support across various domains. In this context, the methodology of eXplainable AI (XAI) becomes crucial, as it aims to enhance the transparency and comprehensibility of AI models' decision-making processes. However, a review of XAI methods and their application in clinical decision support reveals notable gaps in the XAI methodology, particularly concerning the effective communication of explanations to users.

This thesis aims to bridge these existing gaps by presenting in Chapter 3 a framework designed to communicate AI-generated explanations effectively to end-users. This is particularly pertinent in fields like healthcare, where the successful implementation of AI decision support hinges on the ability to convey actionable insights to medical professionals.

Building upon this framework, subsequent chapters illustrate how visualization and visual analytics can be used with XAI in the context of clinical decision support. Chapter 4 introduces a visual analytics tool designed for ranking and triaging patients in the intensive care unit (ICU). Leveraging various XAI methods, the tool enables healthcare professionals to understand how the ranking model functions and how individual patients are prioritized. Through interactivity, users can explore influencing factors, evaluate alternative scenarios, and make informed decisions for optimal patient care.

The pivotal role of transparency and comprehensibility within machine learning models is explored in Chapter 5. Leveraging explainable AI techniques and visualization, it investigates the factors contributing to model performance and errors. Furthermore, it examines the scenarios in which the model performs well, ultimately fostering user trust by shedding light on the model's strengths and capabilities.

Recognizing the ethical concerns associated with predictive models in health, Chapter 6 addresses potential bias and discrimination in ranking systems. Using the proposed visual analytics tool, users can assess the fairness and equity of the system, promoting equitable treatment. This research emphasizes the need for unbiased decision-making in healthcare.

Having developed the framework and illustrated ways of combining XAI with visual analytics in the service of clinical decision support, the thesis concludes by identifying important future directions of research in this area.

Summary for Lay Audience

Our study centers on harnessing the potential of Artificial Intelligence (AI) to enhance decision-making in various fields, with a special focus on healthcare. We address the challenge of interpreting AI systems' complex decision-making through the methodology of eXplainable AI (XAI). Our focus lies in bridging the gap between complex AI insights and user comprehension, fostering trust and informed choices. To achieve this, we propose a framework that combines Visual Analytics (VA) and XAI methodologies, creating a more intuitive way to communicate AI-generated insights to users. Through case studies, we demonstrate how this combined approach enhances transparency in AI decisions, especially in healthcare scenarios.

The thesis comprises seven chapters. The first chapter discusses the motivation and provides an overview of the thesis structure. The second chapter defines the key terms and terminology used throughout the thesis. The third chapter reviews existing XAI methods and their relevance to clinical decision support, pinpointing areas where they fall short in practical application. It introduces an interactive visualization framework to bridge this gap, aiding healthcare professionals in better comprehending AI models.

Chapter 4 introduces a visual analytics tool designed to explain ranking systems, with a case study focusing on patient ranking and prioritization in intensive care units (ICUs). Leveraging various XAI methods, this tool enables healthcare providers to grasp the inner workings of ranking models and prioritize patients based on critical factors.

Chapter 5 employs an XAI technique to identify areas where a machine learning model underperforms, offering users valuable insights so they can approach such situations with caution. This is illustrated through case studies on the detection of septic shock in ICUs.

Chapter 6 introduces a visual analytics tool tailored to investigate potential biases within ranking systems, illustrated by a case study on ICU admissions. Chapter 7 summarizes the previous chapters and concludes the thesis, encapsulating its key findings and contributions. Overall, our research aims to strengthen the connection between AI and healthcare professionals, fostering transparency, trust, and fairness in AI-driven decision support.
