Electronic Thesis and Dissertation Repository

Thesis Format

Integrated Article

Degree

Doctor of Philosophy

Program

Computer Science

Collaborative Specialization

Artificial Intelligence

Supervisor

Mercer, Robert E.

2nd Supervisor

Rudzicz, Frank

Affiliation

Dalhousie University, University of Toronto, Vector Institute

Abstract

Natural Language Understanding (NLU) resides at the intersection of artificial intelligence, linguistics, and computer science, with the goal of empowering machines to comprehend and interpret human languages in a way that is both significant and contextually pertinent. The intrinsic complexity of human language, marked by its subtleties, cultural variances, and dependence on context, poses a significant challenge to NLU. The real world is a vast repository of knowledge that encompasses not only facts but also complex relationships, dynamic concepts, and cultural subtleties. This external knowledge represents the context that is often implicitly assumed in human communication. For machines to fully capture the nuances of language, access to this wide array of external knowledge is essential. By incorporating this knowledge, NLU systems can transcend the basic syntax and semantics of text, facilitating a deeper understanding that resonates with human cognition and perception. In this dissertation, to bridge the gap between external knowledge and NLU systems, I investigate knowledge-grounded techniques aimed at enhancing the capabilities of NLU systems, with a specific focus on their application in extreme multi-label text classification (XMTC) within the biomedical and clinical literature domains.

This thesis makes three contributions to the integration of external knowledge into NLU systems. First, it incorporates knowledge within the attention component of a multi-label deep learning framework. This novel approach employs a dynamic knowledge-enhanced mask attention mechanism that merges external knowledge with label features to dynamically construct an attention mask for each biomedical article. This method effectively narrows down the candidate label set, thereby enhancing classification performance. Second, I introduce a retrieve-and-re-rank framework specifically designed for XMTC tasks, where external knowledge is integrated at the retrieval stage by exploiting the correlation between labels and knowledge. This strategy refines the selection of candidate labels, thus improving indexing accuracy and efficiency. Lastly, external knowledge is integrated at the re-ranking stage by infusing label-centric knowledge into the ranker through zero-shot contrastive learning. This approach enables the model to predict unseen labels, improving the efficiency of the XMTC task.
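To make the first contribution more concrete, the following is a minimal sketch of a knowledge-enhanced mask attention step. The tensor names (text_repr for token representations, label_repr for label feature embeddings, and a binary knowledge_mask derived from external knowledge) and the shapes are assumptions for exposition, not the thesis implementation.

import torch
import torch.nn.functional as F

def masked_label_attention(text_repr, label_repr, knowledge_mask):
    """Attend over tokens once per label, but keep only the labels
    that the external knowledge deems plausible for this article.

    text_repr:       (seq_len, dim)    token representations
    label_repr:      (num_labels, dim) label feature embeddings
    knowledge_mask:  (num_labels,)     1 if knowledge keeps the label
    """
    # Label-wise attention scores over tokens: (num_labels, seq_len)
    scores = label_repr @ text_repr.T
    weights = F.softmax(scores, dim=-1)
    # Label-specific document representations: (num_labels, dim)
    doc_per_label = weights @ text_repr
    # Per-label logits via a dot product with the label features
    logits = (doc_per_label * label_repr).sum(dim=-1)
    # The knowledge-derived mask removes implausible candidates,
    # shrinking the effective label set before the sigmoid.
    logits = logits.masked_fill(knowledge_mask == 0, float("-inf"))
    return torch.sigmoid(logits)

Masking the logits to negative infinity before the sigmoid drives the probability of implausible labels to zero, which is how a knowledge-derived mask can narrow the candidate label set in a multi-label setting.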

Summary for Lay Audience

Natural Language Understanding (NLU) stands at the crossroads of artificial intelligence, linguistics, and computer science, aiming to enable machines to grasp and interpret human language in a meaningful and context-aware manner. Human language, with its intricate nuances, cultural diversity, and context-dependency, presents a formidable challenge to NLU. The real world is a treasure trove of knowledge, filled not just with facts, but with complex relationships, evolving ideas, and subtle cultural nuances. This external knowledge provides the contextual backdrop often taken for granted in human conversations. For machines to truly understand the subtleties of language, they must tap into this vast expanse of external knowledge. Integrating this knowledge allows NLU systems to move beyond the mere structure and meaning of words and sentences, enabling a richer understanding akin to human cognition and perception.

In this thesis, I aim to enhance NLU tasks by weaving external knowledge into the fabric of the systems, particularly focusing on extreme multi-label text classification (XMTC) in biomedical and clinical texts. I explore two key questions: "What external knowledge should be considered?" and "How can external knowledge be integrated?" For XMTC tasks in particular, the choice of external knowledge is crucial for providing the necessary context that aids in accurately categorizing texts with multiple labels. To tackle the first question, I explore a diverse array of external knowledge sources, such as metadata, medical ontologies, and hierarchical label information. The second question focuses on the strategies for effectively incorporating the selected external knowledge into NLU models. This involves exploring various approaches such as attention mechanisms that allow models to focus on relevant parts of the external knowledge in relation to the text being processed, graph and statistical methods for mapping relationships between concepts, and embedding techniques for encoding and incorporating knowledge into the learning process of models. By thoroughly exploring these questions, the thesis aims to provide a comprehensive framework for leveraging external knowledge in NLU tasks.
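As a simple illustration of the embedding-based retrieve-and-re-rank strategy mentioned above, the sketch below scores labels in two stages. The cosine-similarity retrieval, the trivial dot-product ranker, and all variable names are placeholders chosen for exposition; the thesis models replace the second stage with a learned ranker infused with label-centric knowledge.

import numpy as np

def retrieve_candidates(doc_vec, label_vecs, k=50):
    """Stage 1: keep the k labels whose embeddings are closest to
    the document embedding (cosine similarity)."""
    sims = label_vecs @ doc_vec / (
        np.linalg.norm(label_vecs, axis=1) * np.linalg.norm(doc_vec) + 1e-9
    )
    return np.argsort(-sims)[:k]

def rerank(doc_vec, label_vecs, candidates):
    """Stage 2: score only the retrieved candidates; here a plain
    dot product stands in for a learned, knowledge-infused ranker."""
    scores = label_vecs[candidates] @ doc_vec
    return candidates[np.argsort(-scores)]

The design point is efficiency: with tens of thousands of candidate labels (as with MeSH-scale vocabularies), retrieval keeps only a small candidate set, so the more expensive ranker never has to score the full label space.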
