
Thesis Format
Monograph
Degree
Doctor of Philosophy
Program
Health and Rehabilitation Sciences
Supervisor
Parsa, Vijay
Abstract
Hearing loss affects approximately 1.59 billion individuals globally, with projections indicating that nearly 2.5 billion will be impacted by 2050. Despite this increasing prevalence, many individuals delay seeking help, even in well-resourced settings, leading to a significant gap between clinical diagnoses and self-reported difficulties. In Canada, while 19.4% of the population exhibits measurable hearing loss, only 3.7% perceive their impairment, highlighting the need for improved hearing assessment methodologies. A key challenge in hearing impairment is difficulty understanding speech in noise, which traditional pure-tone audiometry fails to capture effectively. This thesis investigates the integration of Automatic Speech Recognition (ASR) models into speech-in-noise (SiN) testing to enhance hearing aid evaluations and enable automated, scalable, and clinically relevant assessments. The research examines ASR models under diverse acoustic conditions, demonstrating their effectiveness in quantifying Signal-to-Noise Ratio (SNR) loss, an essential measure of functional hearing ability. Results show that ASR-based scoring aligns closely with audiologist evaluations, reinforcing the potential of these models to support clinical decision-making and improve access to reliable hearing assessments.
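To make this scoring approach concrete, the short Python sketch below computes a word error rate between an ASR transcript and a reference sentence and converts key-word counts into a QuickSIN-style SNR loss. The function names are illustrative, the 25.5-minus-key-words rule follows the published QuickSIN scoring convention, and the thesis software may implement these steps differently.

def word_error_rate(reference: str, hypothesis: str) -> float:
    # Word-level Levenshtein distance, normalized by reference length.
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

def quicksin_snr_loss(key_words_correct: list[int]) -> float:
    # One QuickSIN list: six sentences, five key words each, presented from 25 to 0 dB SNR.
    return 25.5 - sum(key_words_correct)

print(quicksin_snr_loss([5, 5, 4, 3, 2, 1]))  # 25.5 - 20 key words correct = 5.5 dB SNR loss
print(word_error_rate("the boy fell from the window",
                      "the boy fell from a window"))  # one substitution in six words, about 0.17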
A key outcome of this research is the development of an automated desktop graphical user interface (GUI), built in two versions, for administering SiN tests. This tool facilitates test playback, response recording, and real-time SNR loss computation while enabling seamless de-identified data uploads to cloud-based ASR services such as Amazon Web Services (AWS) and Microsoft Azure. The study also explores the electroacoustic evaluation of hearing aids under various speech and noise configurations, including the impact of face masks, directional microphone settings, and reverberation levels. Findings reveal that ASR models can effectively process hearing aid test recordings without requiring clean reference signals, offering a more scalable alternative to traditional electroacoustic assessments.
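As a rough illustration of the cloud-based transcription step, the fragment below sends a de-identified response recording to Azure Speech-to-Text using the Azure Speech SDK for Python. The subscription key, region, file name, and use of single-utterance recognition are placeholder assumptions; the thesis GUI's actual upload and scoring workflow, including its AWS path, may differ.

import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials and file name (assumptions, not the thesis configuration).
speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
audio_config = speechsdk.audio.AudioConfig(filename="response_deidentified.wav")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config,
                                        audio_config=audio_config)

result = recognizer.recognize_once()  # single-utterance recognition
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    # The transcript can then be scored against the known test sentence,
    # e.g. with a word-error-rate routine, to update the SNR loss estimate.
    print(result.text)
else:
    print("Recognition failed:", result.reason)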
To further bridge accessibility gaps, a cross-platform mobile application was developed, integrating an on-device ASR model for self-administered SiN testing. The app enables individuals to assess their speech-in-noise performance remotely, supporting offline functionality for users in areas with limited internet access. Pilot testing with normal-hearing adults demonstrated that the mobile app reliably captures and processes SiN responses across different microphone configurations and loudspeaker setups, achieving performance comparable to cloud-based ASR solutions.
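For the offline use case, the lines below sketch how a recording could be transcribed entirely on the local device. OpenAI's Whisper is used here only as a stand-in for an on-device model; the mobile app's actual ASR engine may differ.

import whisper

model = whisper.load_model("base")              # compact model suitable for local inference
result = model.transcribe("sin_response.wav")   # de-identified SiN response recording
print(result["text"])                           # transcript to score against the test sentence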
This work contributes to the field of audiology by advancing ASR-driven hearing assessments, improving accessibility, and reducing reliance on clinic-based evaluations. By integrating ASR technologies into automated testing frameworks and mobile applications, this research lays the groundwork for more inclusive, efficient, and scalable solutions in hearing healthcare. These findings have direct implications for early intervention strategies, hearing aid optimization, and the broader adoption of tele-audiology solutions in real-world environments.
Summary for Lay Audience
Hearing loss is a growing global health issue, affecting about 1.59 billion people today, with numbers expected to reach 2.5 billion by 2050. Despite the availability of hearing care, many people delay getting tested, even in countries with good healthcare services. In Canada, nearly 20% of the population has measurable hearing loss, yet only a small percentage report difficulty, showing a gap between professional assessments and how people perceive their own hearing. One major challenge for those with hearing loss is understanding speech in noisy environments, which traditional hearing tests, such as pure-tone audiograms, do not always measure effectively.
This research focuses on improving speech-in-noise (SiN) testing by using Automatic Speech Recognition (ASR) technology to enhance the scoring of existing speech-in-noise tests. These tests, such as the Quick Speech-in-Noise (QuickSIN) test and the Connected Speech Test (CST), involve playing pre-recorded spoken sentences in background noise and asking individuals to repeat what they hear. Traditional scoring methods require audiologists to manually evaluate responses, which can be time-consuming and subjective. ASR technology automates this process by transcribing the recorded responses and comparing them to the original test sentences, providing objective measures such as word error rate (WER) and speech intelligibility scores. These insights give a clearer picture of real-world hearing difficulties and can help with selecting appropriate hearing aids and setting realistic expectations for users.
To support this, a new software tool was developed to automate speech-in-noise testing. This system securely uploads recorded hearing test responses to cloud services like Amazon Web Services (AWS) and Microsoft Azure, where ASR algorithms transcribe and analyze the speech. The software calculates key metrics, such as WER and signal-to-noise ratio (SNR) loss, to assess how well someone understands speech in background noise.
Additionally, a mobile application was created to allow people to test their hearing at home without needing an internet connection. The app is easy to use and works offline, making it accessible for people in remote areas or those who cannot visit a clinic.
Recommended Citation
Hassanpour Aslishirjouposht, Arman, "Speech Intelligibility Assessment using Automatic Speech Recognition" (2024). Electronic Thesis and Dissertation Repository. 10684.
https://ir.lib.uwo.ca/etd/10684
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Included in
Biomedical Commons, Signal Processing Commons, Speech and Hearing Science Commons, Speech Pathology and Audiology Commons, Telemedicine Commons