Electronic Thesis and Dissertation Repository

Thesis Format

Monograph

Degree

Master of Science

Program

Psychology

Supervisor

Butler, Blake E.

2nd Supervisor

Stevenson, Ryan A.

Affiliation

University of Western Ontario

Abstract

This study examined audiovisual integration in cochlear implant (CI) users compared to control participants with typical (acoustic) hearing, and investigated the effect of audiovisual temporal asynchrony on speech intelligibility across these groups. Additionally, this study evaluated the utility of online data collection for audiovisual perception research. In Experiment 1, CI users were found to integrate audiovisual syllables comparably to controls, as demonstrated by perception of the McGurk illusion. However, group differences emerged in the processing of the unisensory components and in the distributions of responses to incongruent audiovisual trials on which the illusory fusion syllable was not reported. In Experiment 2, intelligibility of sentences presented in noise was more facilitated by the presence of visual cues, and more inhibited by temporal offset, for CI users than for controls. Together, these results indicate a functionally relevant difference in how CI users process and combine auditory and visual speech signals compared to control participants.
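
The response categories mentioned above can be made concrete with a short sketch. The Python code below is a minimal, hypothetical illustration (the syllable labels and the list of reports are assumptions for demonstration, not the thesis's stimuli or data) of how reports on incongruent McGurk trials, here auditory “ba” paired with visual “ga”, can be sorted into fusion, auditory, and visual categories to yield a response distribution.

```python
# Minimal sketch (hypothetical data) of categorizing responses to
# incongruent McGurk trials. On each trial the auditory syllable is
# "ba" and the visual syllable is "ga"; a report of "da" is counted
# as an illusory fusion percept.
from collections import Counter

AUDITORY, VISUAL, FUSION = "ba", "ga", "da"

def classify(response: str) -> str:
    """Map a reported syllable onto a response category."""
    if response == FUSION:
        return "fusion"
    if response == AUDITORY:
        return "auditory"
    if response == VISUAL:
        return "visual"
    return "other"

# Hypothetical reports from one participant across incongruent trials.
reports = ["da", "da", "ga", "ba", "da", "ga", "da", "ba", "da", "ga"]

counts = Counter(classify(r) for r in reports)
n = len(reports)
for category in ("fusion", "auditory", "visual", "other"):
    print(f"{category:>8}: {counts[category] / n:.0%}")
```

Tabulating non-fusion responses this way is what exposes the group difference described above: the same overall fusion rate can coexist with very different auditory-versus-visual splits on the remaining trials.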

Summary for Lay Audience

When we see a person speaking, our ability to understand their speech is supported by both the sound of their voice and the visual cues arising from mouth movements. The relative contribution of these auditory and visual cues varies with the situation. For instance, in noisy environments listeners watch a talker’s mouth closely to compensate for difficulty hearing their voice. People who use cochlear implants (CIs), hearing devices that bypass damaged regions of the ear to convey auditory information directly to the brain, may have a similar experience. Because the auditory signal produced by CIs is less clear than that conveyed by the typically developed inner ear, CI users rely on visual speech cues more than those with typical hearing. The goal of this study was to investigate audiovisual integration in CI users compared to controls with typical hearing, and to evaluate how audiovisual asynchrony affects speech comprehension in these groups.

Experiment 1 used the McGurk illusion, in which a speaker’s voice is heard to say one syllable, like “ba”, while their mouth is seen to say a different syllable, like “ga”. Because the brain automatically integrates audiovisual speech information, many people experience an illusory syllable, like “da”, that represents a fusion of the auditory and visual information. We found that CI users experience this illusion at a rate comparable to control participants. However, when they did not experience the illusion, CI users usually reported the seen syllable, whereas control participants reported the heard syllable.

In Experiment 2, participants watched videos of sentences spoken in background noise and typed what they heard. The sound and video were aligned for some sentences and out of sync for others. The addition of visual cues enhanced accuracy more for CI users than for control participants, and CI users’ accuracy was also more impaired by asynchrony.

These findings indicate that CI users combine auditory and visual speech information differently than individuals with typical hearing, and that these differences affect CI users’ ability to understand asynchronous speech. This is pertinent given the increasing use of teleconferencing platforms, which are prone to audiovisual lag.
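
The two effects in Experiment 2 can be expressed as simple contrasts. The sketch below uses entirely hypothetical accuracy values and condition names (assumptions for illustration, not the study’s results): a visual gain compares synchronous audiovisual intelligibility against an auditory-only baseline, and an asynchrony cost compares synchronous against temporally offset audiovisual presentation.

```python
# Minimal sketch (hypothetical values) of the two contrasts described
# above: visual gain and asynchrony cost on intelligibility scores.

def visual_gain(av_sync: float, audio_only: float) -> float:
    """Benefit of adding synchronous visual speech cues."""
    return av_sync - audio_only

def asynchrony_cost(av_sync: float, av_async: float) -> float:
    """Drop in intelligibility when audio and video are offset."""
    return av_sync - av_async

# Hypothetical proportion-correct scores for each group.
groups = {
    "CI users": {"audio_only": 0.30, "av_sync": 0.70, "av_async": 0.50},
    "controls": {"audio_only": 0.55, "av_sync": 0.75, "av_async": 0.70},
}

for name, s in groups.items():
    gain = visual_gain(s["av_sync"], s["audio_only"])
    cost = asynchrony_cost(s["av_sync"], s["av_async"])
    print(f"{name}: visual gain = {gain:+.2f}, asynchrony cost = {cost:+.2f}")
```

In this illustrative pattern, both the gain and the cost are larger for the CI group, mirroring the direction of the findings summarized above.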
