The impact of multisensory integration deficits on speech perception in children with autism spectrum disorders.
Speech perception is an inherently multisensory process. In a face-to-face conversation, a listener not only hears what a speaker is saying, but also sees the articulatory gestures that accompany those sounds. The visual and auditory speech signals provide complementary information to the listener (Kavanagh and Mattingly, 1974), and when both are perceived in unison, behavioral gains in speech perception are observed (Sumby and Pollack, 1954). Notably, this benefit is accentuated when speech is perceived in a noisy environment (Sumby and Pollack, 1954). To achieve a behavioral gain from multisensory processing of speech, however, the auditory and visual signals must be perceptually bound into a single, unified percept. The most commonly cited demonstration of perceptual binding in audiovisual speech perception is the McGurk effect (McGurk and MacDonald, 1976), in which a listener hears a speaker utter the syllable “ba” while seeing the speaker utter the syllable “ga.” When these two speech signals are perceptually bound, the listener perceives the speaker as having said “da” or “tha,” syllables contained in neither of the unisensory signals, reflecting the binding, or integration, of the auditory and visual speech signals (Calvert and Thesen, 2004).
This work is licensed under a Creative Commons Attribution 4.0 License.