Electronic Thesis and Dissertation Repository

Temporal dynamics of natural sound categorization

Ali Tafakkor, Western University

Abstract

While extensive research has elucidated how the brain extracts semantics from speech sound waves and maps them onto the auditory cortex, the temporal dynamics of how meaningful non-speech sounds are processed remain less examined. Understanding these dynamics is key to resolving the debate between cascaded and parallel hierarchical processing models, both of which are plausible given the anatomical evidence. This study investigates how semantic category information from environmental sounds is processed in the temporal domain, using electroencephalography (EEG) collected from 25 participants and representational similarity analysis (RSA) with models of acoustic and semantic information. We examined the information the brain extracts from 80 one-second natural sounds spanning four categories. The results revealed a cascaded temporal hierarchy of processing toward identifying the sound category, which supports the well-established anatomical hierarchy. Low-level acoustic information is decodable at ~30 ms, and semantic information begins to emerge ~40 ms later. We conclude that basic information is transformed into more complex information over time, and that semantic representations are more stable over time than representations of acoustic information.
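The core of RSA as described above is comparing a neural representational dissimilarity matrix (RDM), built from brain-response patterns, against a model RDM built from acoustic or semantic features. The following is a minimal sketch of that comparison, not the thesis's actual pipeline: the data are random placeholders, and the sizes (80 stimuli, 64 EEG channels, 10 model features) are illustrative assumptions, with 80 stimuli taken from the abstract.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: EEG response patterns for 80 sounds at one time
# point (80 stimuli x 64 channels), and a model feature matrix
# (e.g., acoustic or semantic features per stimulus).
neural_patterns = rng.standard_normal((80, 64))
model_features = rng.standard_normal((80, 10))

# RDMs: pairwise dissimilarities (1 - Pearson r) between stimulus
# patterns, as condensed upper-triangle vectors of length 80*79/2.
neural_rdm = pdist(neural_patterns, metric="correlation")
model_rdm = pdist(model_features, metric="correlation")

# RSA score: rank (Spearman) correlation between the two RDMs,
# so only the ordering of dissimilarities matters.
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"RSA correlation: {rho:.3f}")
```

Repeating this comparison at each EEG time point yields the time courses from which onset latencies like the ~30 ms and ~40 ms figures can be read off.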