Degree: Master of Science
Supervisor: Johnsrude, Ingrid S.
While extensive research has elucidated how the brain extracts semantics from speech sound waves and how these map onto the auditory cortex, the temporal dynamics of how meaningful non-speech sounds are processed remain less examined. Understanding these dynamics is key to resolving the debate between cascaded and parallel hierarchical processing models, both of which are plausible given the anatomical evidence. This study investigates how semantic category information from environmental sounds is processed in the temporal domain, using electroencephalography (EEG) collected from 25 participants and representational similarity analysis (RSA) together with models of acoustic and semantic information. We examined the information the brain extracts from 80 one-second natural sounds spanning four categories. The results revealed a cascaded temporal hierarchy of processing toward identifying the sound category, which supports the well-established anatomical hierarchy. Low-level information is decodable at ~30 ms, and semantic information begins to emerge ~40 ms later. We conclude that basic information is transformed into more complex information over time, and that semantic representations are more stable over time than representations of acoustic information.
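The time-resolved RSA described above can be illustrated with a minimal sketch. This is not the thesis's analysis pipeline; the data are simulated and all names (`eeg`, `n_sounds`, the category structure) are placeholder assumptions. The sketch builds a neural representational dissimilarity matrix (RDM) from the EEG sensor pattern at each time point and correlates it with a model RDM derived from semantic category labels:

```python
# Illustrative sketch of time-resolved RSA (simulated data, placeholder names).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_sounds, n_channels, n_times = 80, 64, 100  # 80 sounds, as in the study

# Simulated EEG: one multichannel pattern per sound per time point
eeg = rng.standard_normal((n_sounds, n_channels, n_times))

# A model RDM from semantic categories: same category -> 0, different -> 1
categories = np.repeat(np.arange(4), 20)  # 4 categories x 20 sounds
model_rdm = pdist(categories[:, None], metric="hamming")

# Correlate the neural RDM with the model RDM at each time point
rsa_timecourse = np.empty(n_times)
for t in range(n_times):
    # Neural RDM: pairwise correlation distance between sensor patterns
    neural_rdm = pdist(eeg[:, :, t], metric="correlation")
    rsa_timecourse[t] = spearmanr(neural_rdm, model_rdm)[0]

print(rsa_timecourse.shape)  # one model-fit value per time point
```

Peaks in `rsa_timecourse` indicate when the EEG patterns carry information matching the model's similarity structure; comparing such timecourses for acoustic versus semantic model RDMs is what distinguishes the latencies reported above.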
Summary for Lay Audience
Our brains are constantly processing information from our surroundings to help us understand and interact with the world. One way we do this is by grouping similar objects, events, or ideas based on their shared characteristics or meanings. This process, called semantic categorization, is not limited to what we see; it also extends to the sounds we hear. Consider the non-speech sounds you hear daily, like a kettle's whistle. These sounds carry important 'semantics', or meanings, that help us understand our environment. By studying how our brain processes these sounds, we can gain insights into our cognitive abilities and understand how they might be affected by aging or neurological disorders.

The processing of sounds in our brain involves multiple stages, from the initial reception of sound waves in our ears to the complex functions carried out by the cortex, the outer layer of our brain. Studies have shown that multiple interconnected regions of the brain are involved in processing sounds, and that these regions handle different levels of information, from basic acoustic features to complex semantic meanings. The question is whether the complexity of the information processed also increases over time.

In this study, we used electroencephalography (EEG), a method that records the electrical activity of the brain, to investigate how the brain processes semantic information from environmental sounds over time. We found that the brain first decodes basic information about a sound around 30 milliseconds after hearing it, and then starts to extract semantic information about 40 milliseconds later. This suggests that our brain transforms basic information into more complex information over time, which is consistent with what is known about the anatomy of auditory processing regions.
In simple terms, when we hear a sound, our brain quickly identifies its basic features and then takes a bit more time to figure out what it means, instead of these two stages happening simultaneously. This process helps us understand and respond to our environment effectively.
Tafakkor, Ali, "Temporal dynamics of natural sound categorization" (2023). Electronic Thesis and Dissertation Repository. 9452.