Thesis Format
Integrated Article
Degree
Master of Science
Program
Neuroscience
Collaborative Specialization
Music Cognition
Supervisor
Ingrid Johnsrude
Abstract
The human auditory system can decompose complex sound mixtures into distinct perceptual auditory objects through a process (or processes) known as Auditory Scene Analysis. Pitch and spatial cues are among the sound attributes known to influence sequential streaming (Plack, 2018). In this project, the fidelity of a virtual acoustic space (the Audio Dome) in reproducing precisely located sound sources with a ninth-order ambisonics algorithm was first validated. The estimated horizontal minimum audible angles were homogeneous across the space and aligned with previously reported values (Mills, 1958), and robust low-frequency presentation was identified. The Audio Dome was then used to test van Noorden's (1975) ABA paradigm with the A and B sources displaced along a continuum of locations and across several pitch differences. A two-dimensional sigmoid function was fitted to model this two-dimensional psychophysical space; the fit revealed that spatial and pitch cues are both essential for organizing perception, with pitch cues perhaps being more influential.
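As an illustration of the kind of model referred to above (one plausible parameterization, assumed here for clarity rather than taken from the thesis itself), the probability of hearing two segregated streams can be expressed as a logistic (sigmoid) function of both the pitch separation $\Delta f$ and the spatial separation $\Delta\theta$ between the A and B sources:

$$P(\text{two streams} \mid \Delta f, \Delta\theta) = \frac{1}{1 + e^{-(\beta_0 + \beta_f \Delta f + \beta_\theta \Delta\theta)}}$$

In such a model, the relative magnitudes of the fitted weights $\beta_f$ and $\beta_\theta$ index how strongly pitch and spatial cues, respectively, drive perceptual segregation.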
Summary for Lay Audience
In our daily lives, we are surrounded by many sounds that are rarely heard in isolation. For example, when passing by a park, one might hear birds tweeting, children playing, people talking, bikes passing by, and so on. These sounds all spread through the air, mix together, and reach our ears as a single mixture. Our auditory system can pick these sounds apart from the mixture and link them to the surrounding objects and events. This ability, referred to as “Auditory Scene Analysis”, relies on several attributes of sound. For example, a sound coming from the right of the body is unlikely to belong to an object heard on the left, so the mind uses location information to assign the sounds to two different objects. Similarly, when a female and a male voice are heard on a radio show, their sounds come from the same location, yet we can tell them apart based on other qualities of sound. One of these qualities is “pitch,” the same quality that lets us distinguish different notes played on a piano.

What happens, then, if two sounds can be distinguished by their pitch but not by their location, or by their location but not by their pitch? Would the mind rely on only one attribute and neglect the other, or do the two cooperate? The present work shows that location and pitch are both important for the mind to analyze auditory scenes, with pitch cues perhaps being more influential. These results help us understand the importance of sound attributes for hearing, which is essential for designing functional hearing aids.

To accomplish this goal, a virtual auditory space (the Audio Dome) was used to manipulate sound-source locations and create auditory scenes. Because the Audio Dome had not previously been used for research purposes, the first study established the suitability of this newly installed device for auditory research with humans. The results of this validation experiment should reassure researchers who wish to use the Audio Dome in further auditory research.
Recommended Citation
Zargarnezhad, Nima, "Validation of a virtual auditory space, and its use to investigate how pitch and spatial cues contribute to perceptual segregation of auditory streams" (2024). Electronic Thesis and Dissertation Repository. 9937.
https://ir.lib.uwo.ca/etd/9937
Included in
Cognition and Perception Commons, Neurosciences Commons, Speech and Hearing Science Commons