Where Do I Know That? A Distributed Multimodal Model of Semantic Knowledge
As computers have grown more powerful, computational modeling has become an increasingly valuable tool for evaluating real-world findings. Brain imaging has advanced in parallel, as evidenced by recent fMRI findings that support the exciting possibility that semantic memory is segregated by modality in the brain (Goldberg et al., 2006b). The present study uses connectionist modeling to put the distributed multimodal framework of semantic memory to the test, representing the next step in the line of sensory-functional models. The model, built around the McRae et al. (2005) feature production norms, includes an individual implementation of each modality: visual colour, visual motion, visual form and surface, olfactory-gustatory, encyclopedic, tactile, auditory, and functional. A cross-modal convergence zone (the Hub), a visual decoding region, and abstracted implementations of wordforms and images are also included, allowing the model to simulate picture naming. Focal lesions are simulated in each semantic modality, successfully recreating various category-specific deficits, many of which have been reported in patient case studies. Categories are expanded from a living-nonliving dichotomy to include animals, artifacts (tools, utensils, containers, and clothing), fruits and vegetables, and musical instruments. Hub damage recreates semantic dementia, with an additional slight but significant impairment for animals relative to artifacts.
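The hub-and-spoke architecture and focal-lesion procedure described above can be sketched in miniature. Everything below is a hypothetical stand-in for the actual implementation: the unit counts, the single-hidden-layer "hub", the training scheme, and the random feature vectors are all illustrative assumptions (the real model draws its features from the McRae et al., 2005, production norms). The sketch trains a network to map modality-specific features onto wordforms (picture naming), then simulates a focal lesion by zeroing one modality's input units.

```python
import numpy as np

rng = np.random.default_rng(0)

# The eight semantic modalities named in the abstract.
MODALITIES = ["visual_colour", "visual_motion", "visual_form_surface",
              "olfactory_gustatory", "encyclopedic", "tactile",
              "auditory", "functional"]
UNITS_PER_MODALITY = 10   # assumption, not the model's actual unit counts
N_CONCEPTS = 40           # assumption

# Illustrative random feature vectors; the real model uses norm-derived features.
features = rng.random((N_CONCEPTS, len(MODALITIES) * UNITS_PER_MODALITY))
names = np.eye(N_CONCEPTS)  # one-hot "wordform" targets for picture naming

def train_hub(x, y, hidden=60, epochs=500, lr=0.5):
    """Single-hidden-layer 'hub' mapping modality features to wordforms."""
    w1 = rng.normal(0.0, 0.1, (x.shape[1], hidden))
    w2 = rng.normal(0.0, 0.1, (hidden, y.shape[1]))
    for _ in range(epochs):
        h = np.tanh(x @ w1)           # hub (convergence-zone) activations
        err = h @ w2 - y              # mean-squared-error gradient
        w2 -= lr * h.T @ err / len(x)
        w1 -= lr * x.T @ ((err @ w2.T) * (1.0 - h ** 2)) / len(x)
    return w1, w2

def naming_accuracy(x, y, w1, w2):
    """Proportion of concepts whose strongest output unit is the right name."""
    out = np.tanh(x @ w1) @ w2
    return float(np.mean(out.argmax(1) == y.argmax(1)))

def lesion(x, modality_index, severity=1.0):
    """Focal lesion: zero a proportion of one modality's input units."""
    x = x.copy()
    lo = modality_index * UNITS_PER_MODALITY
    n = int(severity * UNITS_PER_MODALITY)
    x[:, lo:lo + n] = 0.0
    return x

w1, w2 = train_hub(features, names)
intact = naming_accuracy(features, names, w1, w2)
damaged = naming_accuracy(
    lesion(features, MODALITIES.index("visual_form_surface")), names, w1, w2)
print(f"intact naming: {intact:.2f}, after visual-form lesion: {damaged:.2f}")
```

In the full model, per-category accuracy (animals, artifacts, fruits and vegetables, musical instruments) would be tracked after each lesion to expose the category-specific deficits the abstract reports; hub damage would instead degrade the hidden-layer weights rather than one modality's inputs.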
Stubbs, K. M. (2015). Where Do I Know That? A Distributed Multimodal Model of Semantic Knowledge. Western Undergraduate Psychology Journal, 3 (1). Retrieved from https://ir.lib.uwo.ca/wupj/vol3/iss1/3