Proceedings of the Forty-first Annual Meeting of the Cognitive Science Society
People predict upcoming words during online sentence comprehension based on knowledge of real-world events cued by the preceding linguistic context. We used the visual world paradigm to investigate how event knowledge activated by an agent-verb pair is integrated with perceptual information about the referent that fits the patient role. During the verb time window, participants looked significantly more at referents that were expected given the agent-verb pair. These results are consistent with the assumption that event-based knowledge includes the perceptual properties of typical event participants. Knowledge activated by the agent is compositionally integrated with knowledge cued by the verb, driving anticipatory eye movements during sentence comprehension based on expectations about not only the incoming word but also the visual features of its referent.