Distributional Semantics and Embodied Meaning: What do Large Language Models have to say? - Speaker: Seana Coulson

Language models like ChatGPT motivate an approach to meaning known as distributional semantics: the idea that words mean what they do because of how they are distributed in language. In this talk I will describe evidence from my lab suggesting that metrics derived from large language models do a good job of predicting behavioral and neural responses to some aspects of human language. I will then describe research that highlights important differences between meaning processing in humans and the 'understanding' displayed by language models. These discrepancies are particularly noteworthy in studies of joke comprehension.