Large Language Models are impressive, but we still need grounding to explain human cognition - Speaker: Benjamin Bergen

Human cognitive capacities are often explained as resulting from grounded, embodied, or situated learning. But Large Language Models, which learn only from word co-occurrence statistics, now rival human performance on a variety of tasks that would seem to require these very capacities. This raises the question: is grounding still necessary to explain human cognition? I report on studies addressing three aspects of human cognition: Theory of Mind, Affordances, and Situation Models. In each case, we run both human and LLM participants on the same task and ask how much of the variance in human behavior is explained by the LLMs. As it turns out, in all cases, human behavior is not fully explained by the LLMs. This entails that, at least for now, we need grounding (or, more accurately, something that goes beyond statistical language learning) to explain these aspects of human cognition. I’ll argue that this general approach is a productive methodology for better understanding the contributions of different kinds of information to human linguistic behavior, and I’ll conclude by asking, but not answering, a number of questions: how long grounding will remain necessary, what the right criteria are for an LLM that serves as a proxy for human statistical language learning, and how one could conclusively tell whether LLMs have human-like intelligence.
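As a concrete picture of the variance-explained comparison the abstract describes, here is a minimal sketch (not from the talk; the data, variable names, and single-predictor regression are illustrative assumptions) of regressing per-item human responses on an LLM-derived predictor and reporting R², the share of variance in human behavior the LLM accounts for.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative only: synthetic stand-ins for per-item human responses and
# LLM-derived scores (e.g., model probabilities or ratings on the same items).
rng = np.random.default_rng(0)
n_items = 200
llm_pred = rng.normal(size=n_items)                                  # hypothetical LLM scores
human_resp = 0.6 * llm_pred + rng.normal(scale=0.8, size=n_items)    # hypothetical human data

# Regress human responses on the LLM predictor and report R^2:
# the proportion of variance in human behavior explained by the LLM.
X = llm_pred.reshape(-1, 1)
model = LinearRegression().fit(X, human_resp)
r_squared = model.score(X, human_resp)
print(f"Variance in human behavior explained by the LLM predictor: R^2 = {r_squared:.2f}")

# Whatever variance remains unexplained (1 - R^2) is the gap the talk attributes
# to information beyond statistical language learning, e.g., grounding.
```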
