What can language models tell us about the N400? - Speaker: James Michaelov

While the idea that language comprehension involves prediction has been around since at least the 1960s, advances in natural language processing have made it more viable than ever to model this prediction computationally. As language models have increased in size and power, performing better on an ever-wider array of natural language tasks, their predictions have also increasingly appeared to correlate with the N400, a neural signal argued to index the extent to which a word is expected given its preceding context. The predictions of contemporary large language models not only capture the effects of certain types of stimuli on N400 amplitude, but in some cases appear to predict single-trial N400 amplitude better than traditional metrics such as cloze probability. With these results in mind, I will discuss how language models can be used to study human language processing, both as a deflationary tool and as a way to support positive claims about the extent to which humans may use the statistics of language as the basis of prediction in language comprehension.
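In this line of work, a model's "prediction" for a word is typically quantified as its surprisal, the negative log probability of the word given its preceding context. As a minimal sketch of how such values can be computed, the snippet below uses the HuggingFace transformers library with GPT-2; the specific model and tooling here are illustrative assumptions, not necessarily those used in the work discussed in the talk.

```python
# Illustrative sketch: computing a word's surprisal (negative log
# probability) under a pretrained causal language model. GPT-2 and the
# HuggingFace `transformers` library are assumptions made here for
# illustration; the talk does not specify particular models or tools.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def word_surprisal(context: str, word: str) -> float:
    """Surprisal of `word` (in bits) given `context`."""
    context_ids = tokenizer.encode(context)
    word_ids = tokenizer.encode(word)  # a word may span several subword tokens
    input_ids = torch.tensor([context_ids + word_ids])
    with torch.no_grad():
        logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)
    log_probs = torch.log_softmax(logits, dim=-1)
    # Sum the log probabilities of the word's tokens; the logits at
    # position i predict the token at position i + 1.
    nats = 0.0
    for i, token_id in enumerate(word_ids):
        position = len(context_ids) + i - 1
        nats -= log_probs[0, position, token_id].item()
    return nats / math.log(2)  # convert nats to bits

# A classic N400 contrast (Kutas & Hillyard, 1980): the unexpected
# completion should receive much higher surprisal than the expected one.
print(word_surprisal("He spread the warm bread with", " butter"))
print(word_surprisal("He spread the warm bread with", " socks"))
```

In studies of the kind described above, surprisal values like these are then entered as predictors of single-trial N400 amplitude, alongside or in place of traditional metrics such as cloze probability.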
