Linguistic Insights from Language Models - Speaker: Kyle Mahowald

Language models have become adept at generating fluent and grammatically coherent English, prompting fundamental questions about whether their performance can inform our understanding of human language processing. I will describe two recent experiments from my group: one studying how dative preferences emerge in small language models trained on systematically manipulated input data, and another using mechanistic interpretability techniques to understand linguistic structure. Drawing on these findings and broader theoretical arguments from a recent position piece (Futrell and Mahowald, "How Linguistics Learned to Stop Worrying and Love the Language Models"), I will argue that these kinds of experiments are linguistically informative, in part because of their information-theoretic properties.