Cognitive & Language Sciences in the age of Large Language Models - Speaker: Michael Franke

Powerful large language models (LLMs), like ChatGPT, produce fluent, grammatically correct, interpretable, and often relevant and useful language. This raises a pressing question: what do we need linguistic theory for if all of this can be achieved without it? In my talk, I will dissect the most common reaction from theorists, namely the argument that LLMs are not explanatory models in the relevant sense, and attempt to better delineate what it is that makes a model explanatory. I will then present novel work in progress that aims to build explanatory, hybrid models, combining theory-driven probabilistic models of pragmatic language use and interpretation with the open-endedness of LLMs.
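For readers unfamiliar with theory-driven probabilistic models of pragmatic language use, the best-known example is the Rational Speech Act (RSA) framework. The abstract does not name RSA, so the following is only an illustrative sketch of what such a model looks like: a minimal RSA reference game in which a pragmatic listener strengthens the literal meaning of an utterance by reasoning about an informative speaker. The lexicon, states, and rationality parameter are illustrative choices, not taken from the talk.

```python
import numpy as np

# Truth-conditional semantics: rows = utterances, cols = states.
# Illustrative reference game with three objects:
# states = {blue square, blue circle, green square}
# utterances = {"blue", "green", "square", "circle"}
lexicon = np.array([
    [1, 1, 0],  # "blue"   is true of the blue square and blue circle
    [0, 0, 1],  # "green"  is true of the green square
    [1, 0, 1],  # "square" is true of both squares
    [0, 1, 0],  # "circle" is true of the blue circle
], dtype=float)

prior = np.ones(3) / 3  # uniform prior over states
alpha = 1.0             # speaker rationality (assumed value)

def normalize(m, axis):
    """Normalize rows or columns of a matrix to sum to 1."""
    return m / m.sum(axis=axis, keepdims=True)

# Literal listener L0: P(state | utterance) proportional to truth * prior
L0 = normalize(lexicon * prior, axis=1)

# Pragmatic speaker S1: P(utterance | state) proportional to
# exp(alpha * log L0); a small epsilon avoids log(0)
S1 = normalize(np.exp(alpha * np.log(L0.T + 1e-12)), axis=1)

# Pragmatic listener L1: P(state | utterance) proportional to prior * S1
L1 = normalize(S1.T * prior, axis=1)

# Hearing "blue", L1 favors the blue square over the blue circle,
# since an informative speaker would have said "circle" for the circle.
print(np.round(L1[0], 2))
```

The point of the sketch is the qualitative prediction: although "blue" is literally true of two objects, the pragmatic listener infers the blue square is the more likely referent, a strengthening effect that the literal semantics alone does not deliver.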