How contextual are contextual language models? - Speaker: Sebastian Schuster

The interpretation of many sentences depends on context. This is particularly true for pragmatic interpretations, that is, interpretations that go beyond the literal meaning of sentences. For example, consider the sentence "The dog looks awfully happy." Out of context, its interpretation primarily concerns some dog's emotional state. However, if this sentence is given as an answer to "What happened to the turkey?", listeners will additionally infer that the dog likely ate the turkey. In this talk, I will present three studies that investigate to what extent recently proposed contextual language models are able to take context into account when drawing different pragmatic inferences. I will focus on context-dependent scalar inferences (inferring that "some" sometimes, but not always, means "some but not all"), presuppositions, and tracking of discourse entities, and explain to what extent models like BERT, DeBERTa, and GPT-3 can interpret context-sensitive sentences.

Sebastian Schuster is a Computing Innovation Fellow at the Center for Data Science and the Department of Linguistics at NYU, mentored by Tal Linzen. His research focuses on computational models of pragmatic utterance interpretation. He is also a core member of the Universal Dependencies initiative, where he leads efforts to make multilingual dependency representations more useful for natural language understanding tasks. He holds an MS degree in Computer Science and a PhD in Linguistics from Stanford University, and a BS in Computer Science from the University of Vienna.
