Using and comprehending language in face-to-face conversation - Speaker: Judith Holler

Face-to-face conversational interaction is at the very heart of human sociality and the natural ecological niche in which language has evolved and is acquired. Yet we still know rather little about how utterances are produced and comprehended in this environment. In this talk, I will focus on how hand gestures, facial movements, and head movements are organised to convey semantic and pragmatic meaning in conversation, and on how the presence and timing of these signals affect utterance comprehension and responding. Specifically, I will present studies based on complementary approaches that feed into and inform one another: qualitative and quantitative multimodal corpus studies showing that visual signals indeed often occur early, and experimental comprehension studies, grounded in and inspired by the corpus results, that implement controlled manipulations to test for causal effects of visual bodily signals on comprehension processes and mechanisms. These experiments include behavioural and EEG studies, most of them using multimodally animated virtual characters. Together, the findings support the hypothesis that visual bodily signals form an integral part of semantic and pragmatic communication in conversational interaction, and that they facilitate language processing, especially through their timing and the predictive potential afforded by their temporal orchestration.