The influence of visual information on word predictability and processing effort
Saarland University, Saarbruecken, Germany, 2019.
A word’s predictability, or surprisal, in linguistic context, as determined by cloze probabilities or language models (e.g., Frank, 2013a), is related to processing effort: less expected words take more effort to process (e.g., Hale, 2001). This shows that, in purely linguistic contexts, rational approaches are valid for predicting and formalising results from language processing studies. However, the surprisal (or predictability) of a word may also be influenced by extra-linguistic factors, such as the visual context information given in situated language processing. While it is known that, in the case of linguistic contexts, the incrementally processed information affects the mental model (e.g., Zwaan and Radvansky, 1998) at each word in a probabilistic way, no such observations have so far been made for visual context information. Although anticipatory eye movements in the visual world paradigm (VWP) suggest that listeners exploit the scene to predict what will be mentioned next (Altmann and Kamide, 1999), it remains unclear how visual information actually affects expectations for, and the processing effort of, target words. If visual context effects on word processing effort can be observed, I hypothesise that rational concepts can be extended to formalise these effects, thereby making them statistically accessible for language models. In a line of experiments, I therefore observe how visual information, which is inherently different from linguistic context, for instance in its non-incremental, at-once accessibility, affects target words. My findings are a clear and robust demonstration that non-linguistic context can immediately influence both lexical expectations and surprisal-based processing effort, as assessed by two different on-line measures of effort (a pupillary measure and an EEG measure).
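For reference, the surprisal measure discussed above is standardly defined (Hale, 2001) as the negative log-probability of a word given its preceding linguistic context:

$\mathrm{surprisal}(w_i) = -\log P(w_i \mid w_1, \ldots, w_{i-1})$

Under this definition, a word that is highly predictable from its context carries low surprisal, and an unexpected word carries high surprisal, which is why surprisal serves as a linking hypothesis between language-model probabilities and processing effort.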
Finally, I use surprisal to formalise the measured results and propose an extended formula that takes visual information into account.
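The exact extended formula proposed in the thesis is not reproduced here; one plausible sketch of such an extension, and only a sketch, is to condition the probability distribution additionally on the visual context, written here as $V$:

$\mathrm{surprisal}(w_i \mid V) = -\log P(w_i \mid w_1, \ldots, w_{i-1}, V)$

On this view, a referent made salient by the scene raises the conditional probability of the word naming it, lowering its surprisal relative to a purely linguistic estimate.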