Publications

Ryzhova, Margarita; Mayn, Alexandra; Demberg, Vera

What inferences do people actually make upon encountering informationally redundant utterances? An individual differences study Inproceedings

Proceedings of the 45th Annual Meeting of the Cognitive Science Society (CogSci 2023), 45, Sydney, Australia, 2023.

Utterances mentioning a highly predictable event are known to elicit atypicality inferences (Kravtchenko and Demberg, 2015; 2022). In those studies, pragmatic inferences are measured based on typicality ratings. It is assumed that comprehenders notice the redundancy and "repair" the utterance informativity by inferring that the mentioned event is atypical for the referent, resulting in a lower typicality rating. However, the actual inferences that people make have never been elicited. We extend the original experimental design by asking participants to explain their ratings and administering several individual differences tests. This allows us to test (1) whether low ratings indeed correspond to the assumed inferences (they mostly do, but occasionally participants seem to make the inference but then reject it and give high ratings), and (2) whether the tendency to make atypicality inferences is modulated by cognitive factors. We find that people with higher reasoning abilities are more likely to draw inferences.

@inproceedings{ryzhova_etal_2023_inferences,
title = {What inferences do people actually make upon encountering informationally redundant utterances? An individual differences study},
author = {Margarita Ryzhova and Alexandra Mayn and Vera Demberg},
url = {https://escholarship.org/uc/item/88g7g5z0},
year = {2023},
date = {2023},
booktitle = {Proceedings of the 45th Annual Meeting of the Cognitive Science Society (CogSci 2023)},
address = {Sydney, Australia},
abstract = {Utterances mentioning a highly predictable event are known to elicit atypicality inferences (Kravtchenko and Demberg, 2015; 2022). In those studies, pragmatic inferences are measured based on typicality ratings. It is assumed that comprehenders notice the redundancy and ``repair'' the utterance informativity by inferring that the mentioned event is atypical for the referent, resulting in a lower typicality rating. However, the actual inferences that people make have never been elicited. We extend the original experimental design by asking participants to explain their ratings and administering several individual differences tests. This allows us to test (1) whether low ratings indeed correspond to the assumed inferences (they mostly do, but occasionally participants seem to make the inference but then reject it and give high ratings), and (2) whether the tendency to make atypicality inferences is modulated by cognitive factors. We find that people with higher reasoning abilities are more likely to draw inferences.},
pubstate = {published},
type = {inproceedings}
}


Project:   A8

Ryzhova, Margarita; Skrjanec, Iza; Quach, Nina; Chase, Alice Virginia; Ellsiepen, Emilia; Demberg, Vera

Word Familiarity Classification From a Single Trial Based on Eye-Movements. A Study in German and English Inproceedings

ETRA '23: Proceedings of the 2023 Symposium on Eye Tracking Research and Applications, 2023.

Identifying processing difficulty during reading due to unfamiliar words has promising applications in automatic text adaptation. We present a classification model that predicts whether a word is (un)known to the reader based on eye-movement measures. We examine German and English data and validate our model on unseen subjects and items, achieving high accuracy in both languages.

@inproceedings{ryzhova-etal-2023,
title = {Word Familiarity Classification From a Single Trial Based on Eye-Movements. A Study in German and English},
author = {Margarita Ryzhova and Iza Skrjanec and Nina Quach and Alice Virginia Chase and Emilia Ellsiepen and Vera Demberg},
url = {https://dl.acm.org/doi/abs/10.1145/3588015.3590118},
doi = {https://doi.org/10.1145/3588015.3590118},
year = {2023},
date = {2023},
booktitle = {ETRA '23: Proceedings of the 2023 Symposium on Eye Tracking Research and Applications},
abstract = {Identifying processing difficulty during reading due to unfamiliar words has promising applications in automatic text adaptation. We present a classification model that predicts whether a word is (un)known to the reader based on eye-movement measures. We examine German and English data and validate our model on unseen subjects and items, achieving high accuracy in both languages.},
pubstate = {published},
type = {inproceedings}
}


Project:   A8

Skrjanec, Iza; Broy, Frederik Yannick; Demberg, Vera

Expert-adapted language models improve the fit to reading times Inproceedings

Procedia Computer Science, PsyArXiv, 2023.

The concept of surprisal refers to the predictability of a word based on its context. Surprisal is known to be predictive of human processing difficulty and is usually estimated by language models. However, because humans differ in their linguistic experience, they also differ in the actual processing difficulty they experience with a given word or sentence. We investigate whether models that are similar to the linguistic experience and background knowledge of a specific group of humans are better at predicting their reading times than a generic language model. We analyze reading times from the PoTeC corpus (Jäger et al. 2021) of eye movements from biology and physics experts reading biology and physics texts. We find that experts read in-domain texts faster than novices, especially for domain-specific terms. Next, we train language models adapted to the biology and physics domains and show that surprisal obtained from these specialized models improves the fit to expert reading times above and beyond a generic language model.
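
Surprisal here follows its standard information-theoretic definition, $\mathrm{surprisal}(w_t) = -\log P(w_t \mid w_1, \ldots, w_{t-1})$, i.e., the negative log-probability a language model assigns to a word given its preceding context; less predictable words receive higher surprisal.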


@inproceedings{skrjanec_broy_demberg_2023,
title = {Expert-adapted language models improve the fit to reading times},
author = {Iza Skrjanec and Frederik Yannick Broy and Vera Demberg},
url = {https://psyarxiv.com/dc8y6},
doi = {https://doi.org/10.31234/osf.io/dc8y6},
year = {2023},
date = {2023},
booktitle = {Procedia Computer Science},
publisher = {PsyArXiv},
abstract = {The concept of surprisal refers to the predictability of a word based on its context. Surprisal is known to be predictive of human processing difficulty and is usually estimated by language models. However, because humans differ in their linguistic experience, they also differ in the actual processing difficulty they experience with a given word or sentence. We investigate whether models that are similar to the linguistic experience and background knowledge of a specific group of humans are better at predicting their reading times than a generic language model. We analyze reading times from the PoTeC corpus (J{\"a}ger et al. 2021) of eye movements from biology and physics experts reading biology and physics texts. We find that experts read in-domain texts faster than novices, especially for domain-specific terms. Next, we train language models adapted to the biology and physics domains and show that surprisal obtained from these specialized models improves the fit to expert reading times above and beyond a generic language model.},
pubstate = {published},
type = {inproceedings}
}


Project:   A8

Zhai, Fangzhou; Demberg, Vera; Koller, Alexander

Zero-shot Script Parsing Inproceedings

Proceedings of the 29th International Conference on Computational Linguistics, International Committee on Computational Linguistics, pp. 4049-4060, Gyeongju, Republic of Korea, 2022.

Script knowledge (Schank and Abelson, 1977) is useful for a variety of NLP tasks. However, existing resources only cover a small number of activities, limiting their practical usefulness. In this work, we propose a zero-shot learning approach to script parsing, the task of tagging texts with scenario-specific event and participant types, which enables us to acquire script knowledge without domain-specific annotations. We (1) learn representations of potential event and participant mentions by promoting class consistency according to the annotated data; (2) perform clustering on the event/participant candidates from unannotated texts that belong to an unseen scenario. The model achieves 68.1/74.4 average F1 for event/participant parsing, respectively, outperforming a previous CRF model that, in contrast, has access to scenario-specific supervision. We also evaluate the model by testing on a different corpus, where it achieves 55.5/54.0 average F1 for event/participant parsing.

@inproceedings{zhai-etal-2022-zero,
title = {Zero-shot Script Parsing},
author = {Fangzhou Zhai and Vera Demberg and Alexander Koller},
url = {https://aclanthology.org/2022.coling-1.356},
year = {2022},
date = {2022},
booktitle = {Proceedings of the 29th International Conference on Computational Linguistics},
pages = {4049-4060},
publisher = {International Committee on Computational Linguistics},
address = {Gyeongju, Republic of Korea},
abstract = {Script knowledge (Schank and Abelson, 1977) is useful for a variety of NLP tasks. However, existing resources only cover a small number of activities, limiting their practical usefulness. In this work, we propose a zero-shot learning approach to script parsing, the task of tagging texts with scenario-specific event and participant types, which enables us to acquire script knowledge without domain-specific annotations. We (1) learn representations of potential event and participant mentions by promoting class consistency according to the annotated data; (2) perform clustering on the event/participant candidates from unannotated texts that belong to an unseen scenario. The model achieves 68.1/74.4 average F1 for event/participant parsing, respectively, outperforming a previous CRF model that, in contrast, has access to scenario-specific supervision. We also evaluate the model by testing on a different corpus, where it achieves 55.5/54.0 average F1 for event/participant parsing.},
pubstate = {published},
type = {inproceedings}
}


Projects:   A3 A8
