Publications

Abdullah, Badr M.; Avgustinova, Tania; Möbius, Bernd; Klakow, Dietrich

Cross-Domain Adaptation of Spoken Language Identification for Related Languages: The Curious Case of Slavic Languages Inproceedings

Proceedings of Interspeech 2020, pp. 477-481, 2020.

State-of-the-art spoken language identification (LID) systems, which are based on end-to-end deep neural networks, have shown remarkable success not only in discriminating between distant languages but also between closely-related languages or even different spoken varieties of the same language. However, it is still unclear to what extent neural LID models generalize to speech samples with different acoustic conditions due to domain shift. In this paper, we present a set of experiments to investigate the impact of domain mismatch on the performance of neural LID systems for a subset of six Slavic languages across two domains (read speech and radio broadcast) and examine two low-level signal descriptors (spectral and cepstral features) for this task. Our experiments show that (1) out-of-domain speech samples severely hinder the performance of neural LID models, and (2) while both spectral and cepstral features show comparable performance within-domain, spectral features show more robustness under domain mismatch. Moreover, we apply unsupervised domain adaptation to minimize the discrepancy between the two domains in our study. We achieve relative accuracy improvements that range from 9% to 77% depending on the diversity of acoustic conditions in the source domain.
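
The two low-level signal descriptors compared in the paper are standard audio features. As a minimal illustration (not the authors' code; the file name and frame parameters are hypothetical), the following Python sketch extracts a log mel-spectrogram (spectral) and MFCCs (cepstral) with librosa:

import librosa

# Hypothetical 16 kHz speech sample; the paper's data are read speech and radio broadcast.
y, sr = librosa.load("speech_sample.wav", sr=16000)

# Spectral descriptor: log mel-spectrogram (frames x mel bands).
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=40)
log_mel = librosa.power_to_db(mel).T

# Cepstral descriptor: MFCCs (frames x coefficients).
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

print(log_mel.shape, mfcc.shape)  # frame-level feature matrices fed to a LID network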

@inproceedings{abdullah_etal_is2020,
title = {Cross-Domain Adaptation of Spoken Language Identification for Related Languages: The Curious Case of Slavic Languages},
author = {Badr M. Abdullah and Tania Avgustinova and Bernd M{\"o}bius and Dietrich Klakow},
url = {https://arxiv.org/abs/2008.00545},
doi = {https://doi.org/10.21437/Interspeech.2020-2930},
year = {2020},
date = {2020},
booktitle = {Proceedings of Interspeech 2020},
pages = {477-481},
abstract = {State-of-the-art spoken language identification (LID) systems, which are based on end-to-end deep neural networks, have shown remarkable success not only in discriminating between distant languages but also between closely-related languages or even different spoken varieties of the same language. However, it is still unclear to what extent neural LID models generalize to speech samples with different acoustic conditions due to domain shift. In this paper, we present a set of experiments to investigate the impact of domain mismatch on the performance of neural LID systems for a subset of six Slavic languages across two domains (read speech and radio broadcast) and examine two low-level signal descriptors (spectral and cepstral features) for this task. Our experiments show that (1) out-of-domain speech samples severely hinder the performance of neural LID models, and (2) while both spectral and cepstral features show comparable performance within-domain, spectral features show more robustness under domain mismatch. Moreover, we apply unsupervised domain adaptation to minimize the discrepancy between the two domains in our study. We achieve relative accuracy improvements that range from 9% to 77% depending on the diversity of acoustic conditions in the source domain.},
pubstate = {published},
type = {inproceedings}
}


Projects:   C1 C4

Abdullah, Badr M.; Kudera, Jacek; Avgustinova, Tania; Möbius, Bernd; Klakow, Dietrich

Rediscovering the Slavic Continuum in Representations Emerging from Neural Models of Spoken Language Identification Inproceedings

Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2020), International Committee on Computational Linguistics (ICCL), pp. 128-139, Barcelona, Spain (Online), 2020.

Deep neural networks have been employed for various spoken language recognition tasks, including tasks that are multilingual by definition such as spoken language identification (LID). In this paper, we present a neural model for Slavic language identification in speech signals and analyze its emergent representations to investigate whether they reflect objective measures of language relatedness or non-linguists’ perception of language similarity. While our analysis shows that the language representation space indeed captures language relatedness to a great extent, we find perceptual confusability to be the best predictor of the language representation similarity.

@inproceedings{abdullah_etal_vardial2020,
title = {Rediscovering the Slavic Continuum in Representations Emerging from Neural Models of Spoken Language Identification},
author = {Badr M. Abdullah and Jacek Kudera and Tania Avgustinova and Bernd M{\"o}bius and Dietrich Klakow},
url = {https://www.aclweb.org/anthology/2020.vardial-1.12},
year = {2020},
date = {2020},
booktitle = {Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2020)},
pages = {128-139},
publisher = {International Committee on Computational Linguistics (ICCL)},
address = {Barcelona, Spain (Online)},
abstract = {Deep neural networks have been employed for various spoken language recognition tasks, including tasks that are multilingual by definition such as spoken language identification (LID). In this paper, we present a neural model for Slavic language identification in speech signals and analyze its emergent representations to investigate whether they reflect objective measures of language relatedness or non-linguists’ perception of language similarity. While our analysis shows that the language representation space indeed captures language relatedness to a great extent, we find perceptual confusability to be the best predictor of the language representation similarity.},
pubstate = {published},
type = {inproceedings}
}


Projects:   C1 C4

Köhn, Arne; Wichlacz, Julia; Torralba, Álvaro; Höller, Daniel; Hoffmann, Jörg; Koller, Alexander

Generating Instructions at Different Levels of Abstraction Inproceedings

Proceedings of the 28th International Conference on Computational Linguistics, International Committee on Computational Linguistics, pp. 2802-2813, Barcelona, Spain (Online), 2020.

When generating technical instructions, it is often convenient to describe complex objects in the world at different levels of abstraction. A novice user might need an object explained piece by piece, while for an expert, talking about the complex object (e.g. a wall or railing) directly may be more succinct and efficient. We show how to generate building instructions at different levels of abstraction in Minecraft. We introduce the use of hierarchical planning to this end, a method from AI planning which can capture the structure of complex objects neatly. A crowdsourcing evaluation shows that the choice of abstraction level matters to users, and that an abstraction strategy which balances low-level and high-level object descriptions compares favorably to ones which don’t.

@inproceedings{kohn-etal-2020-generating,
title = {Generating Instructions at Different Levels of Abstraction},
author = {Arne K{\"o}hn and Julia Wichlacz and {\'A}lvaro Torralba and Daniel H{\"o}ller and J{\"o}rg Hoffmann and Alexander Koller},
url = {https://aclanthology.org/2020.coling-main.252/},
doi = {https://doi.org/10.18653/v1/2020.coling-main.252},
year = {2020},
date = {2020},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
pages = {2802-2813},
publisher = {International Committee on Computational Linguistics},
address = {Barcelona, Spain (Online)},
abstract = {When generating technical instructions, it is often convenient to describe complex objects in the world at different levels of abstraction. A novice user might need an object explained piece by piece, while for an expert, talking about the complex object (e.g. a wall or railing) directly may be more succinct and efficient. We show how to generate building instructions at different levels of abstraction in Minecraft. We introduce the use of hierarchical planning to this end, a method from AI planning which can capture the structure of complex objects neatly. A crowdsourcing evaluation shows that the choice of abstraction level matters to users, and that an abstraction strategy which balances low-level and high-level object descriptions compares favorably to ones which don't.},
pubstate = {published},
type = {inproceedings}
}


Project:   A7

Köhn, Arne; Wichlacz, Julia; Schäfer, Christine; Torralba, Álvaro; Hoffmann, Jörg; Koller, Alexander

MC-Saar-Instruct: a Platform for Minecraft Instruction Giving Agents Inproceedings

Proceedings of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue, Association for Computational Linguistics, pp. 53-56, 1st virtual meeting, 2020.

We present a comprehensive platform to run human-computer experiments where an agent instructs a human in Minecraft, a 3D blocksworld environment. This platform enables comparisons between different agents by matching users to agents. It performs extensive logging and takes care of all boilerplate, making it easy to incorporate and evaluate new agents. Our environment is prepared to evaluate any kind of instruction giving system, recording the interaction and all actions of the user. We provide example architects, a Wizard-of-Oz architect and set-up scripts to automatically download, build and start the platform.

@inproceedings{Hoeller2020IJCAIb,
title = {MC-Saar-Instruct: a Platform for Minecraft Instruction Giving Agents},
author = {Arne K{\"o}hn and Julia Wichlacz and Christine Sch{\"a}fer and {\'A}lvaro Torralba and J{\"o}rg Hoffmann and Alexander Koller},
url = {https://www.aclweb.org/anthology/2020.sigdial-1.7},
year = {2020},
date = {2020},
booktitle = {Proceedings of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue},
pages = {53-56},
publisher = {Association for Computational Linguistics},
address = {1st virtual meeting},
abstract = {We present a comprehensive platform to run human-computer experiments where an agent instructs a human in Minecraft, a 3D blocksworld environment. This platform enables comparisons between different agents by matching users to agents. It performs extensive logging and takes care of all boilerplate, making it easy to incorporate and evaluate new agents. Our environment is prepared to evaluate any kind of instruction giving system, recording the interaction and all actions of the user. We provide example architects, a Wizard-of-Oz architect and set-up scripts to automatically download, build and start the platform.},
pubstate = {published},
type = {inproceedings}
}


Project:   A7

Höller, Daniel; Bercher, Pascal; Behnke, Gregor; Biundo, Susanne

HTN Plan Repair via Model Transformation Inproceedings

Proceedings of the 43rd German Conference on Artificial Intelligence (KI), Springer, pp. 88-101, 2020.

To make planning feasible, planning models abstract from many details of the modeled system. When executing plans in the actual system, the model might be inaccurate in a critical point, and plan execution may fail. There are two options to handle this case: the previous solution can be modified to address the failure (plan repair), or the planning process can be re-started from the new situation (re-planning). In HTN planning, discarding the plan and generating a new one from the novel situation is not easily possible, because the HTN solution criteria make it necessary to take already executed actions into account. Therefore all approaches to repair plans in the literature are based on specialized algorithms. In this paper, we discuss the problem in detail and introduce a novel approach that makes it possible to use unchanged, off-the-shelf HTN planning systems to repair broken HTN plans. That way, no specialized solvers are needed.

@inproceedings{Hoeller2020KI,
title = {HTN Plan Repair via Model Transformation},
author = {Daniel H{\"o}ller and Pascal Bercher and Gregor Behnke and Susanne Biundo},
url = {https://link.springer.com/chapter/10.1007/978-3-030-58285-2_7},
year = {2020},
date = {2020},
booktitle = {Proceedings of the 43rd German Conference on Artificial Intelligence (KI)},
pages = {88-101},
publisher = {Springer},
abstract = {To make planning feasible, planning models abstract from many details of the modeled system. When executing plans in the actual system, the model might be inaccurate in a critical point, and plan execution may fail. There are two options to handle this case: the previous solution can be modified to address the failure (plan repair), or the planning process can be re-started from the new situation (re-planning). In HTN planning, discarding the plan and generating a new one from the novel situation is not easily possible, because the HTN solution criteria make it necessary to take already executed actions into account. Therefore all approaches to repair plans in the literature are based on specialized algorithms. In this paper, we discuss the problem in detail and introduce a novel approach that makes it possible to use unchanged, off-the-shelf HTN planning systems to repair broken HTN plans. That way, no specialized solvers are needed.},
pubstate = {published},
type = {inproceedings}
}


Project:   A7

Häuser, Katja; Kray, Jutta; Borovsky, Arielle

Great expectations: Evidence for graded prediction of grammatical gender Inproceedings

CogSci, 2020.

Language processing is predictive in nature. But how do people balance multiple competing options as they predict upcoming meanings? Here, we investigated whether readers generate graded predictions about the grammatical gender of nouns. Sentence contexts were manipulated so that they strongly biased people’s expectations towards two or more nouns that had the same grammatical gender (single bias condition), or they biased multiple genders from different grammatical classes (multiple bias condition). Our expectation was that unexpected articles should lead to elevated reading times (RTs) in the single-bias condition when probabilistic expectations towards a particular gender are violated. Indeed, the results showed greater sensitivity among language users towards unexpected articles in the single-bias condition; however, RTs on unexpected gender-marked articles were facilitated rather than slowed. Our data confirm that difficulty in sentence processing is modulated by uncertainty about predicted information, and suggest that readers make graded predictions about grammatical gender.

@inproceedings{haeuser2020great,
title = {Great expectations: Evidence for graded prediction of grammatical gender},
author = {Katja H{\"a}user and Jutta Kray and Arielle Borovsky},
year = {2020},
date = {2020},
booktitle = {CogSci},
abstract = {Language processing is predictive in nature. But how do people balance multiple competing options as they predict upcoming meanings? Here, we investigated whether readers generate graded predictions about the grammatical gender of nouns. Sentence contexts were manipulated so that they strongly biased people's expectations towards two or more nouns that had the same grammatical gender (single bias condition), or they biased multiple genders from different grammatical classes (multiple bias condition). Our expectation was that unexpected articles should lead to elevated reading times (RTs) in the single-bias condition when probabilistic expectations towards a particular gender are violated. Indeed, the results showed greater sensitivity among language users towards unexpected articles in the single-bias condition; however, RTs on unexpected gender-marked articles were facilitated rather than slowed. Our data confirm that difficulty in sentence processing is modulated by uncertainty about predicted information, and suggest that readers make graded predictions about grammatical gender.},
pubstate = {published},
type = {inproceedings}
}


Project:   A5

Zhai, Fangzhou; Demberg, Vera; Koller, Alexander

Story Generation with Rich Details Inproceedings

Proceedings of the 28th International Conference on Computational Linguistics (CoLing 2020), International Committee on Computational Linguistics, pp. 2346-2351, Barcelona, Spain (Online), 2020.

Automatically generated stories need to be not only coherent, but also interesting. Apart from realizing a story line, the text also needs to include rich details to engage the readers. We propose a model that features two different generation components: an outliner, which proceeds the main story line to realize global coherence; a detailer, which supplies relevant details to the story in a locally coherent manner. Human evaluations show our model substantially improves the informativeness of generated text while retaining its coherence, outperforming various baselines.

@inproceedings{zhai-etal-2020-story,
title = {Story Generation with Rich Details},
author = {Fangzhou Zhai and Vera Demberg and Alexander Koller},
url = {https://www.aclweb.org/anthology/2020.coling-main.212},
doi = {https://doi.org/10.18653/v1/2020.coling-main.212},
year = {2020},
date = {2020},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics (CoLing 2020)},
pages = {2346-2351},
publisher = {International Committee on Computational Linguistics},
address = {Barcelona, Spain (Online)},
abstract = {Automatically generated stories need to be not only coherent, but also interesting. Apart from realizing a story line, the text also needs to include rich details to engage the readers. We propose a model that features two different generation components: an outliner, which proceeds the main story line to realize global coherence; a detailer, which supplies relevant details to the story in a locally coherent manner. Human evaluations show our model substantially improves the informativeness of generated text while retaining its coherence, outperforming various baselines.},
pubstate = {published},
type = {inproceedings}
}


Project:   A3

Torabi Asr, Fatemeh; Demberg, Vera

Interpretation of Discourse Connectives Is Probabilistic: Evidence From the Study of But and Although Journal Article

Discourse Processes, 57, pp. 376-399, 2020.

Connectives can facilitate the processing of discourse relations by helping comprehenders to infer the intended coherence relation holding between two text spans. Previous experimental studies have focused on pairs of connectives that are very different from one another to be able to compare and formalize the distinguishing effects of these particles in discourse comprehension. In this article, we compare two connectives, but and although, which overlap in terms of the relations they can signal. We demonstrate in a set of carefully controlled studies that while a connective can be a marker of several discourse relations, it can have a specific fine-grained biasing effect on linguistic inferences and that this bias can be derived (or predicted) from the connectives’ distribution of relations found in production data. The effects that we find speak to the ambiguity of discourse connectives, in general, and the different functions of but and although, in particular. These effects cannot be explained within the earlier accounts of discourse connectives, which propose that each connective has a core meaning or processing instruction. Instead, we here lay out a probabilistic account of connective meaning and interpretation, which is based on the distribution of connectives in production and is supported by our experimental findings.
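
The probabilistic account argued for here can be stated compactly. As an illustration in our own notation (not a formula from the article), the bias a connective c induces towards a coherence relation r is the conditional distribution estimated from production data, e.g. relation-annotated corpora:

\[
P(r \mid c) \;=\; \frac{\mathrm{count}(c,\, r)}{\sum_{r'} \mathrm{count}(c,\, r')}
\]

On this view, but and although both mark several relations but differ in their conditional distributions, and it is this distribution, rather than a single core meaning or processing instruction, that shapes comprehenders' fine-grained inferences.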

@article{torabi2020interpretation,
title = {Interpretation of Discourse Connectives Is Probabilistic: Evidence From the Study of But and Although},
author = {Fatemeh Torabi Asr and Vera Demberg},
url = {https://www.tandfonline.com/doi/full/10.1080/0163853X.2019.1700760},
doi = {https://doi.org/10.1080/0163853X.2019.1700760},
year = {2020},
date = {2020-01-27},
journal = {Discourse Processes},
pages = {376-399},
volume = {57},
number = {4},
abstract = {Connectives can facilitate the processing of discourse relations by helping comprehenders to infer the intended coherence relation holding between two text spans. Previous experimental studies have focused on pairs of connectives that are very different from one another to be able to compare and formalize the distinguishing effects of these particles in discourse comprehension. In this article, we compare two connectives, but and although, which overlap in terms of the relations they can signal. We demonstrate in a set of carefully controlled studies that while a connective can be a marker of several discourse relations, it can have a specific fine-grained biasing effect on linguistic inferences and that this bias can be derived (or predicted) from the connectives’ distribution of relations found in production data. The effects that we find speak to the ambiguity of discourse connectives, in general, and the different functions of but and although, in particular. These effects cannot be explained within the earlier accounts of discourse connectives, which propose that each connective has a core meaning or processing instruction. Instead, we here lay out a probabilistic account of connective meaning and interpretation, which is based on the distribution of connectives in production and is supported by our experimental findings.},
pubstate = {published},
type = {article}
}


Project:   B2

Mecklinger, Axel; Höltje, Gerrit; Ranker, Lika; Eschmann, Kathrin

Unexpected but plausible: The consequences of disconfirmed predictions for episodic memory formation Miscellaneous

CNS 2020 Virtual meeting, Abstract Book, pp. 53 (B79), 2020.

@miscellaneous{Mecklinger_etal2020,
title = {Unexpected but plausible: The consequences of disconfirmed predictions for episodic memory formation},
author = {Axel Mecklinger and Gerrit H{\"o}ltje and Lika Ranker and Kathrin Eschmann},
year = {2020},
date = {2020},
booktitle = {CNS 2020 Virtual meeting, Abstract Book},
pages = {53 (B79)},
pubstate = {published},
type = {miscellaneous}
}


Project:   A6

Chen, Yu; Avgustinova, Tania

Machine Translation from an Intercomprehension Perspective Inproceedings

Proceedings of the Fourth Conference on Machine Translation (WMT), Volume 3: Shared Task Papers, pp. 192-196, Florence, Italy, 2019.

Within the first shared task on machine translation between similar languages, we present our first attempts on Czech to Polish machine translation from an intercomprehension perspective. We propose methods based on the mutual intelligibility of the two languages, taking advantage of their orthographic and phonological similarity, in the hope to improve over our baselines. The translation results are evaluated using BLEU. On this metric, none of our proposals could outperform the baselines on the final test set. The current setups are rather preliminary, and there are several potential improvements we can try in the future.

@inproceedings{csplMT,
title = {Machine Translation from an Intercomprehension Perspective},
author = {Yu Chen and Tania Avgustinova},
url = {https://aclanthology.org/W19-5425},
doi = {https://doi.org/10.18653/v1/W19-5425},
year = {2019},
date = {2019},
booktitle = {Proceedings of the Fourth Conference on Machine Translation (WMT), Volume 3: Shared Task Papers},
pages = {192-196},
address = {Florence, Italy},
abstract = {Within the first shared task on machine translation between similar languages, we present our first attempts on Czech to Polish machine translation from an intercomprehension perspective. We propose methods based on the mutual intelligibility of the two languages, taking advantage of their orthographic and phonological similarity, in the hope to improve over our baselines. The translation results are evaluated using BLEU. On this metric, none of our proposals could outperform the baselines on the final test set. The current setups are rather preliminary, and there are several potential improvements we can try in the future.},
pubstate = {published},
type = {inproceedings}
}


Project:   C4

Ankener, Christine

The influence of visual information on word predictability and processing effort PhD Thesis

Saarland University, Saarbrücken, Germany, 2019.

A word’s predictability or surprisal in linguistic context, as determined by cloze probabilities or language models (e.g., Frank, 2013a), is related to processing effort, in that less expected words take more effort to process (e.g., Hale, 2001). This shows how, in purely linguistic contexts, rational approaches have been proven valid to predict and formalise results from language processing studies. However, the surprisal (or predictability) of a word may also be influenced by extra-linguistic factors, such as visual context information, as given in situated language processing. While, in the case of linguistic contexts, it is known that the incrementally processed information affects the mental model (e.g., Zwaan and Radvansky, 1998) at each word in a probabilistic way, no such observations have been made so far in the case of visual context information. Although it has been shown that in the visual world paradigm (VWP), anticipatory eye movements suggest that listeners exploit the scene to predict what will be mentioned next (Altmann and Kamide, 1999), it is so far unclear how visual information actually affects expectations for and processing effort of target words. If visual context effects on word processing effort can be observed, we hypothesise that rational concepts can be extended in order to formalise these effects, thereby making them statistically accessible for language models. In a line of experiments, I hence observe how visual information – which is inherently different from linguistic context, for instance in its non-incremental, at-once accessibility – affects target words. Our findings are a clear and robust demonstration that the non-linguistic context can immediately influence both lexical expectations and surprisal-based processing effort as assessed by two different on-line measures of effort (a pupillary and an EEG one). Finally, I use surprisal to formalise the measured results and propose an extended formula to take visual information into account.
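
The extended formula is developed in the thesis itself; as a schematic sketch of the idea (not the thesis's final formulation), surprisal is conditioned on the visual context V in addition to the preceding words:

\[
\mathrm{surprisal}(w_i) \;=\; -\log P(w_i \mid w_1, \ldots, w_{i-1}, V)
\]

so that a scene which makes a target word more expectable lowers its surprisal and, with it, the predicted processing effort.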

@phdthesis{Ankener_Diss_2019,
title = {The influence of visual information on word predictability and processing effort},
author = {Christine Ankener},
url = {https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/27905},
doi = {https://doi.org/10.22028/D291-28451},
year = {2019},
date = {2019},
school = {Saarland University},
address = {Saarbr{\"u}cken, Germany},
abstract = {A word’s predictability or surprisal in linguistic context, as determined by cloze probabilities or language models (e.g., Frank, 2013a), is related to processing effort, in that less expected words take more effort to process (e.g., Hale, 2001). This shows how, in purely linguistic contexts, rational approaches have been proven valid to predict and formalise results from language processing studies. However, the surprisal (or predictability) of a word may also be influenced by extra-linguistic factors, such as visual context information, as given in situated language processing. While, in the case of linguistic contexts, it is known that the incrementally processed information affects the mental model (e.g., Zwaan and Radvansky, 1998) at each word in a probabilistic way, no such observations have been made so far in the case of visual context information. Although it has been shown that in the visual world paradigm (VWP), anticipatory eye movements suggest that listeners exploit the scene to predict what will be mentioned next (Altmann and Kamide, 1999), it is so far unclear how visual information actually affects expectations for and processing effort of target words. If visual context effects on word processing effort can be observed, we hypothesise that rational concepts can be extended in order to formalise these effects, thereby making them statistically accessible for language models. In a line of experiments, I hence observe how visual information – which is inherently different from linguistic context, for instance in its non-incremental, at-once accessibility – affects target words. Our findings are a clear and robust demonstration that the non-linguistic context can immediately influence both lexical expectations and surprisal-based processing effort as assessed by two different on-line measures of effort (a pupillary and an EEG one). Finally, I use surprisal to formalise the measured results and propose an extended formula to take visual information into account.},
pubstate = {published},
type = {phdthesis}
}


Project:   A5

Jágrová, Klára

Reading Polish with Czech Eyes: Distance and Surprisal in Quantitative, Qualitative, and Error Analyses of Intelligibility PhD Thesis

Saarland University, Saarbrücken, Germany, 2019.

In CHAPTER I, I first introduce the thesis in the context of the project workflow in section 1. I then summarise the methods and findings from the project publications about the languages in focus. There I also introduce the relevant concepts and terminology viewed in the literature as possible predictors of intercomprehension and processing difficulty. CHAPTER II presents a quantitative (section 4) and a qualitative (section 5) analysis of the results of the cooperative translation experiments. The focus of this thesis – the language pair PL-CS – is explained and the hypotheses are introduced in section 6. The experiment website is introduced in section 7 with an overview over participants, the different experiments conducted and in which section they are discussed. In CHAPTER IV, free translation experiments are discussed in which two different sets of individual word stimuli were presented to Czech readers: (i) Cognates that are transformable with regular PL-CS correspondences (section 12) and (ii) the 100 most frequent PL nouns (section 13). CHAPTER V presents the findings of experiments in which PL NPs in two different linearisation conditions were presented to Czech readers (section 14.1-14.6). A short digression is made when I turn to experiments with PL internationalisms which were presented to German readers (14.7). CHAPTER VI discusses the methods and results of cloze translation experiments with highly predictable target words in sentential context (section 15) and random context with sentences from the cooperative translation experiments (section 16). A final synthesis of the findings, together with an outlook, is provided in CHAPTER VII.



@phdthesis{Jagrova_Diss_2019,
title = {Reading Polish with Czech Eyes: Distance and Surprisal in Quantitative, Qualitative, and Error Analyses of Intelligibility},
author = {Kl{\'a}ra J{\'a}grov{\'a}},
url = {https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/32995},
doi = {https://doi.org/10.22028/D291-32708},
year = {2019},
date = {2019},
school = {Saarland University},
address = {Saarbr{\"u}cken, Germany},
abstract = {In CHAPTER I, I first introduce the thesis in the context of the project workflow in section 1. I then summarise the methods and findings from the project publications about the languages in focus. There I also introduce the relevant concepts and terminology viewed in the literature as possible predictors of intercomprehension and processing difficulty. CHAPTER II presents a quantitative (section 4) and a qualitative (section 5) analysis of the results of the cooperative translation experiments. The focus of this thesis – the language pair PL-CS – is explained and the hypotheses are introduced in section 6. The experiment website is introduced in section 7 with an overview over participants, the different experiments conducted and in which section they are discussed. In CHAPTER IV, free translation experiments are discussed in which two different sets of individual word stimuli were presented to Czech readers: (i) Cognates that are transformable with regular PL-CS correspondences (section 12) and (ii) the 100 most frequent PL nouns (section 13). CHAPTER V presents the findings of experiments in which PL NPs in two different linearisation conditions were presented to Czech readers (section 14.1-14.6). A short digression is made when I turn to experiments with PL internationalisms which were presented to German readers (14.7). CHAPTER VI discusses the methods and results of cloze translation experiments with highly predictable target words in sentential context (section 15) and random context with sentences from the cooperative translation experiments (section 16). A final synthesis of the findings, together with an outlook, is provided in CHAPTER VII.


In KAPITEL I stelle ich zun{\"a}chst die These im Kontext des Projektablaufs in Abschnitt 1 vor. Anschlie{\ss}end fasse ich die Methoden und Erkenntnisse aus den Projektpublikationen zu den untersuchten Sprachen zusammen. Dort stelle ich auch die relevanten Konzepte und die Terminologie vor, die in der Literatur als m{\"o}gliche Pr{\"a}diktoren f{\"u}r Interkomprehension und Verarbeitungsschwierigkeiten angesehen werden. KAPITEL II enth{\"a}lt eine quantitative (Abschnitt 4) und eine qualitative (Abschnitt 5) Analyse der Ergebnisse der kooperativen {\"U}bersetzungsexperimente. Der Fokus dieser Arbeit - das Sprachenpaar PL-CS - wird erl{\"a}utert und die Hypothesen werden in Abschnitt 6 vorgestellt. Die Experiment-Website wird in Abschnitt 7 mit einer {\"U}bersicht {\"u}ber die Teilnehmer, die verschiedenen durchgef{\"u}hrten Experimente und die Abschnitte, in denen sie besprochen werden, vorgestellt. In KAPITEL IV werden Experimente zur freien {\"U}bersetzung besprochen, bei denen tschechischen Lesern zwei verschiedene S{\"a}tze einzelner Wortstimuli pr{\"a}sentiert wurden: (i) Kognaten, die mit regul{\"a}ren PL-CS-Korrespondenzen umgewandelt werden k{\"o}nnen (Abschnitt 12) und (ii) die 100 h{\"a}ufigsten PL-Substantive (Abschnitt 13). KAPITEL V stellt die Ergebnisse von Experimenten vor, in denen tschechischen Lesern PL-NP in zwei verschiedenen Linearisierungszust{\"a}nden pr{\"a}sentiert wurden (Abschnitt 14.1-14.6). Einen kurzen Exkurs mache ich, wenn ich mich den Experimenten mit PL-Internationalismen zuwende, die deutschen Lesern pr{\"a}sentiert wurden (14.7). KAPITEL VI er{\"o}rtert die Methoden und Ergebnisse von L{\"u}ckentexten mit hochgradig vorhersehbaren Zielw{\"o}rtern im Satzkontext (Abschnitt 15) und Zufallskontext mit S{\"a}tzen aus den kooperativen {\"U}bersetzungsexperimenten (Abschnitt 16). Eine abschlie{\ss}ende Synthese der Ergebnisse und ein Ausblick finden sich in KAPITEL VII.},
pubstate = {published},
type = {phdthesis}
}


Project:   C4

Venhuizen, Noortje; Crocker, Matthew W.; Brouwer, Harm

Semantic Entropy in Language Comprehension Journal Article

Entropy, 21, pp. 1159, 2019.

Language is processed on a more or less word-by-word basis, and the processing difficulty induced by each word is affected by our prior linguistic experience as well as our general knowledge about the world. Surprisal and entropy reduction have been independently proposed as linking theories between word processing difficulty and probabilistic language models. Extant models, however, are typically limited to capturing linguistic experience and hence cannot account for the influence of world knowledge. A recent comprehension model by Venhuizen, Crocker, and Brouwer (2019, Discourse Processes) improves upon this situation by instantiating a comprehension-centric metric of surprisal that integrates linguistic experience and world knowledge at the level of interpretation and combines them in determining online expectations. Here, we extend this work by deriving a comprehension-centric metric of entropy reduction from this model. In contrast to previous work, which has found that surprisal and entropy reduction are not easily dissociated, we do find a clear dissociation in our model. While both surprisal and entropy reduction derive from the same cognitive process – the word-by-word updating of the unfolding interpretation – they reflect different aspects of this process: state-by-state expectation (surprisal) versus end-state confirmation (entropy reduction).
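
For reference, the two linking theories contrasted in the article are standardly defined as follows; in the comprehension-centric model described here, the probability distribution ranges over interpretations (situations s) rather than over sentence continuations:

\[
\mathrm{surprisal}(w_t) = -\log P(w_t \mid w_{1 \ldots t-1}),
\qquad
\Delta H_t = H_{t-1} - H_t,
\quad\text{where}\quad
H_t = -\sum_{s} P(s \mid w_{1 \ldots t}) \log P(s \mid w_{1 \ldots t}).
\]

This makes the dissociation concrete: surprisal tracks the state-by-state expectation at each word, while entropy reduction tracks how much uncertainty about the final interpretation each word removes.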

@article{Venhuizen2019,
title = {Semantic Entropy in Language Comprehension},
author = {Noortje Venhuizen and Matthew W. Crocker and Harm Brouwer},
url = {https://www.mdpi.com/1099-4300/21/12/1159},
doi = {https://doi.org/10.3390/e21121159},
year = {2019},
date = {2019-11-27},
journal = {Entropy},
pages = {1159},
volume = {21},
number = {12},
abstract = {Language is processed on a more or less word-by-word basis, and the processing difficulty induced by each word is affected by our prior linguistic experience as well as our general knowledge about the world. Surprisal and entropy reduction have been independently proposed as linking theories between word processing difficulty and probabilistic language models. Extant models, however, are typically limited to capturing linguistic experience and hence cannot account for the influence of world knowledge. A recent comprehension model by Venhuizen, Crocker, and Brouwer (2019, Discourse Processes) improves upon this situation by instantiating a comprehension-centric metric of surprisal that integrates linguistic experience and world knowledge at the level of interpretation and combines them in determining online expectations. Here, we extend this work by deriving a comprehension-centric metric of entropy reduction from this model. In contrast to previous work, which has found that surprisal and entropy reduction are not easily dissociated, we do find a clear dissociation in our model. While both surprisal and entropy reduction derive from the same cognitive process - the word-by-word updating of the unfolding interpretation - they reflect different aspects of this process: state-by-state expectation (surprisal) versus end-state confirmation (entropy reduction).},
pubstate = {published},
type = {article}
}


Project:   A1

Shi, Wei; Demberg, Vera

Next Sentence Prediction helps Implicit Discourse Relation Classification within and across Domains Inproceedings

Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Association for Computational Linguistics, pp. 5789-5795, Hong Kong, China, 2019.

Implicit discourse relation classification is one of the most difficult tasks in discourse parsing. Previous studies have generally focused on extracting better representations of the relational arguments. In order to solve the task, it is however additionally necessary to capture what events are expected to cause or follow each other. Current discourse relation classifiers fall short in this respect. We here show that this shortcoming can be effectively addressed by using the bidirectional encoder representations from transformers (BERT) proposed by Devlin et al. (2019), which were trained on a next-sentence prediction task, and thus encode a representation of likely next sentences. The BERT-based model outperforms the current state of the art in 11-way classification by 8% points on the standard PDTB dataset. Our experiments also demonstrate that the model can be successfully ported to other domains: on the BioDRB dataset, the model outperforms the state-of-the-art system by around 15% points.
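
As an illustrative sketch of the setup the paper builds on (not the authors' code), the two relational arguments can be fed to a pretrained BERT sentence-pair classifier via the Hugging Face transformers library; the example arguments are hypothetical, and the 11-label head stands in for the paper's 11-way PDTB setting:

import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=11)  # 11-way relation classification head

# Hypothetical pair of relational arguments (Arg1, Arg2).
arg1 = "The company reported record profits."
arg2 = "Its share price fell sharply."

# Encoding the arguments as a sentence pair mirrors the next-sentence
# prediction input format BERT was pretrained with.
inputs = tokenizer(arg1, arg2, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted relation index
# Note: the classification head is freshly initialized here; fine-tuning on
# PDTB-style data is required before the prediction is meaningful.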

@inproceedings{shi-demberg-2019-next,
title = {Next Sentence Prediction helps Implicit Discourse Relation Classification within and across Domains},
author = {Wei Shi and Vera Demberg},
url = {https://www.aclweb.org/anthology/D19-1586},
doi = {https://doi.org/10.18653/v1/D19-1586},
year = {2019},
date = {2019-11-03},
booktitle = {Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
pages = {5789-5795},
publisher = {Association for Computational Linguistics},
address = {Hong Kong, China},
abstract = {Implicit discourse relation classification is one of the most difficult tasks in discourse parsing. Previous studies have generally focused on extracting better representations of the relational arguments. In order to solve the task, it is however additionally necessary to capture what events are expected to cause or follow each other. Current discourse relation classifiers fall short in this respect. We here show that this shortcoming can be effectively addressed by using the bidirectional encoder representations from transformers (BERT) proposed by Devlin et al. (2019), which were trained on a next-sentence prediction task, and thus encode a representation of likely next sentences. The BERT-based model outperforms the current state of the art in 11-way classification by 8% points on the standard PDTB dataset. Our experiments also demonstrate that the model can be successfully ported to other domains: on the BioDRB dataset, the model outperforms the state-of-the-art system by around 15% points.},
pubstate = {published},
type = {inproceedings}
}


Project:   B2

Ortmann, Katrin; Roussel, Adam; Dipper, Stefanie

Evaluating Off-the-Shelf NLP Tools for German Inproceedings

Proceedings of the Conference on Natural Language Processing (KONVENS), pp. 212-222, Erlangen, Germany, 2019.

It is not always easy to keep track of what tools are currently available for a particular annotation task, nor is it obvious how the provided models will perform on a given dataset. In this contribution, we provide an overview of the tools available for the automatic annotation of German-language text. We evaluate fifteen free and open source NLP tools for the linguistic annotation of German, looking at the fundamental NLP tasks of sentence segmentation, tokenization, POS tagging, morphological analysis, lemmatization, and dependency parsing. To get an idea of how the systems’ performance will generalize to various domains, we compiled our test corpus from various non-standard domains. All of the systems in our study are evaluated not only with respect to accuracy, but also the computational resources required.

@inproceedings{Ortmann2019b,
title = {Evaluating Off-the-Shelf NLP Tools for German},
author = {Katrin Ortmann and Adam Roussel and Stefanie Dipper},
url = {https://github.com/rubcompling/konvens2019},
year = {2019},
date = {2019},
booktitle = {Proceedings of the Conference on Natural Language Processing (KONVENS)},
pages = {212-222},
address = {Erlangen, Germany},
abstract = {It is not always easy to keep track of what tools are currently available for a particular annotation task, nor is it obvious how the provided models will perform on a given dataset. In this contribution, we provide an overview of the tools available for the automatic annotation of German-language text. We evaluate fifteen free and open source NLP tools for the linguistic annotation of German, looking at the fundamental NLP tasks of sentence segmentation, tokenization, POS tagging, morphological analysis, lemmatization, and dependency parsing. To get an idea of how the systems’ performance will generalize to various domains, we compiled our test corpus from various non-standard domains. All of the systems in our study are evaluated not only with respect to accuracy, but also the computational resources required.},
pubstate = {published},
type = {inproceedings}
}


Project:   C6

Jágrová, Klára; Stenger, Irina; Telus, Magdalena

Slavische Interkomprehension im 5-Sprachen-Kurs – Dokumentation eines Semesters Journal Article

Polnisch in Deutschland. Zeitschrift der Bundesvereinigung der Polnischlehrkräfte. Sondernummer: Emil Krebs und die Mehrsprachigkeit in Europa, pp. 122–133, 2019.

@article{Jágrová2019,
title = {Slavische Interkomprehension im 5-Sprachen-Kurs – Dokumentation eines Semesters},
author = {Kl{\'a}ra J{\'a}grov{\'a} and Irina Stenger and Magdalena Telus},
year = {2019},
date = {2019},
journal = {Polnisch in Deutschland. Zeitschrift der Bundesvereinigung der Polnischlehrkr{\"a}fte. Sondernummer: Emil Krebs und die Mehrsprachigkeit in Europa},
pages = {122–133},
pubstate = {published},
type = {article}
}


Project:   C4

Stenger, Irina

Zur Rolle der Orthographie in der slavischen Interkomprehension mit besonderem Fokus auf die kyrillische Schrift PhD Thesis

Saarland University, Saarbrücken, Germany, 2019, ISBN 978-3-86223-283-3.

The Slavic languages constitute a major branch of the Indo-European language family. This raises the question of the extent to which speakers of different Slavic languages can understand one another intercomprehensively. Intercomprehension denotes the ability of speakers of related languages to communicate with each other while each speaker uses his or her own language. This thesis investigates the orthographic intelligibility of Slavic languages written in the Cyrillic script in intercomprehensive reading. Six East and South Slavic languages – Bulgarian, Macedonian, Russian, Serbian, Ukrainian, and Belarusian – are compared with one another and analyzed statistically with respect to orthographic similarities and differences. The empirical investigation focuses on the recognition of individual cognates with diachronically motivated orthographic correspondences in the East and South Slavic languages, taking Russian as the point of departure. The methods presented and the results obtained in this work constitute an empirical contribution to research on Slavic intercomprehension and to intercomprehension didactics.

@phdthesis{Stenger_diss_2019,
title = {Zur Rolle der Orthographie in der slavischen Interkomprehension mit besonderem Fokus auf die kyrillische Schrift},
author = {Irina Stenger},
year = {2019},
date = {2019},
school = {Saarland University},
address = {Saarbr{\"u}cken, Germany},
abstract = {Die slavischen Sprachen stellen einen bedeutenden indogermanischen Sprachzweig dar. Es stellt sich die Frage, inwieweit sich Sprecher verschiedener slavischer Sprachen interkomprehensiv verst{\"a}ndigen k{\"o}nnen. Unter Interkomprehension wird die Kommunikationsf{\"a}higkeit von Sprechern verwandter Sprachen verstanden, wobei sich jeder Sprecher seiner Sprache bedient. Die vorliegende Arbeit untersucht die orthographische Verst{\"a}ndlichkeit slavischer Sprachen mit kyrillischer Schrift im interkomprehensiven Lesen. Sechs ost- und s{\"u}dslavische Sprachen - Bulgarisch, Makedonisch, Russisch, Serbisch, Ukrainisch und Wei{\ss}russisch - werden im Hinblick auf orthographische {\"A}hnlichkeiten und Unterschiede miteinander verglichen und statistisch analysiert. Der Fokus der empirischen Untersuchung liegt auf der Erkennung einzelner Kognaten mit diachronisch motivierten orthographischen Korrespondenzen in ost- und s{\"u}dslavischen Sprachen, ausgehend vom Russischen. Die in dieser Arbeit vorgestellten Methoden und erzielten Ergebnisse stellen einen empirischen Beitrag zur slavischen Interkomprehensionsforschung und Interkomrepehensionsdidaktik dar.},
pubstate = {published},
type = {phdthesis}
}


Project:   C4

Stenger, Irina; Avgustinova, Tania; Belousov, Konstantin I.; Baranov, Dmitrij A.; Erofeeva, Elena V.

Interaction of linguistic and socio-cognitive factors in receptive multilingualism [Vzaimodejstvie lingvističeskich i sociokognitivnych parametrov pri receptivnom mul’tilingvisme] Inproceedings

25th International Conference on Computational Linguistics and Intellectual Technologies (Dialogue 2019), Moscow, Russia, 2019.

@inproceedings{Stenger2019,
title = {Interaction of linguistic and socio-cognitive factors in receptive multilingualism [Vzaimodejstvie lingvisti{\v{c}}eskich i sociokognitivnych parametrov pri receptivnom mul’tilingvisme]},
author = {Irina Stenger and Tania Avgustinova and Konstantin I. Belousov and Dmitrij A. Baranov and Elena V. Erofeeva},
url = {http://www.dialog-21.ru/digest/2019/online/},
year = {2019},
date = {2019},
booktitle = {25th International Conference on Computational Linguistics and Intellectual Technologies (Dialogue 2019)},
address = {Moscow, Russia},
pubstate = {published},
type = {inproceedings}
}


Project:   C4

Calvillo, Jesús

Connectionist language production: distributed representations and the uniform information density hypothesis PhD Thesis

Saarland University, Saarbrücken, Germany, 2019.

This dissertation approaches the task of modeling human sentence production from a connectionist point of view, and using distributed semantic representations. The main questions it tries to address are: (i) whether the distributed semantic representations defined by Frank et al. (2009) are suitable to model sentence production using artificial neural networks, (ii) the behavior and internal mechanism of a model that uses these representations and recurrent neural networks, and (iii) a mechanistic account of the Uniform Information Density Hypothesis (UID; Jaeger, 2006; Levy and Jaeger, 2007). Regarding the first point, the semantic representations of Frank et al. (2009), called situation vectors, are points in a vector space where each vector contains information about the observations in which an event and a corresponding sentence are true. These representations have been successfully used to model language comprehension (e.g., Frank et al., 2009; Venhuizen et al., 2018). During the construction of these vectors, however, a dimensionality reduction process introduces some loss of information, which causes some aspects to be no longer recognizable, reducing the performance of a model that utilizes them. In order to address this issue, belief vectors are introduced, which could be regarded as an alternative way to obtain semantic representations of manageable dimensionality. These two types of representations (situation and belief vectors) are evaluated by using them as input for a sentence production model that implements an extension of a Simple Recurrent Neural network (Elman, 1990). This model was tested under different conditions corresponding to different levels of systematicity, which is the ability of a model to generalize from a set of known items to a set of novel ones. Systematicity is an essential attribute that a model of sentence processing has to possess, considering that the number of sentences that can be generated for a given language is infinite, and therefore it is not feasible to memorize all possible message-sentence pairs. The results showed that the model was able to generalize with a very high performance in all test conditions, demonstrating a systematic behavior. Furthermore, the errors that it elicited were related to very similar semantic representations, reflecting the speech error literature, which states that speech errors involve elements with semantic or phonological similarity. This result further demonstrates the systematic behavior of the model, as it processes similar semantic representations in a similar way, even if they are new to the model. Regarding the second point, the sentence production model was analyzed in two different ways. First, by looking at the sentences it produces, including the errors elicited, highlighting difficulties and preferences of the model. The results revealed that the model learns the syntactic patterns of the language, reflecting its statistical nature, and that its main difficulty is related to very similar semantic representations, sometimes producing unintended sentences that are however very semantically related to the intended ones. Second, the connection weights and activation patterns of the model were also analyzed, reaching an algorithmic account of the internal processing of the model. According to this, the input semantic representation activates the words that are related to its content, giving an idea of their order by providing relatively more activation to words that are likely to appear early in the sentence. Then, at each time step the word that was previously produced activates syntactic and semantic constraints on the next word productions, while the context units of the recurrence preserve information through time, allowing the model to enforce long distance dependencies. We propose that these results can inform about the internal processing of models with similar architecture. Regarding the third point, an extension of the model is proposed with the goal of modeling UID. According to UID, language production is an efficient process affected by a tendency to produce linguistic units distributing the information as uniformly as possible and close to the capacity of the communication channel, given the encoding possibilities of the language, thus optimizing the amount of information that is transmitted per time unit. This extension of the model approaches UID by balancing two different production strategies: one where the model produces the word with highest probability given the semantics and the previously produced words, and another one where the model produces the word that would minimize the sentence length given the semantic representation and the previously produced words. By combining these two strategies, the model was able to produce sentences with different levels of information density and uniformity, providing a first step to model UID at the algorithmic level of analysis. In sum, the results show that the distributed semantic representations of Frank et al. (2009) can be used to model sentence production, exhibiting systematicity. Moreover, an algorithmic account of the internal behavior of the model was reached, with the potential to generalize to other models with similar architecture. Finally, a model of UID is presented, highlighting some important aspects about UID that need to be addressed in order to go from the formulation of UID at the computational level of analysis to a mechanistic account at the algorithmic level.
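
To make the architecture concrete, here is a minimal numpy sketch of an Elman-style simple recurrent network producing a word distribution step by step, in the spirit of the production model described above; all layer sizes, weight initializations, and the greedy decoding strategy are illustrative assumptions, not the dissertation's actual implementation:

import numpy as np

rng = np.random.default_rng(0)
sem_dim, hid_dim, vocab = 150, 120, 40           # hypothetical dimensions

W_sem = rng.normal(0, 0.1, (hid_dim, sem_dim))   # semantics -> hidden
W_ctx = rng.normal(0, 0.1, (hid_dim, hid_dim))   # context (previous hidden) -> hidden
W_in  = rng.normal(0, 0.1, (hid_dim, vocab))     # previously produced word -> hidden
W_out = rng.normal(0, 0.1, (vocab, hid_dim))     # hidden -> word logits

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def produce(semantics, max_len=10):
    """Greedy word-by-word production from a semantic representation."""
    h = np.zeros(hid_dim)        # context units start empty
    prev = np.zeros(vocab)       # no previous word at the first step
    sentence = []
    for _ in range(max_len):
        h = np.tanh(W_sem @ semantics + W_ctx @ h + W_in @ prev)
        p = softmax(W_out @ h)   # distribution over the lexicon
        w = int(p.argmax())      # highest-probability word (one of the two UID strategies)
        sentence.append(w)
        prev = np.zeros(vocab); prev[w] = 1.0
    return sentence

print(produce(rng.normal(size=sem_dim)))

The context units (h carried across steps) are what let such a model enforce long-distance dependencies; the length-minimizing strategy mentioned in the abstract would replace the argmax line with a choice that shortens the expected remaining sentence.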

@phdthesis{Calvillo_diss_2019,
title = {Connectionist language production: distributed representations and the uniform information density hypothesis},
author = {Jes{\'u}s Calvillo},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:291--ds-279340},
doi = {https://doi.org/10.22028/D291-27934},
year = {2019},
date = {2019},
school = {Saarland University},
address = {Saarbr{\"u}cken, Germany},
abstract = {This dissertation approaches the task of modeling human sentence production from a connectionist point of view, and using distributed semantic representations. The main questions it tries to address are: (i) whether the distributed semantic representations defined by Frank et al. (2009) are suitable to model sentence production using artificial neural networks, (ii) the behavior and internal mechanism of a model that uses these representations and recurrent neural networks, and (iii) a mechanistic account of the Uniform Information Density Hypothesis (UID; Jaeger, 2006; Levy and Jaeger, 2007). Regarding the first point, the semantic representations of Frank et al. (2009), called situation vectors, are points in a vector space where each vector contains information about the observations in which an event and a corresponding sentence are true. These representations have been successfully used to model language comprehension (e.g., Frank et al., 2009; Venhuizen et al., 2018). During the construction of these vectors, however, a dimensionality reduction process introduces some loss of information, which causes some aspects to be no longer recognizable, reducing the performance of a model that utilizes them. In order to address this issue, belief vectors are introduced, which could be regarded as an alternative way to obtain semantic representations of manageable dimensionality. These two types of representations (situation and belief vectors) are evaluated by using them as input for a sentence production model that implements an extension of a Simple Recurrent Neural network (Elman, 1990). This model was tested under different conditions corresponding to different levels of systematicity, which is the ability of a model to generalize from a set of known items to a set of novel ones. Systematicity is an essential attribute that a model of sentence processing has to possess, considering that the number of sentences that can be generated for a given language is infinite, and therefore it is not feasible to memorize all possible message-sentence pairs. The results showed that the model was able to generalize with a very high performance in all test conditions, demonstrating a systematic behavior. Furthermore, the errors that it elicited were related to very similar semantic representations, reflecting the speech error literature, which states that speech errors involve elements with semantic or phonological similarity. This result further demonstrates the systematic behavior of the model, as it processes similar semantic representations in a similar way, even if they are new to the model. Regarding the second point, the sentence production model was analyzed in two different ways. First, by looking at the sentences it produces, including the errors elicited, highlighting difficulties and preferences of the model. The results revealed that the model learns the syntactic patterns of the language, reflecting its statistical nature, and that its main difficulty is related to very similar semantic representations, sometimes producing unintended sentences that are however very semantically related to the intended ones. Second, the connection weights and activation patterns of the model were also analyzed, reaching an algorithmic account of the internal processing of the model. According to this, the input semantic representation activates the words that are related to its content, giving an idea of their order by providing relatively more activation to words that are likely to appear early in the sentence. Then, at each time step the word that was previously produced activates syntactic and semantic constraints on the next word productions, while the context units of the recurrence preserve information through time, allowing the model to enforce long distance dependencies. We propose that these results can inform about the internal processing of models with similar architecture. Regarding the third point, an extension of the model is proposed with the goal of modeling UID. According to UID, language production is an efficient process affected by a tendency to produce linguistic units distributing the information as uniformly as possible and close to the capacity of the communication channel, given the encoding possibilities of the language, thus optimizing the amount of information that is transmitted per time unit. This extension of the model approaches UID by balancing two different production strategies: one where the model produces the word with highest probability given the semantics and the previously produced words, and another one where the model produces the word that would minimize the sentence length given the semantic representation and the previously produced words. By combining these two strategies, the model was able to produce sentences with different levels of information density and uniformity, providing a first step to model UID at the algorithmic level of analysis. In sum, the results show that the distributed semantic representations of Frank et al. (2009) can be used to model sentence production, exhibiting systematicity. Moreover, an algorithmic account of the internal behavior of the model was reached, with the potential to generalize to other models with similar architecture. Finally, a model of UID is presented, highlighting some important aspects about UID that need to be addressed in order to go from the formulation of UID at the computational level of analysis to a mechanistic account at the algorithmic level.},
pubstate = {published},
type = {phdthesis}
}

Copy BibTeX to Clipboard

Project:   C3
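
To make the production architecture described in the abstract above more concrete, here is a minimal, hypothetical Python sketch (not the thesis code): an Elman-style recurrent network that maps a distributed semantic vector plus the previously produced word to a distribution over the vocabulary, combined with a toy decoding rule that trades off the two UID-related strategies mentioned above (highest-probability word vs. a word that shortens the sentence). The vocabulary, dimensions, random weights, and the one-step lookahead length heuristic are all illustrative assumptions.

# Minimal sketch (not the author's implementation) of an Elman-style
# production model conditioned on a distributed semantic vector, plus a
# toy UID-flavoured decoding rule. All names and dimensions are assumed.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["the", "boy", "girl", "sees", "ball", "<eos>"]  # toy vocabulary
SEM_DIM, HID_DIM, V = 10, 16, len(VOCAB)

# Randomly initialised weights stand in for trained parameters.
W_sem = rng.normal(0, 0.1, (HID_DIM, SEM_DIM))   # semantics -> hidden
W_in  = rng.normal(0, 0.1, (HID_DIM, V))         # previous word -> hidden
W_rec = rng.normal(0, 0.1, (HID_DIM, HID_DIM))   # context (recurrent) units
W_out = rng.normal(0, 0.1, (V, HID_DIM))         # hidden -> vocabulary

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def step(sem, prev_onehot, h_prev):
    """One step: semantics + previous word + context -> next-word distribution."""
    h = np.tanh(W_sem @ sem + W_in @ prev_onehot + W_rec @ h_prev)
    return softmax(W_out @ h), h

def produce(sem, lam=0.5, max_len=10):
    """Generate a sentence. lam balances the two strategies:
    lam = 0 -> always pick the most probable next word;
    lam = 1 -> always pick the word assumed to shorten the sentence
    (here, crudely, the one that most activates <eos> on the next step)."""
    h, prev, out = np.zeros(HID_DIM), np.zeros(V), []
    for _ in range(max_len):
        p, h = step(sem, prev, h)
        # One-step lookahead length heuristic: how strongly would each
        # candidate word activate <eos> at the following time step?
        eos_next = np.array([step(sem, np.eye(V)[w], h)[0][VOCAB.index("<eos>")]
                             for w in range(V)])
        score = (1 - lam) * np.log(p + 1e-12) + lam * np.log(eos_next + 1e-12)
        w = int(np.argmax(score))
        if VOCAB[w] == "<eos>":
            break
        out.append(VOCAB[w])
        prev = np.eye(V)[w]
    return out

sem_vec = rng.normal(size=SEM_DIM)   # stand-in for a situation/belief vector
print(produce(sem_vec, lam=0.0))     # probability-driven production
print(produce(sem_vec, lam=0.8))     # length-minimising bias

With lam = 0 the sketch reduces to greedy, probability-driven production; raising lam biases it toward earlier sentence completion, mimicking how the two strategies can be traded off to vary information density.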

Jachmann, Torsten; Drenhaus, Heiner; Staudte, Maria; Crocker, Matthew W.

Influence of speakers’ gaze on situated language comprehension: Evidence from Event-Related Potentials Journal Article

Brain and Cognition, 135, Elsevier, Article 103571, 2019.

Behavioral studies have shown that speaker gaze to objects in a co-present scene can influence listeners’ sentence comprehension. To gain deeper insight into the mechanisms involved in gaze processing and integration, we conducted two ERP experiments (N = 30, Age: [18, 32] and [19, 33] respectively). Participants watched a centrally positioned face performing gaze actions aligned to utterances comparing two out of three displayed objects. They were asked to judge whether the sentence was true given the provided scene. We manipulated the second gaze cue to be either Congruent (baseline), Incongruent or Averted (Exp1)/Mutual (Exp2). When speaker gaze is used to form lexical expectations about upcoming referents, we found an attenuated N200 when phonological information confirms these expectations (Congruent). Similarly, we observed attenuated N400 amplitudes when gaze-cued expectations (Congruent) facilitate lexical retrieval. Crucially, only a violation of gaze-cued lexical expectations (Incongruent) leads to a P600 effect, suggesting the necessity to revise the mental representation of the situation. Our results support the hypothesis that gaze is utilized above and beyond simply enhancing a cued object’s prominence. Rather, gaze to objects leads to their integration into the mental representation of the situation before they are mentioned.

@article{Jachmann2019b,
title = {Influence of speakers’ gaze on situated language comprehension: Evidence from Event-Related Potentials},
author = {Torsten Jachmann and Heiner Drenhaus and Maria Staudte and Matthew W. Crocker},
url = {https://www.sciencedirect.com/science/article/pii/S0278262619300120},
doi = {https://doi.org/10.1016/j.bandc.2019.05.009},
year = {2019},
date = {2019},
journal = {Brain and Cognition},
pages = {103571},
publisher = {Elsevier},
volume = {135},
abstract = {Behavioral studies have shown that speaker gaze to objects in a co-present scene can influence listeners’ sentence comprehension. To gain deeper insight into the mechanisms involved in gaze processing and integration, we conducted two ERP experiments (N = 30, Age: [18, 32] and [19, 33] respectively). Participants watched a centrally positioned face performing gaze actions aligned to utterances comparing two out of three displayed objects. They were asked to judge whether the sentence was true given the provided scene. We manipulated the second gaze cue to be either Congruent (baseline), Incongruent or Averted (Exp1)/Mutual (Exp2). When speaker gaze is used to form lexical expectations about upcoming referents, we found an attenuated N200 when phonological information confirms these expectations (Congruent). Similarly, we observed attenuated N400 amplitudes when gaze-cued expectations (Congruent) facilitate lexical retrieval. Crucially, only a violation of gaze-cued lexical expectations (Incongruent) leads to a P600 effect, suggesting the necessity to revise the mental representation of the situation. Our results support the hypothesis that gaze is utilized above and beyond simply enhancing a cued object’s prominence. Rather, gaze to objects leads to their integration into the mental representation of the situation before they are mentioned.},
pubstate = {published},
type = {article}
}

Copy BibTeX to Clipboard

Projects:   A5 C3
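
As a purely illustrative aside on the abstract above: ERP effects such as the reported N400 attenuation are conventionally quantified as mean amplitudes in a fixed time window, averaged over trials and compared across conditions. The following hypothetical Python sketch shows that computation on synthetic data; the sampling rate, analysis window, and simulated signals are assumptions, not the authors' pipeline or data.

# Hedged sketch of conventional N400 quantification: mean amplitude in a
# 300-500 ms post-onset window, averaged over epochs, per condition.
import numpy as np

rng = np.random.default_rng(1)
FS = 500                               # sampling rate in Hz (assumed)
t = np.arange(-0.2, 0.8, 1 / FS)       # epoch: -200..800 ms around word onset

def simulate_epochs(n400_gain, n=30):
    """Synthetic single-trial EEG: an N400-like negativity plus noise."""
    n400 = -n400_gain * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
    return n400 + rng.normal(0, 2.0, size=(n, t.size))

def mean_amplitude(epochs, lo=0.3, hi=0.5):
    """Average over trials, then over the analysis window (in seconds)."""
    win = (t >= lo) & (t <= hi)
    return epochs.mean(axis=0)[win].mean()

congruent   = simulate_epochs(n400_gain=1.0)   # gaze-cued expectations met
incongruent = simulate_epochs(n400_gain=3.0)   # expectations violated

print("Congruent N400:   %.2f uV" % mean_amplitude(congruent))
print("Incongruent N400: %.2f uV" % mean_amplitude(incongruent))
# A more negative mean in the Incongruent condition corresponds to the
# larger N400, i.e., the Congruent condition's amplitude is attenuated.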
