Publications

Lemke, Tyll Robin

Experimental investigations on the syntax and usage of fragments Miscellaneous

Experimental investigations on the syntax and usage of fragments, Open Germanic Linguistics, Language Science Press, Berlin, 2021.

This book investigates the syntax and usage of fragments (Morgan 1973), apparently subsentential utterances like "A coffee, please!" which fulfill the same communicative function as the corresponding full sentence "I'd like to have a coffee, please!". Even though such utterances are frequently used, they challenge the central role that has been attributed to the notion of sentence in linguistic theory, particularly from a semantic perspective.

The first part of the book is dedicated to the syntactic analysis of fragments, which is investigated with experimental methods. Currently there are several competing theoretical analyses of fragments, which rely almost exclusively on introspective data. The experiments presented in this book constitute a first systematic evaluation of some of their crucial predictions and, taken together, support an in situ ellipsis account of fragments, as has been suggested by Reich (2007).

The second part of the book addresses the questions of why fragments are used at all, and under which circumstances they are preferred over complete sentences. Syntactic accounts impose licensing conditions on fragments, but they do not explain why fragments are sometimes (dis)preferred provided that their usage is licensed. This book proposes an information-theoretic account of fragments, which predicts that the usage of fragments is constrained by a general tendency to distribute processing effort uniformly across the utterance. With respect to fragments, this leads to two predictions, which are empirically confirmed: Speakers tend towards omitting predictable words and they insert additional redundancy before unpredictable words.
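The information-theoretic prediction can be illustrated with a toy sketch (not taken from the book): a bigram model assigns each word a surprisal, and words whose surprisal falls below a threshold are treated as candidates for omission. The corpus, the threshold, and the function names here are invented purely for illustration.

```python
# Illustrative sketch only: a toy bigram model of word predictability.
# Highly predictable (low-surprisal) words are flagged as candidates for
# omission, in the spirit of uniform-information-density accounts.
import math
from collections import Counter

# Hypothetical toy corpus.
corpus = "i would like a coffee please . i would like a tea please .".split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def surprisal(prev, word):
    """Surprisal -log2 P(word | prev) under the toy bigram model."""
    return -math.log2(bigrams[(prev, word)] / unigrams[prev])

# Words below a hypothetical surprisal threshold are highly predictable
# and thus, on this account, candidates for omission.
utterance = "i would like a coffee please .".split()
omittable = [w for prev, w in zip(utterance, utterance[1:])
             if surprisal(prev, w) < 1.0]
```

In this toy example the unpredictable content word ("coffee") is retained while the predictable function words are flagged as omittable, mirroring the book's two predictions.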

@misc{Lemke2021,
title = {Experimental investigations on the syntax and usage of fragments},
author = {Tyll Robin Lemke},
url = {https://langsci-press.org/catalog/book/321},
doi = {10.5281/zenodo.5596236},
year = {2021},
date = {2021},
booktitle = {Experimental investigations on the syntax and usage of fragments},
publisher = {Language Science Press},
address = {Berlin},
abstract = {This book investigates the syntax and usage of fragments (Morgan 1973), apparently subsentential utterances like "A coffee, please!" which fulfill the same communicative function as the corresponding full sentence "I'd like to have a coffee, please!". Even though such utterances are frequently used, they challenge the central role that has been attributed to the notion of sentence in linguistic theory, particularly from a semantic perspective. The first part of the book is dedicated to the syntactic analysis of fragments, which is investigated with experimental methods. Currently there are several competing theoretical analyses of fragments, which rely almost exclusively on introspective data. The experiments presented in this book constitute a first systematic evaluation of some of their crucial predictions and, taken together, support an in situ ellipsis account of fragments, as has been suggested by Reich (2007). The second part of the book addresses the questions of why fragments are used at all, and under which circumstances they are preferred over complete sentences. Syntactic accounts impose licensing conditions on fragments, but they do not explain why fragments are sometimes (dis)preferred provided that their usage is licensed. This book proposes an information-theoretic account of fragments, which predicts that the usage of fragments is constrained by a general tendency to distribute processing effort uniformly across the utterance. With respect to fragments, this leads to two predictions, which are empirically confirmed: Speakers tend towards omitting predictable words and they insert additional redundancy before unpredictable words.},
pubstate = {published},
type = {miscellaneous}
}


Project:   B3

Kalimuthu, Marimuthu; Mogadala, Aditya; Mosbach, Marius; Klakow, Dietrich

Fusion Models for Improved Image Captioning Inproceedings

Pattern Recognition. ICPR International Workshops and Challenges, pp. 381-395, Cham, 2020.

Visual captioning aims to generate textual descriptions given images or videos. Traditionally, image captioning models are trained on human annotated datasets such as Flickr30k and MS-COCO, which are limited in size and diversity. This limitation hinders the generalization capabilities of these models while also rendering them liable to making mistakes. Language models can, however, be trained on vast amounts of freely available unlabelled data and have recently emerged as successful language encoders and coherent text generators. Meanwhile, several unimodal and multimodal fusion techniques have been proven to work well for natural language generation and automatic speech recognition. Building on these recent developments, and with the aim of improving the quality of generated captions, the contribution of our work in this paper is two-fold: First, we propose a generic multimodal model fusion framework for caption generation as well as emendation where we utilize different fusion strategies to integrate a pretrained Auxiliary Language Model (AuxLM) within the traditional encoder-decoder visual captioning frameworks. Next, we employ the same fusion strategies to integrate a pretrained Masked Language Model (MLM), namely BERT, with a visual captioning model, viz. Show, Attend, and Tell, for emending both syntactic and semantic errors in captions. Our caption emendation experiments on three benchmark image captioning datasets, viz. Flickr8k, Flickr30k, and MSCOCO, show improvements over the baseline, indicating the usefulness of our proposed multimodal fusion strategies. Further, we perform a preliminary qualitative analysis on the emended captions and identify error categories based on the type of corrections.
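One family of fusion strategies can be sketched as a weighted late fusion of next-word distributions from the captioning decoder and an auxiliary language model. This is an illustrative stand-in, not the paper's exact architecture; all names, weights, and probabilities below are hypothetical.

```python
# Illustrative sketch only: weighted late fusion of two next-word
# probability distributions (captioning decoder + auxiliary LM).

def late_fusion(p_caption, p_auxlm, weight=0.7):
    """Mix the captioning decoder's next-word distribution with an
    auxiliary language model's distribution, then renormalize."""
    vocab = set(p_caption) | set(p_auxlm)
    mixed = {w: weight * p_caption.get(w, 0.0)
                + (1 - weight) * p_auxlm.get(w, 0.0)
             for w in vocab}
    total = sum(mixed.values())
    return {w: p / total for w, p in mixed.items()}

# Hypothetical distributions over a tiny vocabulary:
p_vision = {"dog": 0.6, "cat": 0.3, "the": 0.1}   # from the visual decoder
p_lm = {"dog": 0.2, "cat": 0.2, "the": 0.6}       # from the auxiliary LM
fused = late_fusion(p_vision, p_lm)
```

The mixing weight governs how strongly the language model can overrule the visual decoder; the paper evaluates several such integration strategies rather than a single fixed one.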

@inproceedings{Kalimuthu2021fusion,
title = {Fusion Models for Improved Image Captioning},
author = {Marimuthu Kalimuthu and Aditya Mogadala and Marius Mosbach and Dietrich Klakow},
url = {https://arxiv.org/abs/2010.15251},
doi = {10.1007/978-3-030-68780-9_32},
year = {2020},
date = {2020},
booktitle = {Pattern Recognition. ICPR International Workshops and Challenges},
pages = {381-395},
address = {Cham},
abstract = {Visual captioning aims to generate textual descriptions given images or videos. Traditionally, image captioning models are trained on human annotated datasets such as Flickr30k and MS-COCO, which are limited in size and diversity. This limitation hinders the generalization capabilities of these models while also rendering them liable to making mistakes. Language models can, however, be trained on vast amounts of freely available unlabelled data and have recently emerged as successful language encoders and coherent text generators. Meanwhile, several unimodal and multimodal fusion techniques have been proven to work well for natural language generation and automatic speech recognition. Building on these recent developments, and with the aim of improving the quality of generated captions, the contribution of our work in this paper is two-fold: First, we propose a generic multimodal model fusion framework for caption generation as well as emendation where we utilize different fusion strategies to integrate a pretrained Auxiliary Language Model (AuxLM) within the traditional encoder-decoder visual captioning frameworks. Next, we employ the same fusion strategies to integrate a pretrained Masked Language Model (MLM), namely BERT, with a visual captioning model, viz. Show, Attend, and Tell, for emending both syntactic and semantic errors in captions. Our caption emendation experiments on three benchmark image captioning datasets, viz. Flickr8k, Flickr30k, and MSCOCO, show improvements over the baseline, indicating the usefulness of our proposed multimodal fusion strategies. Further, we perform a preliminary qualitative analysis on the emended captions and identify error categories based on the type of corrections.},
pubstate = {published},
type = {inproceedings}
}


Project:   B4

Mogadala, Aditya; Mosbach, Marius; Klakow, Dietrich

Sparse Graph to Sequence Learning for Vision Conditioned Long Textual Sequence Generation Inproceedings

Bridge Between Perception and Reasoning: Graph Neural Networks & Beyond, Workshop at ICML, 2020.

Generating longer textual sequences conditioned on visual information is an interesting problem to explore. The challenge here goes beyond standard vision-conditioned sentence-level generation (e.g., image or video captioning), as it requires producing a brief and coherent story describing the visual content. In this paper, we cast this vision-to-sequence task as a graph-to-sequence learning problem and approach it with the Transformer architecture. To be specific, we introduce the Sparse Graph-to-Sequence Transformer (SGST) for encoding the graph and decoding a sequence. The encoder aims to directly encode graph-level semantics, while the decoder is used to generate longer sequences. Experiments conducted on the benchmark image paragraph dataset show that our proposed approach achieves a 13.3% improvement on the CIDEr evaluation measure compared to the previous state-of-the-art approach.
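The sparsity idea behind a graph-to-sequence encoder can be sketched as attention restricted to graph neighbours via a mask, instead of attending over all node pairs. The function and values below are illustrative, not the SGST implementation.

```python
# Illustrative sketch only: softmax attention with non-edges masked out,
# so each node attends only to its graph neighbours.
import math

def masked_attention_weights(scores, adjacency):
    """Row-wise softmax over attention scores; entries for node pairs
    without an edge are masked to -inf and thus receive zero weight."""
    weights = []
    for i, row in enumerate(scores):
        masked = [s if adjacency[i][j] else float("-inf")
                  for j, s in enumerate(row)]
        m = max(masked)
        exps = [math.exp(s - m) for s in masked]   # exp(-inf) == 0.0
        total = sum(exps)
        weights.append([e / total for e in exps])
    return weights

# Hypothetical 3-node path graph (self-loops kept) and raw scores:
adj = [[1, 1, 0], [1, 1, 1], [0, 1, 1]]
scores = [[0.5, 1.0, 2.0], [0.1, 0.2, 0.3], [1.0, 1.0, 1.0]]
attn = masked_attention_weights(scores, adj)
```

Masking keeps each attention row a valid distribution while zeroing out non-neighbour pairs, which is what makes the encoding sparse in the graph structure.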

@inproceedings{mogadala2020sparse,
title = {Sparse Graph to Sequence Learning for Vision Conditioned Long Textual Sequence Generation},
author = {Aditya Mogadala and Marius Mosbach and Dietrich Klakow},
url = {https://arxiv.org/abs/2007.06077},
year = {2020},
date = {2020},
booktitle = {Bridge Between Perception and Reasoning: Graph Neural Networks \& Beyond, Workshop at ICML},
abstract = {Generating longer textual sequences conditioned on visual information is an interesting problem to explore. The challenge here goes beyond standard vision-conditioned sentence-level generation (e.g., image or video captioning), as it requires producing a brief and coherent story describing the visual content. In this paper, we cast this vision-to-sequence task as a graph-to-sequence learning problem and approach it with the Transformer architecture. To be specific, we introduce the Sparse Graph-to-Sequence Transformer (SGST) for encoding the graph and decoding a sequence. The encoder aims to directly encode graph-level semantics, while the decoder is used to generate longer sequences. Experiments conducted on the benchmark image paragraph dataset show that our proposed approach achieves a 13.3% improvement on the CIDEr evaluation measure compared to the previous state-of-the-art approach.},
pubstate = {published},
type = {inproceedings}
}


Project:   B4

Ferber, Patrick; Hoffmann, Jörg; Helmert, Malte

Neural network heuristics for classical planning: A study of hyperparameter space Inproceedings

24th European Conference on Artificial Intelligence (ECAI’20), 2020.

Neural networks (NN) have been shown to be powerful state-value predictors in several complex games. Can similar successes be achieved in classical planning? Towards a systematic exploration of that question, we contribute a study of hyperparameter space in the most canonical setup: input = state, feed-forward NN, supervised learning, generalization only over initial state. We investigate a broad range of hyperparameters pertaining to NN design and training. We evaluate these techniques through their use as heuristic functions in Fast Downward. The results on IPC benchmarks show that highly competitive heuristics can be learned, yielding substantially smaller search spaces than standard techniques on some domains. But the heuristic functions are costly to evaluate, and the range of domains where useful heuristics are learned is limited. Our study provides the basis for further research improving on current weaknesses.
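The canonical setup the study describes (input = state, feed-forward NN, supervised learning of a heuristic value) can be sketched in miniature. The toy domain, architecture, and hyperparameters below are hypothetical stand-ins, far smaller than anything in the paper.

```python
# Illustrative sketch only: train a tiny feed-forward net to map binary
# state vectors to cost-to-go values, then use it as a heuristic.
import random

random.seed(0)

# Toy "states": 4-bit fact vectors; cost-to-go = number of facts still false.
states = [tuple((i >> j) & 1 for j in range(4)) for i in range(16)]
data = [([float(b) for b in s], float(s.count(0))) for s in states]

IN, HID = 4, 8
w1 = [[random.uniform(-0.5, 0.5) for _ in range(IN)] for _ in range(HID)]
b1 = [0.0] * HID
w2 = [random.uniform(-0.5, 0.5) for _ in range(HID)]
b2 = 0.0

def forward(x):
    """One hidden ReLU layer, scalar output (predicted cost-to-go)."""
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    return sum(v * hi for v, hi in zip(w2, h)) + b2, h

def train(epochs=1000, lr=0.02):
    """Plain SGD on squared error, with hand-written gradients."""
    global b2
    for _ in range(epochs):
        for x, y in data:
            pred, h = forward(x)
            err = pred - y                      # d(0.5*(pred-y)^2)/d(pred)
            for j in range(HID):
                if h[j] > 0.0:                  # ReLU passes the gradient
                    for i in range(IN):
                        w1[j][i] -= lr * err * w2[j] * x[i]
                    b1[j] -= lr * err * w2[j]
                w2[j] -= lr * err * h[j]
            b2 -= lr * err

train()

def heuristic(state):
    """Learned heuristic value for a state, as a search algorithm would query it."""
    return forward(state)[0]
```

Even in this toy form the sketch shows the trade-off the paper evaluates: the learned heuristic can rank states usefully, but every evaluation costs a full forward pass.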

@inproceedings{Ferber2020network,
title = {Neural network heuristics for classical planning: A study of hyperparameter space},
author = {Patrick Ferber and J{\"o}rg Hoffmann and Malte Helmert},
url = {https://ecai2020.eu/papers/433_paper.pdf},
year = {2020},
date = {2020},
booktitle = {24th European Conference on Artificial Intelligence (ECAI’20)},
abstract = {Neural networks (NN) have been shown to be powerful state-value predictors in several complex games. Can similar successes be achieved in classical planning? Towards a systematic exploration of that question, we contribute a study of hyperparameter space in the most canonical setup: input = state, feed-forward NN, supervised learning, generalization only over initial state. We investigate a broad range of hyperparameters pertaining to NN design and training. We evaluate these techniques through their use as heuristic functions in Fast Downward. The results on IPC benchmarks show that highly competitive heuristics can be learned, yielding substantially smaller search spaces than standard techniques on some domains. But the heuristic functions are costly to evaluate, and the range of domains where useful heuristics are learned is limited. Our study provides the basis for further research improving on current weaknesses.},
pubstate = {published},
type = {inproceedings}
}


Project:   A7

Mecklinger, Axel; Bader, Regine

From fluency to recognition decisions: A broader view of familiarity-based remembering Journal Article

Neuropsychologia, 146, 107527, 2020.

The goal of this article is to critically examine current claims and assumptions about the FN400, an event-related potential (ERP) component which has been related to familiarity memory though some uncertainty exists regarding the cognitive processes captured by the FN400. It is proposed that familiarity can be multiply determined and that an important distinction has to be made between a recent-exposure, relative familiarity mechanism indexed by the FN400 and an absolute/baseline familiarity mechanism being reflected by a coincidental but topographically distinct ERP effect. We suggest a broader conceptualization of the memory processes reflected by the FN400 and propose an unexpected fluency-attribution account of familiarity according to which familiarity results from a fast assessment of ongoing processing fluency relative to previous events or current expectations. The computations underlying fluency attribution may be closely related to those characterizing the relative familiarity mechanism underlying the FN400. We also argue that concerted activation of the perirhinal cortex (PrC) and the lateral prefrontal cortex (PFC) plays a pivotal role for fluency attributions and the generation of the FN400.

@article{MecklingerBader2020,
title = {From fluency to recognition decisions: A broader view of familiarity-based remembering},
author = {Axel Mecklinger and Regine Bader},
url = {https://www.sciencedirect.com/science/article/abs/pii/S0028393220302001},
doi = {10.1016/j.neuropsychologia.2020.107527},
year = {2020},
date = {2020},
journal = {Neuropsychologia},
pages = {107527},
volume = {146},
abstract = {The goal of this article is to critically examine current claims and assumptions about the FN400, an event-related potential (ERP) component which has been related to familiarity memory though some uncertainty exists regarding the cognitive processes captured by the FN400. It is proposed that familiarity can be multiply determined and that an important distinction has to be made between a recent-exposure, relative familiarity mechanism indexed by the FN400 and an absolute/baseline familiarity mechanism being reflected by a coincidental but topographically distinct ERP effect. We suggest a broader conceptualization of the memory processes reflected by the FN400 and propose an unexpected fluency-attribution account of familiarity according to which familiarity results from a fast assessment of ongoing processing fluency relative to previous events or current expectations. The computations underlying fluency attribution may be closely related to those characterizing the relative familiarity mechanism underlying the FN400. We also argue that concerted activation of the perirhinal cortex (PrC) and the lateral prefrontal cortex (PFC) plays a pivotal role for fluency attributions and the generation of the FN400.},
pubstate = {published},
type = {article}
}


Project:   A6

Höltje, Gerrit; Mecklinger, Axel

Feedback timing modulates interactions between reward learning and memory encoding: Evidence from event-related potentials Journal Article

Cognitive, Affective and Behavioral Neuroscience, 20, pp. 250-264, 2020.

Feedback-based learning relies on a procedural learning system driven by reward prediction errors (RPEs). The processing of temporally delayed feedback is supported by brain structures associated with declarative memory processes, but it is still unknown how delayed feedback processing and memory encoding interact. In this study, a subsequent memory paradigm was employed to investigate how the incidental encoding of feedback pictures presented with a short (SD, 500 ms) or long (LD, 6500 ms) delay in a probabilistic learning task affects the event-related potential (ERP) correlate of RPEs (i.e., the feedback-related negativity; FRN). In an ensuing test phase, a surprise recognition memory test for the feedback pictures was conducted. FRN amplitudes measured in the feedback-locked ERPs recorded during the learning phase (FRNpeak) and in the negative minus positive feedback difference wave (FRNdiff) were compared for subsequently remembered and forgotten feedback pictures. Feedback processing as reflected in the FRNpeak was diminished for remembered LD feedback pictures, indicating that delayed feedback processing and memory encoding competed for similar neural processing resources. As evidenced by large FRNdiff amplitudes in the SD condition, the evaluation of shortly delayed feedback strongly relied on the procedural learning system. A complementary model-based single trial analysis was conducted to validate models of the functional significance of the FRN. Consistent with previous studies, feedback-locked N170 and P300 amplitudes were sensitive to feedback delay. In the test phase, memory for LD feedback pictures was better than for SD pictures and accompanied by a late old-new effect, presumably reflecting extended recollective processing.

@article{hoeltje2020feedback,
title = {Feedback timing modulates interactions between reward learning and memory encoding: Evidence from event-related potentials},
author = {Gerrit H{\"o}ltje and Axel Mecklinger},
url = {https://pubmed.ncbi.nlm.nih.gov/31900874/},
doi = {10.3758/s13415-019-00765-5},
year = {2020},
date = {2020},
journal = {Cognitive, Affective and Behavioral Neuroscience},
pages = {250-264},
volume = {20},
number = {2},
abstract = {Feedback-based learning relies on a procedural learning system driven by reward prediction errors (RPEs). The processing of temporally delayed feedback is supported by brain structures associated with declarative memory processes, but it is still unknown how delayed feedback processing and memory encoding interact. In this study, a subsequent memory paradigm was employed to investigate how the incidental encoding of feedback pictures presented with a short (SD, 500 ms) or long (LD, 6500 ms) delay in a probabilistic learning task affects the event-related potential (ERP) correlate of RPEs (i.e., the feedback-related negativity; FRN). In an ensuing test phase, a surprise recognition memory test for the feedback pictures was conducted. FRN amplitudes measured in the feedback-locked ERPs recorded during the learning phase (FRNpeak) and in the negative minus positive feedback difference wave (FRNdiff) were compared for subsequently remembered and forgotten feedback pictures. Feedback processing as reflected in the FRNpeak was diminished for remembered LD feedback pictures, indicating that delayed feedback processing and memory encoding competed for similar neural processing resources. As evidenced by large FRNdiff amplitudes in the SD condition, the evaluation of shortly delayed feedback strongly relied on the procedural learning system. A complementary model-based single trial analysis was conducted to validate models of the functional significance of the FRN. Consistent with previous studies, feedback-locked N170 and P300 amplitudes were sensitive to feedback delay. In the test phase, memory for LD feedback pictures was better than for SD pictures and accompanied by a late old-new effect, presumably reflecting extended recollective processing.},
pubstate = {published},
type = {article}
}


Project:   A6

Ortmann, Katrin

Automatic Topological Field Identification in (Historical) German Texts Inproceedings

Proceedings of the 4th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature, pp. 10-18, Barcelona, Spain (online), 2020.

For the study of certain linguistic phenomena and their development over time, large amounts of textual data must be enriched with relevant annotations. Since the manual creation of such annotations requires a lot of effort, automating the process with NLP methods would be convenient. But the required amounts of training data are usually not available for non-standard or historical language. The present study investigates whether models trained on modern newspaper text can be used to automatically identify topological fields, i.e. syntactic structures, in different modern and historical German texts. The evaluation shows that, in general, it is possible to transfer a parser model to other registers or time periods with overall F1-scores >92%. However, an error analysis makes clear that additional rules and domain-specific training data would be beneficial if sentence structures differ significantly from the training data, e.g. in the case of Early New High German.

@inproceedings{Ortmann2020b,
title = {Automatic Topological Field Identification in (Historical) German Texts},
author = {Katrin Ortmann},
url = {https://www.aclweb.org/anthology/2020.latechclfl-1.2},
year = {2020},
date = {2020-12-12},
booktitle = {Proceedings of the 4th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature},
pages = {10-18},
address = {Barcelona, Spain (online)},
abstract = {For the study of certain linguistic phenomena and their development over time, large amounts of textual data must be enriched with relevant annotations. Since the manual creation of such annotations requires a lot of effort, automating the process with NLP methods would be convenient. But the required amounts of training data are usually not available for non-standard or historical language. The present study investigates whether models trained on modern newspaper text can be used to automatically identify topological fields, i.e. syntactic structures, in different modern and historical German texts. The evaluation shows that, in general, it is possible to transfer a parser model to other registers or time periods with overall F1-scores >92%. However, an error analysis makes clear that additional rules and domain-specific training data would be beneficial if sentence structures differ significantly from the training data, e.g. in the case of Early New High German.},
pubstate = {published},
type = {inproceedings}
}


Project:   C6

Höltje, Gerrit

Interactions between immediate and delayed feedback processing and memory encoding: an investigation using event-related potentials PhD Thesis

Saarland University, Saarbruecken, Germany, 2020.

Feedback-based learning relies on a procedural learning system mediated by dopaminergic reward prediction error (RPE) signals. Recent neuroimaging research indicates that the processing of temporally delayed feedback is supported by the hippocampus, a brain structure associated with declarative memory processes, but it is still unknown how delayed feedback processing and memory encoding interact. In this dissertation project, in a series of three experiments, a subsequent memory paradigm was employed to investigate how the incidental encoding of feedback pictures in a probabilistic learning task affects the event-related potential (ERP) correlate of RPEs in feedback processing, i.e., the feedback-related negativity (FRN), and how this interaction is modulated by feedback timing, valence, and explicit outcome expectations. In Experiment 1, task-unrelated scene pictures were presented together with performance feedback in the learning task. In an ensuing test phase, a surprise recognition memory test for the pictures was conducted. FRN amplitudes measured in the feedback-locked ERPs recorded during the learning phase (FRNpeak) and in the negative minus positive feedback difference wave (FRNdiff) were compared for subsequently remembered and forgotten feedback pictures. Pictures were remembered better when presented together with positive than with negative feedback, and ERP amplitudes in the FRNdiff time window predicted subsequent memory only for positive feedback pictures. Consistent with previous studies, shortly delayed (SD, 500 ms) feedback elicited larger FRNdiff amplitudes than long delayed feedback (LD, 6500 ms), whereas the reverse pattern was found in FRNpeak amplitudes. As evidenced by behavioral estimates and ERP old/new effects, positive feedback enhanced memory by boosting familiarity-based recognition. 
However, feedback timing did not affect memory, presumably because participants did not need to process the scene pictures in order to learn from feedback. In Experiment 2, the picture category signaled the valence of the feedback. LD feedback pictures were associated with better memory and more recollective processing than shortly delayed ones. Feedback processing as reflected in the FRNpeak was attenuated for remembered as compared to forgotten LD feedback pictures. This suggests that when feedback was delayed, feedback processing and memory encoding competed for similar neural processing resources. As evidenced by large FRNdiff amplitudes in the SD condition, the evaluation of shortly delayed feedback strongly relied on the procedural learning system. A complementary model-based single trial analysis was conducted to validate models of the functional significance of the FRN. Consistent with previous studies, feedback-locked N170 and P300 amplitudes were sensitive to feedback delay. Experiment 3 tested the hypothesis that the putative involvement of declarative learning processes in delayed feedback processing is mediated by the spontaneous generation of explicit outcome expectations during the feedback delay. A delayed feedback condition was compared with a Prediction condition in which participants were asked on each trial to predict the category of the upcoming feedback picture. Memory for the feedback pictures did not differ between the Prediction and Delay conditions. The FRNpeak subsequent memory effect obtained in Experiment 2 was replicated in both conditions, but more pronounced in the Prediction condition. As evidenced by ERP old/new effects, negative feedback pictures that disconfirmed explicit outcome expectations were associated with stronger recollective processing than those presented in the Delay condition.
Positive feedback pictures elicited a recognition bias and increased familiarity signals in the memory test, which could reflect a generalization of reward value to pictures of the same category (indoor or outdoor scene). Taken together, the findings obtained in this dissertation show multiple ways by which feedback processing and memory encoding can interact, and how this interaction is shaped by feedback timing, valence, and explicit outcome expectations.



@phdthesis{Höltje_Diss_2020,
title = {Interactions between immediate and delayed feedback processing and memory encoding: an investigation using event-related potentials},
author = {Gerrit H{\"o}ltje},
url = {https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/30348},
doi = {10.22028/D291-32889},
year = {2020},
date = {2020},
school = {Saarland University},
address = {Saarbruecken, Germany},
abstract = {Feedback-based learning relies on a procedural learning system mediated by dopaminergic reward prediction error (RPE) signals. Recent neuroimaging research indicates that the processing of temporally delayed feedback is supported by the hippocampus, a brain structure associated with declarative memory processes, but it is still unknown how delayed feedback processing and memory encoding interact. In this dissertation project, in a series of three experiments, a subsequent memory paradigm was employed to investigate how the incidental encoding of feedback pictures in a probabilistic learning task affects the event-related potential (ERP) correlate of RPEs in feedback processing, i.e., the feedback-related negativity (FRN), and how this interaction is modulated by feedback timing, valence, and explicit outcome expectations. In Experiment 1, task-unrelated scene pictures were presented together with performance feedback in the learning task. In an ensuing test phase, a surprise recognition memory test for the pictures was conducted. FRN amplitudes measured in the feedback-locked ERPs recorded during the learning phase (FRNpeak) and in the negative minus positive feedback difference wave (FRNdiff) were compared for subsequently remembered and forgotten feedback pictures. Pictures were remembered better when presented together with positive than with negative feedback, and ERP amplitudes in the FRNdiff time window predicted subsequent memory only for positive feedback pictures. Consistent with previous studies, shortly delayed (SD, 500 ms) feedback elicited larger FRNdiff amplitudes than long delayed feedback (LD, 6500 ms), whereas the reverse pattern was found in FRNpeak amplitudes. As evidenced by behavioral estimates and ERP old/new effects, positive feedback enhanced memory by boosting familiarity-based recognition. 
However, feedback timing did not affect memory, presumably because participants did not need to process the scene pictures in order to learn from feedback. In Experiment 2, the picture category signaled the valence of the feedback. LD feedback pictures were associated with better memory and more recollective processing than shortly delayed ones. Feedback processing as reflected in the FRNpeak was attenuated for remembered as compared to forgotten LD feedback pictures. This suggests that when feedback was delayed, feedback processing and memory encoding competed for similar neural processing resources. As evidenced by large FRNdiff amplitudes in the SD condition, the evaluation of shortly delayed feedback strongly relied on the procedural learning system. A complementary model-based single trial analysis was conducted to validate models of the functional significance of the FRN. Consistent with previous studies, feedback-locked N170 and P300 amplitudes were sensitive to feedback delay. Experiment 3 tested the hypothesis that the putative involvement of declarative learning processes in delayed feedback processing is mediated by the spontaneous generation of explicit outcome expectations during the feedback delay. A delayed feedback condition was compared with a Prediction condition in which participants were asked on each trial to predict the category of the upcoming feedback picture. Memory for the feedback pictures did not differ between the Prediction and Delay conditions. The FRNpeak subsequent memory effect obtained in Experiment 2 was replicated in both conditions, but more pronounced in the Prediction condition. As evidenced by ERP old/new effects, negative feedback pictures that disconfirmed explicit outcome expectations were associated with stronger recollective processing than those presented in the Delay condition.
Positive feedback pictures elicited a recognition bias and increased familiarity signals in the memory test, which could reflect a generalization of reward value to pictures of the same category (indoor or outdoor scene). Taken together, the findings obtained in this dissertation show multiple ways by which feedback processing and memory encoding can interact, and how this interaction is shaped by feedback timing, valence, and explicit outcome expectations.


Feedbackbasiertes Lernen beruht auf einem prozeduralen Lernsystem, das auf der neurobiologischen Ebene durch dopaminerge Belohnungsvorhersagefehlersignale vermittelt wird. Studien mit bildgebenden Verfahren weisen darauf hin, dass die Verarbeitung von zeitlich verz{\"o}gertem Feedback durch den Hippocampus unterst{\"u}tzt wird, eine Hirnstruktur, die mit deklarativen Ged{\"a}chtnisprozessen assoziiert ist. Es ist jedoch noch nicht bekannt, wie die Verarbeitung von verz{\"o}gertem Feedback mit der Ged{\"a}chtnisenkodierung interagiert. In diesem Dissertationsprojekt wurde in einer Serie von drei Experimenten die Methode der nachfolgenden Erinnerung verwendet, um zu untersuchen, wie die inzidentelle Enkodierung von Feedbackbildern in einer probabilistischen Lernaufgabe sich auf das im ereigniskorrelierten Potenzial (EKP) messbare Korrelat von Belohnungsvorhersagefehlern in der Feedbackverarbeitung, die Feedback-Negativierung (FRN), auswirkt und wie diese Interaktion durch zeitliche Charakteristika und Valenz des Feedbacks sowie durch explizite Ergebniserwartungen moduliert wird. Im ersten Experiment wurden Bilder von Innenr{\"a}umen und Landschaften zusammen mit dem Feedback in der Lernaufgabe pr{\"a}sentiert, wobei die Bilder nicht relevant f{\"u}r die Aufgabe waren. In der darauf folgenden Testphase wurde ein unerwarteter Rekognitionstest f{\"u}r die Bilder durchgef{\"u}hrt. FRN-Amplituden wurden in den w{\"a}hrend der Feedbackpr{\"a}sentation aufgezeichneten EKP gemessen (FRNpeak), sowie in der Differenzwelle, die durch die Subtraktion der durch positives Feedback erzeugten EKP von den durch negatives Feedback erzeugten EKP gebildet wurde (FRNdiff). Beide FRN-Ma{\ss}e wurden f{\"u}r sp{\"a}ter erinnerte und sp{\"a}ter vergessene Bilder verglichen. 
Bilder, die zusammen mit positivem Feedback gezeigt wurden, wurden besser erinnert als solche, die mit negativem Feedback gepaart wurden, und EKP-Amplituden im Zeitfenster der FRNdiff pr{\"a}dizierten sp{\"a}tere Erinnerung ausschlie{\ss}lich f{\"u}r Bilder, die zusammen mit positivem Feedback pr{\"a}sentiert wurden. {\"U}bereinstimmend mit fr{\"u}heren Studien erzeugte kurz verz{\"o}gertes Feedback (500 ms) gr{\"o}{\ss}ere FRNdiff-Amplituden als lang verz{\"o}gertes Feedback (6500 ms), wohingegen das umgekehrte Muster f{\"u}r FRNpeak-Amplituden gefunden wurde. Wie durch behaviorale Ma{\ss}e und EKP-Alt/Neu-Effekte belegt, st{\"a}rkte die Verarbeitung von positivem Feedback vor allem das vertrautheitsbasierte Erinnern der zeitgleich pr{\"a}sentierten Bilder, jedoch wirkten sich die zeitlichen Parameter der Feedbackpr{\"a}sentation nicht auf das Ged{\"a}chtnis aus, vermutlich weil eine Verarbeitung der Bilder nicht notwendig war, um das Feedback zum Lernen zu nutzen. Im zweiten Experiment wurde daher die Bildkategorie (Innenraum oder Landschaft) mit der Valenz des Feedbacks verkn{\"u}pft. Lang verz{\"o}gerte Feedbackbilder waren mit besserer Erinnerung und st{\"a}rkerer rekollektiver Verarbeitung assoziiert als solche, die mit kurzer Verz{\"o}gerung pr{\"a}sentiert worden waren. Die Feedbackverarbeitung, gemessen als FRNpeak-Amplitude, war geringer f{\"u}r lang verz{\"o}gerte Feedbackbilder, die anschlie{\ss}end erinnert wurden, als f{\"u}r solche, die nicht erinnert wurden. Dies legt nahe, dass die Verarbeitung von zeitlich verz{\"o}gertem Feedback und die Ged{\"a}chtnisenkodierung auf {\"a}hnliche neuronale Verarbeitungskapazit{\"a}ten zugreifen. Wie anhand von FRNdiff-Amplituden ersichtlich, beruhte die Evaluation von zeitlich kurz verz{\"o}gertem Feedback in starkem Ausma{\ss} auf dem prozeduralen Lernsystem. 
Eine erg{\"a}nzende, modellbasierte Analyse auf der Ebene einzelner Lerndurchg{\"a}nge wurde durchgef{\"u}hrt, um Modelle der funktionalen Bedeutsamkeit der FRN zu validieren. {\"U}bereinstimmend mit vorherigen Studien wurden durch die Feedbackverarbeitung hervorgerufene N170- und P300-Amplituden durch die zeitliche Verz{\"o}gerung des Feedbacks moduliert. Das dritte Experiment {\"u}berpr{\"u}fte die Hypothese, dass die mutma{\ss}liche Beteiligung von deklarativen Lernprozessen bei der Verarbeitung von verz{\"o}gertem Feedback durch die spontane Entwicklung expliziter Ergebniserwartungen w{\"a}hrend der Feedbackverz{\"o}gerung vermittelt wird. Eine Bedingung mit verz{\"o}gertem Feedback wurde mit einer Vorhersage-Bedingung kontrastiert, in der die Probanden in jedem Lerndurchgang die Kategorie des Feedbackbildes pr{\"a}dizierten. Die Erinnerung an die Feedbackbilder unterschied sich nicht zwischen den beiden Bedingungen. Der Effekt der nachfolgenden Erinnerung in den FRNpeak-Amplituden, der in Experiment 2 gefunden wurde, wurde in beiden Bedingungen repliziert, war jedoch in der Vorhersage-Bedingung st{\"a}rker ausgepr{\"a}gt. Wie durch EKP-Alt/Neu-Effekte belegt, waren negative Feedbackbilder, die die explizite Erwartung eines positiven Ergebnisses verletzten, mit einer st{\"a}rkeren rekollektiven Verarbeitung verkn{\"u}pft. Positive Bilder waren im Ged{\"a}chtnistest mit besonders vielen falsch positiven Ged{\"a}chtnisurteilen assoziiert, was mit einer Generalisierung des Belohnungswertes zu Bildern der gleichen Kategorie zusammenh{\"a}ngen k{\"o}nnte. Zusammengefasst zeigen die Ergebnisse dieser Dissertation, dass die Feedbackverarbeitung und die Ged{\"a}chtnisenkodierung auf mehreren Wegen interagieren k{\"o}nnen. Die zeitlichen Charakteristika der Feedbackpr{\"a}sentation, die Valenz des Feedbacks und explizite Ergebniserwartungen stellen wichtige Faktoren dar, die diese Interaktion beeinflussen.},
pubstate = {published},
type = {phdthesis}
}


Project:   A6

Shi, Wei

Addressing the data bottleneck in implicit discourse relation classification PhD Thesis

Saarland University, Saarbruecken, Germany, 2020.

When humans comprehend language, their interpretation consists of more than just the sum of the content of the sentences. Additional logic and semantic links (known as coherence relations or discourse relations) are inferred between sentences/clauses in the text. The identification of discourse relations is beneficial for various NLP applications such as question-answering, summarization, machine translation, information extraction, etc. Discourse relations are categorized into implicit and explicit discourse relations depending on whether there is an explicit discourse marker between the arguments. In this thesis, we mainly focus on the implicit discourse relation classification, given that with the explicit markers acting as informative cues, the explicit relations are relatively easier to identify for machines. The recent neural network-based approaches in particular suffer from insufficient training (and test) data. As shown in Chapter 3 of this thesis, we start out by showing to what extent the limited data size is a problem in implicit discourse relation classification and propose data augmentation methods with the help of cross-lingual data. And then we propose several approaches for better exploiting and encoding various types of existing data in the discourse relation classification task. Most of the existing machine learning methods train on sections 2-21 of the PDTB and test on section 23, which only includes a total of less than 800 implicit discourse relation instances. With the help of cross validation, we argue that the standard test section of the PDTB is too small to draw conclusions upon. With more test samples in the cross validation, we would come to very different conclusions about whether a feature is generally useful. Second, we propose a simple approach to automatically extract samples of implicit discourse relations from multilingual parallel corpus via back-translation. 
After back-translating from target languages, it is easy for the discourse parser to identify those examples that are originally implicit but explicit in the back-translations. Having those additional data in the training set, the experiments show significant improvements on different settings. Finally, having better encoding ability is also of crucial importance in terms of improving classification performance. We propose different methods including a sequence-to-sequence neural network and a memory component to help obtain a better representation of the arguments. We also show that having the correct next sentence is beneficial for the task within and across domains, with the help of the BERT (Devlin et al., 2019) model. When it comes to a new domain, it is beneficial to integrate external domain-specific knowledge. In Chapter 8, we show that with the entity-enhancement, the performance on BioDRB is improved significantly, compared with other BERT-based methods. In sum, the studies reported in this dissertation contribute to addressing the data bottleneck problem in implicit discourse relation classification and propose corresponding approaches that achieve 54.82% and 69.57% on PDTB and BioDRB respectively.


Wenn Menschen Sprache verstehen, besteht ihre Interpretation aus mehr als nur der Summe des Inhalts der Sätze. Zwischen Sätzen im Text werden zusätzliche logische und semantische Verknüpfungen (sogenannte Kohärenzrelationen oder Diskursrelationen) hergeleitet. Die Identifizierung von Diskursrelationen ist für verschiedene NLP-Anwendungen wie Frage-Antwort, Zusammenfassung, maschinelle Übersetzung, Informationsextraktion usw. von Vorteil. Diskursrelationen werden in implizite und explizite Diskursrelationen unterteilt, je nachdem, ob es einen expliziten Diskursmarker zwischen den Argumenten gibt. In dieser Arbeit konzentrieren wir uns hauptsächlich auf die Klassifizierung der impliziten Diskursrelationen, da die expliziten Marker als hilfreiche Hinweise dienen und die expliziten Beziehungen für Maschinen relativ leicht zu identifizieren sind. Es wurden verschiedene Ansätze vorgeschlagen, die bei der impliziten Diskursrelationsklassifikation beeindruckende Ergebnisse erzielt haben. Die meisten von ihnen leiden jedoch darunter, dass die Daten für auf neuronalen Netzen basierende Methoden unzureichend sind. In dieser Arbeit gehen wir zunächst auf das Problem begrenzter Daten bei dieser Aufgabe ein und schlagen dann Methoden zur Datenanreicherung mit Hilfe von sprachübergreifenden Daten vor. Zuletzt schlagen wir mehrere Methoden vor, um die Argumente aus verschiedenen Aspekten besser kodieren zu können. Die meisten der existierenden Methoden des maschinellen Lernens werden auf den Abschnitten 2-21 der PDTB trainiert und auf dem Abschnitt 23 getestet, der insgesamt nur weniger als 800 implizite Diskursrelationsinstanzen enthält. Mit Hilfe der Kreuzvalidierung argumentieren wir, dass der Standardtestausschnitt der PDTB zu klein ist, um daraus Schlussfolgerungen zu ziehen.
Mit mehr Teststichproben in der Kreuzvalidierung würden wir zu anderen Schlussfolgerungen darüber kommen, ob ein Merkmal für diese Aufgabe generell vorteilhaft ist oder nicht, insbesondere wenn wir einen relativ großen Labelsatz verwenden. Wenn wir nur unseren kleinen Standardtestsatz herausstellen, laufen wir Gefahr, falsche Schlüsse darüber zu ziehen, welche Merkmale hilfreich sind. Zweitens schlagen wir einen einfachen Ansatz zur automatischen Extraktion von Samples impliziter Diskursrelationen aus mehrsprachigen Parallelkorpora durch Rückübersetzung vor. Er ist durch den Explikationsprozess motiviert, wenn Menschen einen Text übersetzen. Nach der Rückübersetzung aus den Zielsprachen ist es für den Diskursparser leicht, diejenigen Beispiele zu identifizieren, die ursprünglich implizit, in den Rückübersetzungen aber explizit enthalten sind. Da diese zusätzlichen Daten im Trainingsset enthalten sind, zeigen die Experimente signifikante Verbesserungen in verschiedenen Situationen. Wir verwenden zunächst nur französisch-englische Paare und haben keine Kontrolle über die Qualität und konzentrieren uns meist auf die satzinternen Relationen. Um diese Fragen in Angriff zu nehmen, erweitern wir die Idee später mit mehr Vorverarbeitungsschritten und mehr Sprachpaaren. Mit den Mehrheitsentscheidungen aus verschiedenen Sprachpaaren sind die gemappten impliziten Labels zuverlässiger. Schließlich ist auch eine bessere Kodierfähigkeit von entscheidender Bedeutung für die Verbesserung der Klassifizierungsleistung. Wir schlagen ein neues Modell vor, das aus einem Klassifikator und einem Sequenz-zu-Sequenz-Modell besteht. Neben der korrekten Vorhersage des Labels werden sie auch darauf trainiert, eine Repräsentation der Diskursrelationsargumente zu erzeugen, indem sie versuchen, die Argumente einschließlich eines geeigneten impliziten Konnektivs vorherzusagen. 
Die neuartige sekundäre Aufgabe zwingt die interne Repräsentation dazu, die Semantik der Relationsargumente vollständiger zu kodieren und eine feinkörnigere Klassifikation vorzunehmen. Um das allgemeine Wissen in Kontexten weiter zu erfassen, setzen wir auch ein Gedächtnisnetzwerk ein, um eine explizite Kontextrepräsentation von Trainingsbeispielen für Kontexte zu erhalten. Für jede Testinstanz erzeugen wir durch gewichtetes Lesen des Gedächtnisses einen Wissensvektor. Wir evaluieren das vorgeschlagene Modell unter verschiedenen Bedingungen und die Ergebnisse zeigen, dass das Modell mit dem Speichernetzwerk die Vorhersage von Diskursrelationen erleichtern kann, indem es Beispiele auswählt, die eine ähnliche semantische Repräsentation und Diskursrelationen aufweisen. Auch wenn ein besseres Verständnis, eine Kodierung und semantische Interpretation für die Aufgabe der impliziten Diskursrelationsklassifikation unerlässlich und nützlich sind, so leistet sie doch nur einen Teil der Arbeit. Ein guter impliziter Diskursrelationsklassifikator sollte sich auch der bevorstehenden Ereignisse, Ursachen, Folgen usw. bewusst sein, um die Diskurserwartung in die Satzdarstellungen zu kodieren. Mit Hilfe des kürzlich vorgeschlagenen BERT-Modells versuchen wir herauszufinden, ob es für die Aufgabe vorteilhaft ist, den richtigen nächsten Satz zu haben oder nicht. Die experimentellen Ergebnisse zeigen, dass das Entfernen der Aufgabe zur Vorhersage des nächsten Satzes die Leistung sowohl innerhalb der Domäne als auch domänenübergreifend stark beeinträchtigt. Die begrenzte Fähigkeit von BioBERT, domänenspezifisches Wissen, d.h. Entitätsinformationen, Entitätsbeziehungen etc. zu erlernen, motiviert uns, externes Wissen in die vortrainierten Sprachmodelle zu integrieren. 
Wir schlagen eine unüberwachte Methode vor, bei der Information-Retrieval- und Wissensgraphen-Techniken verwendet werden, mit der Annahme, dass, wenn zwei Instanzen ähnliche Entitäten in beiden relationalen Argumenten teilen, die Wahrscheinlichkeit groß ist, dass sie die gleiche oder eine ähnliche Diskursrelation haben. Der Ansatz erzielt vergleichbare Ergebnisse auf BioDRB, verglichen mit Baselinemodellen. Anschließend verwenden wir die extrahierten relevanten Entitäten zur Verbesserung des vortrainierten Modells K-BERT, um die Bedeutung der Argumente besser zu kodieren und das ursprüngliche BERT und BioBERT in der Genauigkeit um 6,5% bzw. 2% zu übertreffen. Zusammenfassend trägt diese Dissertation dazu bei, das Problem des Datenengpasses bei der impliziten Diskursrelationsklassifikation anzugehen, und schlägt entsprechende Ansätze in verschiedenen Aspekten vor, u.a. die Darstellung des begrenzten Datenproblems und der Risiken bei der Schlussfolgerung daraus; die Erfassung automatisch annotierter Daten durch den Explikationsprozess während der manuellen Übersetzung zwischen Englisch und anderen Sprachen; eine bessere Repräsentation von Diskursrelationsargumenten; Entity-Enhancement mit einer unüberwachten Methode und einem vortrainierten Sprachmodell.

@phdthesis{Shi_Diss_2020,
title = {Addressing the data bottleneck in implicit discourse relation classification},
author = {Wei Shi},
url = {https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/30143},
doi = {10.22028/D291-32711},
year = {2020},
date = {2020},
school = {Saarland University},
address = {Saarbruecken, Germany},
abstract = {When humans comprehend language, their interpretation consists of more than just the sum of the content of the sentences. Additional logic and semantic links (known as coherence relations or discourse relations) are inferred between sentences/clauses in the text. The identification of discourse relations is beneficial for various NLP applications such as question-answering, summarization, machine translation, information extraction, etc. Discourse relations are categorized into implicit and explicit discourse relations depending on whether there is an explicit discourse marker between the arguments. In this thesis, we mainly focus on the implicit discourse relation classification, given that with the explicit markers acting as informative cues, the explicit relations are relatively easier to identify for machines. The recent neural network-based approaches in particular suffer from insufficient training (and test) data. As shown in Chapter 3 of this thesis, we start out by showing to what extent the limited data size is a problem in implicit discourse relation classification and propose data augmentation methods with the help of cross-lingual data. And then we propose several approaches for better exploiting and encoding various types of existing data in the discourse relation classification task. Most of the existing machine learning methods train on sections 2-21 of the PDTB and test on section 23, which only includes a total of less than 800 implicit discourse relation instances. With the help of cross validation, we argue that the standard test section of the PDTB is too small to draw conclusions upon. With more test samples in the cross validation, we would come to very different conclusions about whether a feature is generally useful. Second, we propose a simple approach to automatically extract samples of implicit discourse relations from multilingual parallel corpus via back-translation. 
After back-translating from target languages, it is easy for the discourse parser to identify those examples that are originally implicit but explicit in the back-translations. Having those additional data in the training set, the experiments show significant improvements on different settings. Finally, having better encoding ability is also of crucial importance in terms of improving classification performance. We propose different methods including a sequence-to-sequence neural network and a memory component to help obtain a better representation of the arguments. We also show that having the correct next sentence is beneficial for the task within and across domains, with the help of the BERT (Devlin et al., 2019) model. When it comes to a new domain, it is beneficial to integrate external domain-specific knowledge. In Chapter 8, we show that with the entity-enhancement, the performance on BioDRB is improved significantly, compared with other BERT-based methods. In sum, the studies reported in this dissertation contribute to addressing the data bottleneck problem in implicit discourse relation classification and propose corresponding approaches that achieve 54.82% and 69.57% on PDTB and BioDRB respectively.


Wenn Menschen Sprache verstehen, besteht ihre Interpretation aus mehr als nur der Summe des Inhalts der S{\"a}tze. Zwischen S{\"a}tzen im Text werden zus{\"a}tzliche logische und semantische Verkn{\"u}pfungen (sogenannte Koh{\"a}renzrelationen oder Diskursrelationen) hergeleitet. Die Identifizierung von Diskursrelationen ist f{\"u}r verschiedene NLP-Anwendungen wie Frage-Antwort, Zusammenfassung, maschinelle {\"U}bersetzung, Informationsextraktion usw. von Vorteil. Diskursrelationen werden in implizite und explizite Diskursrelationen unterteilt, je nachdem, ob es einen expliziten Diskursmarker zwischen den Argumenten gibt. In dieser Arbeit konzentrieren wir uns haupts{\"a}chlich auf die Klassifizierung der impliziten Diskursrelationen, da die expliziten Marker als hilfreiche Hinweise dienen und die expliziten Beziehungen f{\"u}r Maschinen relativ leicht zu identifizieren sind. Es wurden verschiedene Ans{\"a}tze vorgeschlagen, die bei der impliziten Diskursrelationsklassifikation beeindruckende Ergebnisse erzielt haben. Die meisten von ihnen leiden jedoch darunter, dass die Daten f{\"u}r auf neuronalen Netzen basierende Methoden unzureichend sind. In dieser Arbeit gehen wir zun{\"a}chst auf das Problem begrenzter Daten bei dieser Aufgabe ein und schlagen dann Methoden zur Datenanreicherung mit Hilfe von sprach{\"u}bergreifenden Daten vor. Zuletzt schlagen wir mehrere Methoden vor, um die Argumente aus verschiedenen Aspekten besser kodieren zu k{\"o}nnen. Die meisten der existierenden Methoden des maschinellen Lernens werden auf den Abschnitten 2-21 der PDTB trainiert und auf dem Abschnitt 23 getestet, der insgesamt nur weniger als 800 implizite Diskursrelationsinstanzen enth{\"a}lt. Mit Hilfe der Kreuzvalidierung argumentieren wir, dass der Standardtestausschnitt der PDTB zu klein ist, um daraus Schlussfolgerungen zu ziehen. 
Mit mehr Teststichproben in der Kreuzvalidierung w{\"u}rden wir zu anderen Schlussfolgerungen dar{\"u}ber kommen, ob ein Merkmal f{\"u}r diese Aufgabe generell vorteilhaft ist oder nicht, insbesondere wenn wir einen relativ gro{\ss}en Labelsatz verwenden. Wenn wir nur unseren kleinen Standardtestsatz herausstellen, laufen wir Gefahr, falsche Schl{\"u}sse dar{\"u}ber zu ziehen, welche Merkmale hilfreich sind. Zweitens schlagen wir einen einfachen Ansatz zur automatischen Extraktion von Samples impliziter Diskursrelationen aus mehrsprachigen Parallelkorpora durch R{\"u}ck{\"u}bersetzung vor. Er ist durch den Explikationsprozess motiviert, wenn Menschen einen Text {\"u}bersetzen. Nach der R{\"u}ck{\"u}bersetzung aus den Zielsprachen ist es f{\"u}r den Diskursparser leicht, diejenigen Beispiele zu identifizieren, die urspr{\"u}nglich implizit, in den R{\"u}ck{\"u}bersetzungen aber explizit enthalten sind. Da diese zus{\"a}tzlichen Daten im Trainingsset enthalten sind, zeigen die Experimente signifikante Verbesserungen in verschiedenen Situationen. Wir verwenden zun{\"a}chst nur franz{\"o}sisch-englische Paare und haben keine Kontrolle {\"u}ber die Qualit{\"a}t und konzentrieren uns meist auf die satzinternen Relationen. Um diese Fragen in Angriff zu nehmen, erweitern wir die Idee sp{\"a}ter mit mehr Vorverarbeitungsschritten und mehr Sprachpaaren. Mit den Mehrheitsentscheidungen aus verschiedenen Sprachpaaren sind die gemappten impliziten Labels zuverl{\"a}ssiger. Schlie{\ss}lich ist auch eine bessere Kodierf{\"a}higkeit von entscheidender Bedeutung f{\"u}r die Verbesserung der Klassifizierungsleistung. Wir schlagen ein neues Modell vor, das aus einem Klassifikator und einem Sequenz-zu-Sequenz-Modell besteht. Neben der korrekten Vorhersage des Labels werden sie auch darauf trainiert, eine Repr{\"a}sentation der Diskursrelationsargumente zu erzeugen, indem sie versuchen, die Argumente einschlie{\ss}lich eines geeigneten impliziten Konnektivs vorherzusagen. 
Die neuartige sekund{\"a}re Aufgabe zwingt die interne Repr{\"a}sentation dazu, die Semantik der Relationsargumente vollst{\"a}ndiger zu kodieren und eine feink{\"o}rnigere Klassifikation vorzunehmen. Um das allgemeine Wissen in Kontexten weiter zu erfassen, setzen wir auch ein Ged{\"a}chtnisnetzwerk ein, um eine explizite Kontextrepr{\"a}sentation von Trainingsbeispielen f{\"u}r Kontexte zu erhalten. F{\"u}r jede Testinstanz erzeugen wir durch gewichtetes Lesen des Ged{\"a}chtnisses einen Wissensvektor. Wir evaluieren das vorgeschlagene Modell unter verschiedenen Bedingungen und die Ergebnisse zeigen, dass das Modell mit dem Speichernetzwerk die Vorhersage von Diskursrelationen erleichtern kann, indem es Beispiele ausw{\"a}hlt, die eine {\"a}hnliche semantische Repr{\"a}sentation und Diskursrelationen aufweisen. Auch wenn ein besseres Verst{\"a}ndnis, eine Kodierung und semantische Interpretation f{\"u}r die Aufgabe der impliziten Diskursrelationsklassifikation unerl{\"a}sslich und n{\"u}tzlich sind, so leistet sie doch nur einen Teil der Arbeit. Ein guter impliziter Diskursrelationsklassifikator sollte sich auch der bevorstehenden Ereignisse, Ursachen, Folgen usw. bewusst sein, um die Diskurserwartung in die Satzdarstellungen zu kodieren. Mit Hilfe des k{\"u}rzlich vorgeschlagenen BERT-Modells versuchen wir herauszufinden, ob es f{\"u}r die Aufgabe vorteilhaft ist, den richtigen n{\"a}chsten Satz zu haben oder nicht. Die experimentellen Ergebnisse zeigen, dass das Entfernen der Aufgabe zur Vorhersage des n{\"a}chsten Satzes die Leistung sowohl innerhalb der Dom{\"a}ne als auch dom{\"a}nen{\"u}bergreifend stark beeintr{\"a}chtigt. Die begrenzte F{\"a}higkeit von BioBERT, dom{\"a}nenspezifisches Wissen, d.h. Entit{\"a}tsinformationen, Entit{\"a}tsbeziehungen etc. zu erlernen, motiviert uns, externes Wissen in die vortrainierten Sprachmodelle zu integrieren. 
Wir schlagen eine un{\"u}berwachte Methode vor, bei der Information-Retrieval- und Wissensgraphen-Techniken verwendet werden, mit der Annahme, dass, wenn zwei Instanzen {\"a}hnliche Entit{\"a}ten in beiden relationalen Argumenten teilen, die Wahrscheinlichkeit gro{\ss} ist, dass sie die gleiche oder eine {\"a}hnliche Diskursrelation haben. Der Ansatz erzielt vergleichbare Ergebnisse auf BioDRB, verglichen mit Baselinemodellen. Anschlie{\ss}end verwenden wir die extrahierten relevanten Entit{\"a}ten zur Verbesserung des vortrainierten Modells K-BERT, um die Bedeutung der Argumente besser zu kodieren und das urspr{\"u}ngliche BERT und BioBERT in der Genauigkeit um 6,5% bzw. 2% zu {\"u}bertreffen. Zusammenfassend tr{\"a}gt diese Dissertation dazu bei, das Problem des Datenengpasses bei der impliziten Diskursrelationsklassifikation anzugehen, und schl{\"a}gt entsprechende Ans{\"a}tze in verschiedenen Aspekten vor, u.a. die Darstellung des begrenzten Datenproblems und der Risiken bei der Schlussfolgerung daraus; die Erfassung automatisch annotierter Daten durch den Explikationsprozess w{\"a}hrend der manuellen {\"U}bersetzung zwischen Englisch und anderen Sprachen; eine bessere Repr{\"a}sentation von Diskursrelationsargumenten; Entity-Enhancement mit einer un{\"u}berwachten Methode und einem vortrainierten Sprachmodell.},
pubstate = {published},
type = {phdthesis}
}


Project:   B2

Mosbach, Marius; Degaetano-Ortlieb, Stefania; Krielke, Marie-Pauline; Abdullah, Badr M.; Klakow, Dietrich

A Closer Look at Linguistic Knowledge in Masked Language Models: The Case of Relative Clauses in American English Inproceedings

Proceedings of the 28th International Conference on Computational Linguistics, pp. 771-787, 2020.

Transformer-based language models achieve high performance on various tasks, but we still lack understanding of the kind of linguistic knowledge they learn and rely on. We evaluate three models (BERT, RoBERTa, and ALBERT), testing their grammatical and semantic knowledge by sentence-level probing, diagnostic cases, and masked prediction tasks. We focus on relative clauses (in American English) as a complex phenomenon needing contextual information and antecedent identification to be resolved. Based on a naturalistic dataset, probing shows that all three models indeed capture linguistic knowledge about grammaticality, achieving high performance. Evaluation on diagnostic cases and masked prediction tasks considering fine-grained linguistic knowledge, however, shows pronounced model-specific weaknesses especially on semantic knowledge, strongly impacting models’ performance. Our results highlight the importance of (a) model comparison in evaluation task and (b) building up claims of model performance and the linguistic knowledge they capture beyond purely probing-based evaluations.

@inproceedings{Mosbach2020,
title = {A Closer Look at Linguistic Knowledge in Masked Language Models: The Case of Relative Clauses in American English},
author = {Marius Mosbach and Stefania Degaetano-Ortlieb and Marie-Pauline Krielke and Badr M. Abdullah and Dietrich Klakow},
url = {https://aclanthology.org/2020.coling-main.67/},
year = {2020},
date = {2020},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
pages = {771-787},
abstract = {Transformer-based language models achieve high performance on various tasks, but we still lack understanding of the kind of linguistic knowledge they learn and rely on. We evaluate three models (BERT, RoBERTa, and ALBERT), testing their grammatical and semantic knowledge by sentence-level probing, diagnostic cases, and masked prediction tasks. We focus on relative clauses (in American English) as a complex phenomenon needing contextual information and antecedent identification to be resolved. Based on a naturalistic dataset, probing shows that all three models indeed capture linguistic knowledge about grammaticality, achieving high performance. Evaluation on diagnostic cases and masked prediction tasks considering fine-grained linguistic knowledge, however, shows pronounced model-specific weaknesses especially on semantic knowledge, strongly impacting models’ performance. Our results highlight the importance of (a) model comparison in evaluation task and (b) building up claims of model performance and the linguistic knowledge they capture beyond purely probing-based evaluations.},
pubstate = {published},
type = {inproceedings}
}

Projects:   B1 B4 C4

Juzek, Tom; Krielke, Marie-Pauline; Teich, Elke

Exploring diachronic syntactic shifts with dependency length: the case of scientific English Inproceedings

Proceedings of the Fourth Workshop on Universal Dependencies (UDW 2020), Association for Computational Linguistics, pp. 109-119, Barcelona, Spain (Online), 2020.

We report on an application of universal dependencies for the study of diachronic shifts in syntactic usage patterns. Our focus is on the evolution of Scientific English in the Late Modern English period (ca. 1700-1900). Our data set is the Royal Society Corpus (RSC), comprising the full set of publications of the Royal Society of London between 1665 and 1996. Our starting assumption is that over time, Scientific English develops specific syntactic choice preferences that increase efficiency in (expert-to-expert) communication. The specific hypothesis we pursue in this paper is that changing syntactic choice preferences lead to greater dependency locality/dependency length minimization, which is associated with positive effects for the efficiency of human as well as computational linguistic processing. As a basis for our measurements, we parsed the RSC using Stanford CoreNLP. Overall, we observe a decrease in dependency length, with long dependency structures becoming less frequent and short dependency structures becoming more frequent over time, notably pertaining to the nominal phrase, thus marking an overall push towards greater communicative efficiency.
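The dependency-length measure underlying the study can be sketched in a few lines; this is an illustration only, not the authors' code, and the helper names and the 0-as-root head encoding (CoNLL-U convention) are assumptions:

```python
# Tokens are 1-indexed; heads[i] gives the head index of token i+1,
# with 0 marking the root, as in the CoNLL-U HEAD column.

def dependency_lengths(heads):
    """Linear distance between each dependent and its head (root excluded)."""
    return [abs((i + 1) - h) for i, h in enumerate(heads) if h != 0]

def mean_dependency_length(heads):
    """Average dependency length of one sentence."""
    lengths = dependency_lengths(heads)
    return sum(lengths) / len(lengths)
```

For "the cat sleeps" with heads [2, 3, 0], both dependencies span a single word, so the mean dependency length is 1.0; diachronic shortening would show up as this average decreasing over corpus time slices.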

@inproceedings{juzek-etal-2020-exploring,
title = {Exploring diachronic syntactic shifts with dependency length: the case of scientific English},
author = {Tom Juzek and Marie-Pauline Krielke and Elke Teich},
url = {https://www.aclweb.org/anthology/2020.udw-1.13},
year = {2020},
date = {2020},
booktitle = {Proceedings of the Fourth Workshop on Universal Dependencies (UDW 2020)},
pages = {109-119},
publisher = {Association for Computational Linguistics},
address = {Barcelona, Spain (Online)},
abstract = {We report on an application of universal dependencies for the study of diachronic shifts in syntactic usage patterns. Our focus is on the evolution of Scientific English in the Late Modern English period (ca. 1700-1900). Our data set is the Royal Society Corpus (RSC), comprising the full set of publications of the Royal Society of London between 1665 and 1996. Our starting assumption is that over time, Scientific English develops specific syntactic choice preferences that increase efficiency in (expert-to-expert) communication. The specific hypothesis we pursue in this paper is that changing syntactic choice preferences lead to greater dependency locality/dependency length minimization, which is associated with positive effects for the efficiency of human as well as computational linguistic processing. As a basis for our measurements, we parsed the RSC using Stanford CoreNLP. Overall, we observe a decrease in dependency length, with long dependency structures becoming less frequent and short dependency structures becoming more frequent over time, notably pertaining to the nominal phrase, thus marking an overall push towards greater communicative efficiency.},
pubstate = {published},
type = {inproceedings}
}

Project:   B1

Teich, Elke

Language variation and change: A communicative perspective Miscellaneous

Jahrestagung der Deutschen Gesellschaft für Sprachwissenschaft, DGfS 2020, Hamburg, 2020.

It is widely acknowledged that language use and language structure are closely interlinked, linguistic structure emerging from language use (Bybee & Hopper 2001). Language use, in turn, is characterized by variation; in fact, speakers’ ability to adapt to changing contexts is a prerequisite for language to be functional (Weinreich et al. 1968).

Taking the perspective of rational communication, in my talk I will revisit some core questions of diachronic linguistic change: Why does a change happen? Which features are involved in change? How does change proceed? What are the effects of change? Recent work on online human language use reveals that speakers try to optimize their linguistic productions by encoding their messages with uniform information density (see Crocker et al. 2016 for an overview). Here, a major determinant in linguistic choice is predictability in context. Predictability in context is commonly represented by information content measured in bits (Shannon information): The more predictable a linguistic unit (e.g. word) is in a given context, the fewer bits are needed for encoding and the shorter its linguistic encoding may be (and vice versa, the more “surprising” a unit is in a given context, the more bits are needed for encoding and the more explicit its encoding tends to be). In this view, one major function of linguistic variation is to modulate information content so as to optimize message transmission.

In my talk, I apply this perspective to diachronic linguistic change. I show that speakers’ continuous adaptation to changing contextual conditions pushes towards linguistic innovation and results in temporary, high levels of expressivity, but the concern for maintaining communicative function pulls towards convergence and results in conventionalization. The diachronic scenario I discuss is mid-term change (200–250 years) in English in the late Modern period, focusing on the discourse domain of science (Degaetano-Ortlieb & Teich 2019). In terms of methods, I use computational language models to estimate predictability in context; and to assess diachronic change, I apply selected measures of information content, including entropy and surprisal.
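The Shannon information measure referred to in the abstract can be written down directly; a minimal sketch, assuming the in-context probability comes from some language model:

```python
import math

def surprisal_bits(p):
    """Shannon information of a unit with in-context probability p, in bits."""
    return -math.log2(p)

# A unit predicted with probability 0.5 carries 1 bit of information;
# the less predictable the unit, the more bits its encoding carries.
```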

@miscellaneous{Teich2020a,
title = {Language variation and change: A communicative perspective},
author = {Elke Teich},
url = {https://www.zfs.uni-hamburg.de/en/dgfs2020/programm/keynotes/elke-teich.html},
year = {2020},
date = {2020-11-04},
booktitle = {Jahrestagung der Deutschen Gesellschaft f{\"u}r Sprachwissenschaft, DGfS 2020},
address = {Hamburg},
abstract = {It is widely acknowledged that language use and language structure are closely interlinked, linguistic structure emerging from language use (Bybee & Hopper 2001). Language use, in turn, is characterized by variation; in fact, speakers’ ability to adapt to changing contexts is a prerequisite for language to be functional (Weinreich et al. 1968). Taking the perspective of rational communication, in my talk I will revisit some core questions of diachronic linguistic change: Why does a change happen? Which features are involved in change? How does change proceed? What are the eff ects of change? Recent work on online human language use reveals that speakers try to optimize their linguistic productions by encoding their messages with uniform information density (see Crocker et al. 2016 for an overview). Here, a major determinant in linguistic choice is predictability in context. Predictability in context is commonly represented by information content measured in bits (Shannon information): The more predictable a linguistic unit (e.g. word) is in a given context, the fewer bits are needed for encoding and the shorter its linguistic encoding may be (and vice versa, the more “surprising” a unit is in a given context, the more bits are needed for encoding and the more explicit its encoding tends to be). In this view, one major function of linguistic variation is to modulate information content so as to optimize message transmission. In my talk, I apply this perspective to diachronic linguistic change. I show that speakers’ continuous adaptation to changing contextual conditions pushes towards linguistic innovation and results in temporary, high levels of expressivity, but the concern for maintaining communicative function pulls towards convergence and results in conventionalization. 
The diachronic scenario I discuss is mid-term change (200–250 years) in English in the late Modern period, focusing on the discourse domain of science (Degaetano-Ortlieb & Teich 2019). In terms of methods, I use computational language models to estimate predictability in context; and to assess diachronic change, I apply selected measures of information content, including entropy and surprisal.},
note = {Key note},
pubstate = {published},
type = {miscellaneous}
}

Project:   B1

Ortmann, Katrin; Dipper, Stefanie

Automatic Orality Identification in Historical Texts Inproceedings

Proceedings of The 12th Language Resources and Evaluation Conference (LREC), European Language Resources Association, pp. 1293-1302, Marseille, France, 2020.

Independently of the medial representation (written/spoken), language can exhibit characteristics of conceptual orality or literacy, which mainly manifest themselves on the lexical or syntactic level. In this paper we aim at automatically identifying conceptually-oral historical texts, with the ultimate goal of gaining knowledge about spoken data of historical time stages.

We apply a set of general linguistic features that have been proven to be effective for the classification of modern language data to historical German texts from various registers. Many of the features turn out to be equally useful in determining the conceptuality of historical data as they are for modern data, especially the frequency of different types of pronouns and the ratio of verbs to nouns. Other features like sentence length, particles or interjections point to peculiarities of the historical data and reveal problems with the adoption of a feature set that was developed on modern language data.
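One of the features named above, the verb-to-noun ratio, can be sketched as follows (illustrative only; the function name and UD-style POS tags are assumptions, not the authors' implementation):

```python
from collections import Counter

def verb_noun_ratio(pos_tags):
    """Ratio of verbs to nouns in a POS-tagged text (UD-style tags assumed)."""
    counts = Counter(pos_tags)
    nouns = counts["NOUN"]
    return counts["VERB"] / nouns if nouns else float("inf")
```

A higher ratio would, on this feature alone, point towards conceptual orality.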

@inproceedings{Ortmann2020,
title = {Automatic Orality Identification in Historical Texts},
author = {Katrin Ortmann and Stefanie Dipper},
url = {https://www.aclweb.org/anthology/2020.lrec-1.162/},
year = {2020},
date = {2020},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference (LREC)},
pages = {1293-1302},
publisher = {European Language Resources Association},
address = {Marseille, France},
abstract = {Independently of the medial representation (written/spoken), language can exhibit characteristics of conceptual orality or literacy, which mainly manifest themselves on the lexical or syntactic level. In this paper we aim at automatically identifying conceptually-oral historical texts, with the ultimate goal of gaining knowledge about spoken data of historical time stages. We apply a set of general linguistic features that have been proven to be effective for the classification of modern language data to historical German texts from various registers. Many of the features turn out to be equally useful in determining the conceptuality of historical data as they are for modern data, especially the frequency of different types of pronouns and the ratio of verbs to nouns. Other features like sentence length, particles or interjections point to peculiarities of the historical data and reveal problems with the adoption of a feature set that was developed on modern language data.},
pubstate = {published},
type = {inproceedings}
}

Project:   C6

Stenger, Irina; Avgustinova, Tania

How intelligible is spoken Bulgarian for Russian native speakers in an intercomprehension scenario? Inproceedings

Micheva, Vanya et al. (Ed.): Proceedings of the International Annual Conference of the Institute for Bulgarian Language, 2, pp. 142-151, Sofia, Bulgaria, 2020.

In a web-based experiment, Bulgarian audio stimuli in the form of recorded isolated words are presented to Russian native speakers who are required to write a suitable Russian translation. The degree of intelligibility, as revealed by the cognate guessing task, is relatively high for this pair of languages. We correlate the obtained intercomprehension scores with established linguistic factors in order to determine their influence on the cross-linguistic spoken word recognition. A detailed error analysis focuses on sound correspondences that cause translation problems in such an intercomprehension scenario.

@inproceedings{Stenger2020b,
title = {How intelligible is spoken Bulgarian for Russian native speakers in an intercomprehension scenario?},
author = {Irina Stenger and Tania Avgustinova},
editor = {Vanya Micheva et al.},
year = {2020},
date = {2020},
booktitle = {Proceedings of the International Annual Conference of the Institute for Bulgarian Language},
pages = {142-151},
address = {Sofia, Bulgaria},
abstract = {In a web-based experiment, Bulgarian audio stimuli in the form of recorded isolated words are presented to Russian native speakers who are required to write a suitable Russian translation. The degree of intelligibility, as revealed by the cognate guessing task, is relatively high for this pair of languages. We correlate the obtained intercomprehension scores with established linguistic factors in order to determine their influence on the cross-linguistic spoken word recognition. A detailed error analysis focuses on sound correspondences that cause translation problems in such an intercomprehension scenario.},
pubstate = {published},
type = {inproceedings}
}

Project:   C4

Avgustinova, Tania; Stenger, Irina

Russian-Bulgarian mutual intelligibility in light of linguistic and statistical models of Slavic receptive multilingualism [Russko-bolgarskaja vzaimoponjatnost’ v svete lingvističeskich i statističeskich modelej slavjanskoj receptivnoj mnogojazyčnosti] Book Chapter

Marti, Roland; Pognan, Patrice; Schlamberger Brezar, Mojca (Ed.): University Press, Faculty of Arts, pp. 85-99, Ljubljana, Slovenia, 2020.

Computational modelling of the observed mutual intelligibility of Slavic languages unavoidably requires systematic integration of classical Slavistics knowledge from comparative historical grammar and traditional contrastive description of language pairs. The phenomenon of intercomprehension is quite intuitive: speakers of a given language L1 understand another closely related language (variety) L2 without being able to use the latter productively, i.e. for speaking or writing.

This specific mode of using the human linguistic competence manifests itself as receptive multilingualism. The degree of mutual understanding of genetically closely related languages, such as Bulgarian and Russian, corresponds to objectively measurable distances at different linguistic levels. The common Slavic basis and the comparative-synchronous perspective allow us to reveal Bulgarian-Russian linguistic affinity with regard to spelling, vocabulary and grammar.

@inbook{Avgustinova2020,
title = {Russian-Bulgarian mutual intelligibility in light of linguistic and statistical models of Slavic receptive multilingualism [Russko-bolgarskaja vzaimoponjatnost’ v svete lingvisti{\v{c}}eskich i statisti{\v{c}}eskich modelej slavjanskoj receptivnoj mnogojazy{\v{c}}nosti]},
author = {Tania Avgustinova and Irina Stenger},
editor = {Roland Marti and Patrice Pognan and Mojca Schlamberger Brezar},
url = {https://e-knjige.ff.uni-lj.si/znanstvena-zalozba/catalog/view/226/326/5284-1},
year = {2020},
date = {2020},
pages = {85-99},
publisher = {University Press, Faculty of Arts},
address = {Ljubljana, Slovenia},
abstract = {Computational modelling of the observed mutual intelligibility of Slavic languages unavoidably requires systematic integration of classical Slavistics knowledge from comparative historical grammar and traditional contrastive description of language pairs. The phenomenon of intercomprehension is quite intuitive: speakers of a given language L1 understand another closely related language (variety) L2 without being able to use the latter productively, i.e. for speaking or writing. This specific mode of using the human linguistic competence manifests itself as receptive multilingualism. The degree of mutual understanding of genetically closely related languages, such as Bulgarian and Russian, corresponds to objectively measurable distances at different linguistic levels. The common Slavic basis and the comparative-synchronous perspective allow us to reveal Bulgarian-Russian linguistic affinity with regard to spelling, vocabulary and grammar.},
pubstate = {published},
type = {inbook}
}

Project:   C4

Stenger, Irina; Avgustinova, Tania

Visual vs. auditory perception of Bulgarian stimuli by Russian native speakers Inproceedings

Selegej, Vladimir P. et al. (Ed.): Computational Linguistics and Intellectual Technologies: Papers from the Annual International Conference ‘Dialogue’, pp. 684-695, 2020.

This study contributes to a better understanding of receptive multilingualism by determining similarities and differences in successful processing of written and spoken cognate words in an unknown but (closely) related language. We investigate two Slavic languages with regard to their mutual intelligibility. The current focus is on the recognition of isolated Bulgarian words by Russian native speakers in a cognate guessing task, considering both written and audio stimuli.

The experimentally obtained intercomprehension scores show a generally high degree of intelligibility of Bulgarian cognates to Russian subjects, as well as processing difficulties in case of visual vs. auditory perception. In search of an explanation, we examine the linguistic factors that can contribute to various degrees of written and spoken word intelligibility. The intercomprehension scores obtained in the online word translation experiments are correlated with (i) the identical and mismatched correspondences on the orthographic and phonetic level, (ii) the word length of the stimuli, and (iii) the frequency of Russian cognates. Additionally we validate two measuring methods: the Levenshtein distance and the word adaptation surprisal as potential predictors of the word intelligibility in reading and oral intercomprehension.
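Of the two measuring methods validated above, the Levenshtein distance is a standard algorithm and can be sketched directly; a minimal dynamic-programming version for illustration, not the authors' implementation:

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))  # distances from "" to each prefix of b
    for i, ca in enumerate(a, 1):
        curr = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete ca
                            curr[j - 1] + 1,            # insert cb
                            prev[j - 1] + (ca != cb)))  # substitute or match
        prev = curr
    return prev[-1]
```

In intercomprehension studies the raw distance is typically normalized by word length before being correlated with intelligibility scores.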

@inproceedings{Stenger2020c,
title = {Visual vs. auditory perception of Bulgarian stimuli by Russian native speakers},
author = {Irina Stenger and Tania Avgustinova},
editor = {Vladimir P. Selegej et al.},
url = {http://www.dialog-21.ru/media/4962/stengeriplusavgustinovat-045.pdf},
year = {2020},
date = {2020},
booktitle = {Computational Linguistics and Intellectual Technologies: Papers from the Annual International Conference ‘Dialogue’},
pages = {684-695},
abstract = {This study contributes to a better understanding of receptive multilingualism by determining similarities and differences in successful processing of written and spoken cognate words in an unknown but (closely) related language. We investigate two Slavic languages with regard to their mutual intelligibility. The current focus is on the recognition of isolated Bulgarian words by Russian native speakers in a cognate guessing task, considering both written and audio stimuli. The experimentally obtained intercomprehension scores show a generally high degree of intelligibility of Bulgarian cognates to Russian subjects, as well as processing difficulties in case of visual vs. auditory perception. In search of an explanation, we examine the linguistic factors that can contribute to various degrees of written and spoken word intelligibility. The intercomprehension scores obtained in the online word translation experiments are correlated with (i) the identical and mismatched correspondences on the orthographic and phonetic level, (ii) the word length of the stimuli, and (iii) the frequency of Russian cognates. Additionally we validate two measuring methods: the Levenshtein distance and the word adaptation surprisal as potential predictors of the word intelligibility in reading and oral intercomprehension.},
pubstate = {published},
type = {inproceedings}
}

Project:   C4

Avgustinova, Tania; Jágrová, Klára; Stenger, Irina

The INCOMSLAV Platform: Experimental Website with Integrated Methods for Measuring Linguistic Distances and Asymmetries in Receptive Multilingualism Inproceedings

Fiumara, James; Cieri, Christopher; Liberman, Mark; Callison-Burch, Chris (Ed.): LREC 2020 Workshop Language Resources and Evaluation Conference 11-16 May 2020, Citizen Linguistics in Language Resource Development (CLLRD 2020), European Language Resources Association, pp. 483-500, 2020.

We report on a web-based resource for conducting intercomprehension experiments with native speakers of Slavic languages and present our methods for measuring linguistic distances and asymmetries in receptive multilingualism. Through a website which serves as a platform for online testing, a large number of participants with different linguistic backgrounds can be targeted. A statistical language model is used to measure information density and to gauge how language users master various degrees of (un)intelligibility. The key idea is that intercomprehension should be better when the model adapted for understanding the unknown language exhibits relatively low average distance and surprisal. All obtained intelligibility scores together with distance and asymmetry measures for the different language pairs and processing directions are made available as an integrated online resource in the form of a Slavic intercomprehension matrix (SlavMatrix).

@inproceedings{Avgustinova2020a,
title = {The INCOMSLAV Platform: Experimental Website with Integrated Methods for Measuring Linguistic Distances and Asymmetries in Receptive Multilingualism},
author = {Tania Avgustinova and Kl{\'a}ra J{\'a}grov{\'a} and Irina Stenger},
editor = {James Fiumara and Christopher Cieri and Mark Liberman and Chris Callison-Burch},
url = {https://aclanthology.org/2020.cllrd-1.6/},
year = {2020},
date = {2020},
booktitle = {LREC 2020 Workshop Language Resources and Evaluation Conference 11-16 May 2020, Citizen Linguistics in Language Resource Development (CLLRD 2020)},
pages = {483-500},
publisher = {European Language Resources Association},
abstract = {We report on a web-based resource for conducting intercomprehension experiments with native speakers of Slavic languages and present our methods for measuring linguistic distances and asymmetries in receptive multilingualism. Through a website which serves as a platform for online testing, a large number of participants with different linguistic backgrounds can be targeted. A statistical language model is used to measure information density and to gauge how language users master various degrees of (un)intelligibility. The key idea is that intercomprehension should be better when the model adapted for understanding the unknown language exhibits relatively low average distance and surprisal. All obtained intelligibility scores together with distance and asymmetry measures for the different language pairs and processing directions are made available as an integrated online resource in the form of a Slavic intercomprehension matrix (SlavMatrix).},
pubstate = {published},
type = {inproceedings}
}

Project:   C4

Stenger, Irina; Jágrová, Klára; Fischer, Andrea; Avgustinova, Tania

“Reading Polish with Czech Eyes” or “How Russian Can a Bulgarian Text Be?”: Orthographic Differences as an Experimental Variable in Slavic Intercomprehension Incollection

Radeva-Bork, Teodora; Kosta, Peter (Ed.): Current Developments in Slavic Linguistics. Twenty Years After (based on selected papers from FDSL 11), Peter Lang, pp. 483-500, 2020.

@incollection{Stenger2020,
title = {“Reading Polish with Czech Eyes” or “How Russian Can a Bulgarian Text Be?”: Orthographic Differences as an Experimental Variable in Slavic Intercomprehension},
author = {Irina Stenger and Kl{\'a}ra J{\'a}grov{\'a} and Andrea Fischer and Tania Avgustinova},
editor = {Teodora Radeva-Bork and Peter Kosta},
url = {https://www.peterlang.com/view/title/19540},
doi = {https://doi.org/10.3726/978-3-653-07147-4},
year = {2020},
date = {2020},
booktitle = {Current Developments in Slavic Linguistics. Twenty Years After (based on selected papers from FDSL 11)},
pages = {483-500},
publisher = {Peter Lang},
pubstate = {published},
type = {incollection}
}

Project:   C4

Jachmann, Torsten

The immediate influence of speaker gaze on situated speech comprehension: evidence from multiple ERP components PhD Thesis

Saarland University, Saarbruecken, Germany, 2020.

This thesis presents results from three ERP experiments on the influence of speaker gaze on listeners’ sentence comprehension with focus on the utilization of speaker gaze as part of the communicative signal. The first two experiments investigated whether speaker gaze was utilized in situated communication to form expectations about upcoming referents in an unfolding sentence. Participants were presented with a face performing gaze actions toward three objects surrounding it time aligned to utterances that compared two of the three objects.

Participants were asked to judge whether the sentence they heard was true given the provided scene. Gaze cues preceded the naming of the corresponding object by 800ms. The gaze cue preceding the mentioning of the second object was manipulated such that it was either Congruent, Incongruent or Uninformative (Averted toward an empty position in experiment 1 and Mutual (redirected toward the listener) in Experiment 2). The results showed that speaker gaze was used to form expectations about the unfolding sentence indicated by three observed ERP components that index different underlying mechanisms of language comprehension: an increased Phonological Mapping Negativity (PMN) was observed when an unexpected (Incongruent) or unpredictable (Uninformative) phoneme is encountered. The retrieval of a referent’s semantics was indexed by an N400 effect in response to referents following both Incongruent and Uninformative gaze. Additionally, an increased P600 response was present only for preceding Incongruent gaze, indexing the revision process of the mental representation of the situation. The involvement of these mechanisms has been supported by the findings of the third experiment, in which linguistic content was presented to serve as a predictive cue for subsequent speaker gaze. In this experiment the sentence structure enabled participants to anticipate upcoming referents based on the preceding linguistic content. Thus, gaze cues preceding the mentioning of the referent could also be anticipated.

The results showed the involvement of the same mechanisms as in the first two experiments on the referent itself, only when preceding gaze was absent. In the presence of object-directed gaze, while there were no longer significant effects on the referent itself, effects of semantic retrieval (N400) and integration with sentence meaning (P3b) were found on the gaze cue. Effects in the P3b (Gaze) and P600 (Referent) time-window further provided support for the presence of a mechanism of monitoring of the mental representation of the situation that subsumes the integration into that representation: A positive deflection was found whenever the communicative signal completed the mental representation such that an evaluation of that representation was possible. Taken together, the results provide support for the view that speaker gaze, in situated communication, is interpreted as part of the communicative signal and incrementally used to inform the mental representation of the situation simultaneously with the linguistic signal and that the mental representation is utilized to generate expectations about upcoming referents in an unfolding utterance.

@phdthesis{Jachmann2020,
title = {The immediate influence of speaker gaze on situated speech comprehension: evidence from multiple ERP components},
author = {Torsten Jachmann},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:291--ds-313090},
doi = {https://doi.org/10.22028/D291-31309},
year = {2020},
date = {2020},
school = {Saarland University},
address = {Saarbruecken, Germany},
abstract = {This thesis presents results from three ERP experiments on the influence of speaker gaze on listeners’ sentence comprehension with focus on the utilization of speaker gaze as part of the communicative signal. The first two experiments investigated whether speaker gaze was utilized in situated communication to form expectations about upcoming referents in an unfolding sentence. Participants were presented with a face performing gaze actions toward three objects surrounding it time aligned to utterances that compared two of the three objects. Participants were asked to judge whether the sentence they heard was true given the provided scene. Gaze cues preceded the naming of the corresponding object by 800ms. The gaze cue preceding the mentioning of the second object was manipulated such that it was either Congruent, Incongruent or Uninformative (Averted toward an empty position in experiment 1 and Mutual (redirected toward the listener) in Experiment 2). The results showed that speaker gaze was used to form expectations about the unfolding sentence indicated by three observed ERP components that index different underlying mechanisms of language comprehension: an increased Phonological Mapping Negativity (PMN) was observed when an unexpected (Incongruent) or unpredictable (Uninformative) phoneme is encountered. The retrieval of a referent’s semantics was indexed by an N400 effect in response to referents following both Incongruent and Uninformative gaze. Additionally, an increased P600 response was present only for preceding Incongruent gaze, indexing the revision process of the mental representation of the situation. The involvement of these mechanisms has been supported by the findings of the third experiment, in which linguistic content was presented to serve as a predictive cue for subsequent speaker gaze. In this experiment the sentence structure enabled participants to anticipate upcoming referents based on the preceding linguistic content. 
Thus, gaze cues preceding the mentioning of the referent could also be anticipated. The results showed the involvement of the same mechanisms as in the first two experiments on the referent itself, only when preceding gaze was absent. In the presence of object-directed gaze, while there were no longer significant effects on the referent itself, effects of semantic retrieval (N400) and integration with sentence meaning (P3b) were found on the gaze cue. Effects in the P3b (Gaze) and P600 (Referent) time-window further provided support for the presence of a mechanism of monitoring of the mental representation of the situation that subsumes the integration into that representation: A positive deflection was found whenever the communicative signal completed the mental representation such that an evaluation of that representation was possible. Taken together, the results provide support for the view that speaker gaze, in situated communication, is interpreted as part of the communicative signal and incrementally used to inform the mental representation of the situation simultaneously with the linguistic signal and that the mental representation is utilized to generate expectations about upcoming referents in an unfolding utterance.},
pubstate = {published},
type = {phdthesis}
}


Project:   C3

Meier, David; Andreeva, Bistra

Einflussfaktoren auf die Wahrnehmung von Prominenz im natürlichen Dialog Inproceedings

Elektronische Sprachsignalverarbeitung 2020, Tagungsband der 31. Konferenz, pp. 257-264, Magdeburg, 2020.

Turnbull et al. [1] find that several competing factors affect the perception of prosodic prominence in isolated adjective-noun pairs, namely phonology, discourse context, and knowledge about the discourse. The present paper aims to investigate the relative influence of evoked focus (narrow contrastive vs. broad contrastive) and accentuation (accented vs. unaccented) on the perception of prominence, and to examine whether the concepts presented in Turnbull et al. can be reproduced in a setting more comparable to natural dialogue. For the study, 144 sentences realized by a single male speaker were spliced together such that a semantic contrast arises either on the noun in question or on the adjective. The metrically strong syllables of the adjective or the noun were accented either in accordance with the focus structure or contrary to expectation. The results show that accentuation has a greater influence on prominence perception than the focus condition, which is in line with the findings of Turnbull et al. Moreover, adjectives are consistently rated as more prominent than nouns in comparable contexts. However, extending the discourse context and the background information available to participants had only negligible effects in the experimental setup presented here.

@inproceedings{Meier2020,
title = {Einflussfaktoren auf die Wahrnehmung von Prominenz im nat{\"u}rlichen Dialog},
author = {David Meier and Bistra Andreeva},
url = {https://www.essv.de/paper.php?id=465},
year = {2020},
date = {2020},
booktitle = {Elektronische Sprachsignalverarbeitung 2020, Tagungsband der 31. Konferenz},
pages = {257-264},
address = {Magdeburg},
abstract = {Turnbull et al. [1] stellen fest, dass sich auf die Wahrnehmung der prosodischen Prominenz von isolierten Adjektiv-Nomen-Paaren mehrere konkurrierende Faktoren auswirken, n{\"a}mlich die Phonologie, der Diskurskontext und das Wissen {\"u}ber den Diskurs. Der vorliegende Beitrag hat das Ziel, den relativen Einfluss der evozierten Fokussierung (eng kontrastiv vs. weit kontrastiv) und der Akzentuierung (akzentuiert vs. nicht akzentuiert) auf die Wahrnehmung von Prominenz zu untersuchen und zu {\"u}berpr{\"u}fen, ob die in Turnbull et al. vorgestellten Konzepte in einer Umgebung reproduzierbar sind, die eher mit einem nat{\"u}rlichsprachlichen Dialog vergleichbar ist. F{\"u}r die Studie wurden 144 realisierte S{\"a}tze eines einzelnen m{\"a}nnlichen Sprechers so zusammengeschnitten, dass ein semantischer Kontrast entweder auf dem betreffenden Nomen oder auf dem Adjektiv entsteht. Die metrisch starken Silben des Adjektivs oder des Nomens waren entweder entsprechend der Fokusstruktur oder gegen Erwartung akzentuiert. Die Ergebnisse zeigen, dass die Akzentuierung einen gr{\"o}{\ss}eren Einfluss auf die Prominenzwahrnehmung als die Fokusbedingung hat, was im Einklang mit den Ergebnissen von Turnbull et al. ist. Adjektive werden zudem konsequent als prominenter eingestuft als Nomen in vergleichbaren Kontexten. Eine Erweiterung des Diskurskontextes und der Hintergrundinformationen, die dem Versuchsteilnehmer zur Verf{\"u}gung standen, haben in dem hier vorgestellten Versuchsaufbau allerdings nur vernachl{\"a}ssigbare Effekte.},
pubstate = {published},
type = {inproceedings}
}


Project:   C1
