Publications

Scholman, Merel; Rohde, Hannah; Demberg, Vera

"On the one hand" as a cue to anticipate upcoming discourse structure Journal Article

Journal of Memory and Language, 97, pp. 47-60, 2017.

Research has shown that people anticipate upcoming linguistic content, but most work to date has focused on relatively short-range expectation-driven processes within the current sentence or between adjacent sentences. We use the discourse marker On the one hand to test whether comprehenders maintain expectations regarding upcoming content in discourse representations that span multiple sentences. Three experiments show that comprehenders anticipate more than just On the other hand; rather, they keep track of embedded constituents and establish non-local dependencies. Our results show that comprehenders disprefer a subsequent contrast marked with On the other hand when a passage has already provided intervening content that establishes an appropriate contrast with On the one hand. Furthermore, comprehenders maintain their expectation for an upcoming contrast across intervening material, even if the embedded constituent itself contains contrast. The results are taken to support expectation-driven models of processing in which comprehenders posit and maintain structural representations of discourse structure.

@article{Merel2017,
title = {"On the one hand" as a cue to anticipate upcoming discourse structure},
author = {Merel Scholman and Hannah Rohde and Vera Demberg},
url = {https://www.sciencedirect.com/science/article/pii/S0749596X17300566},
year = {2017},
date = {2017},
journal = {Journal of Memory and Language},
pages = {47-60},
volume = {97},
abstract = {

Research has shown that people anticipate upcoming linguistic content, but most work to date has focused on relatively short-range expectation-driven processes within the current sentence or between adjacent sentences. We use the discourse marker On the one hand to test whether comprehenders maintain expectations regarding upcoming content in discourse representations that span multiple sentences. Three experiments show that comprehenders anticipate more than just On the other hand; rather, they keep track of embedded constituents and establish non-local dependencies. Our results show that comprehenders disprefer a subsequent contrast marked with On the other hand when a passage has already provided intervening content that establishes an appropriate contrast with On the one hand. Furthermore, comprehenders maintain their expectation for an upcoming contrast across intervening material, even if the embedded constituent itself contains contrast. The results are taken to support expectation-driven models of processing in which comprehenders posit and maintain structural representations of discourse structure.

},
pubstate = {published},
type = {article}
}

Project:   B2

Scholman, Merel; Demberg, Vera

Examples and specifications that prove a point: Distinguishing between elaborative and argumentative discourse relations Journal Article

Dialogue and Discourse, 8, pp. 53-86, 2017.
Examples and specifications occur frequently in text, but not much is known about how readers interpret them. Looking at how they’re annotated in existing discourse corpora, we find that annotators often disagree on these types of relations; specifically, there is disagreement about whether these relations are elaborative (additive) or argumentative (pragmatic causal). To investigate how readers interpret examples and specifications, we conducted a crowdsourced discourse annotation study. The results show that these relations can indeed have two functions: they can be used both to illustrate / specify a situation and to serve as an argument for a claim. These findings suggest that examples and specifications can have multiple simultaneous readings. We discuss the implications of these results for discourse annotation.

@article{Scholman2017,
title = {Examples and specifications that prove a point: Distinguishing between elaborative and argumentative discourse relations},
author = {Merel Scholman and Vera Demberg},
url = {https://www.researchgate.net/publication/318569668_Examples_and_Specifications_that_Prove_a_Point_Identifying_Elaborative_and_Argumentative_Discourse_Relations},
year = {2017},
date = {2017},
journal = {Dialogue and Discourse},
pages = {53-86},
volume = {8},
number = {2},
abstract = {

Examples and specifications occur frequently in text, but not much is known about how readers interpret them. Looking at how they're annotated in existing discourse corpora, we find that annotators often disagree on these types of relations; specifically, there is disagreement about whether these relations are elaborative (additive) or argumentative (pragmatic causal). To investigate how readers interpret examples and specifications, we conducted a crowdsourced discourse annotation study. The results show that these relations can indeed have two functions: they can be used both to illustrate / specify a situation and to serve as an argument for a claim. These findings suggest that examples and specifications can have multiple simultaneous readings. We discuss the implications of these results for discourse annotation.
},
pubstate = {published},
type = {article}
}

Project:   B2

Scholman, Merel; Demberg, Vera

Crowdsourcing discourse interpretations: On the influence of context and the reliability of a connective insertion task Inproceedings

Proceedings of the 11th Linguistic Annotation Workshop, Association for Computational Linguistics, pp. 24-33, Valencia, Spain, 2017.

Traditional discourse annotation tasks are considered costly and time-consuming, and the reliability and validity of these tasks is in question. In this paper, we investigate whether crowdsourcing can be used to obtain reliable discourse relation annotations. We also examine the influence of context on the reliability of the data. The results of a crowdsourced connective insertion task showed that the method can be used to obtain reliable annotations: The majority of the inserted connectives converged with the original label. Further, the method is sensitive to the fact that multiple senses can often be inferred for a single relation. Regarding the presence of context, the results show no significant difference in distributions of insertions between conditions overall. However, a by-item comparison revealed several characteristics of segments that determine whether the presence of context makes a difference in annotations. The findings discussed in this paper can be taken as evidence that crowdsourcing can be used as a valuable method to obtain insights into the sense(s) of relations.

@inproceedings{Scholman2017b,
title = {Crowdsourcing discourse interpretations: On the influence of context and the reliability of a connective insertion task},
author = {Merel Scholman and Vera Demberg},
url = {https://aclanthology.org/W17-0803},
doi = {https://doi.org/10.18653/v1/W17-0803},
year = {2017},
date = {2017},
booktitle = {Proceedings of the 11th Linguistic Annotation Workshop},
pages = {24-33},
publisher = {Association for Computational Linguistics},
address = {Valencia, Spain},
abstract = {Traditional discourse annotation tasks are considered costly and time-consuming, and the reliability and validity of these tasks is in question. In this paper, we investigate whether crowdsourcing can be used to obtain reliable discourse relation annotations. We also examine the influence of context on the reliability of the data. The results of a crowdsourced connective insertion task showed that the method can be used to obtain reliable annotations: The majority of the inserted connectives converged with the original label. Further, the method is sensitive to the fact that multiple senses can often be inferred for a single relation. Regarding the presence of context, the results show no significant difference in distributions of insertions between conditions overall. However, a by-item comparison revealed several characteristics of segments that determine whether the presence of context makes a difference in annotations. The findings discussed in this paper can be taken as evidence that crowdsourcing can be used as a valuable method to obtain insights into the sense(s) of relations.},
pubstate = {published},
type = {inproceedings}
}

Project:   B2

Rutherford, Attapol; Demberg, Vera; Xue, Nianwen

A systematic study of neural discourse models for implicit discourse relation Inproceedings

Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, Association for Computational Linguistics, pp. 281-291, Valencia, Spain, 2017.

Inferring implicit discourse relations in natural language text is the most difficult subtask in discourse parsing. Many neural network models have been proposed to tackle this problem. However, the comparison for this task is not unified, so we could hardly draw clear conclusions about the effectiveness of various architectures. Here, we propose neural network models that are based on feedforward and long short-term memory architectures and systematically study the effects of varying structures. To our surprise, the best-configured feedforward architecture outperforms the LSTM-based model in most cases despite thorough tuning. Further, we compare our best feedforward system with competitive convolutional and recurrent networks and find that feedforward can actually be more effective. For the first time for this task, we compile and publish outputs from previous neural and non-neural systems to establish the standard for further comparison.
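
As a purely illustrative aside, a minimal feedforward classifier for implicit discourse relations might look like the following PyTorch sketch; the layer sizes, mean-pooling of word embeddings, and the number of relation classes are assumptions made for the example, not the configuration studied in the paper.

import torch
import torch.nn as nn

EMBED_DIM = 300      # assumed dimensionality of pre-trained word embeddings
NUM_RELATIONS = 11   # assumed number of discourse relation senses

class FeedforwardDiscourseModel(nn.Module):
    """Classify the implicit relation holding between two discourse arguments.

    Each argument is represented by the average of its word embeddings;
    the two pooled vectors are concatenated and passed through an MLP.
    """

    def __init__(self, embed_dim=EMBED_DIM, hidden_dim=256, num_classes=NUM_RELATIONS):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, arg1_embeddings, arg2_embeddings):
        # arg*_embeddings: (batch, tokens, embed_dim); pool over the token axis.
        arg1 = arg1_embeddings.mean(dim=1)
        arg2 = arg2_embeddings.mean(dim=1)
        return self.mlp(torch.cat([arg1, arg2], dim=-1))

# Forward pass on random data, just to show the shapes involved.
model = FeedforwardDiscourseModel()
logits = model(torch.randn(4, 20, EMBED_DIM), torch.randn(4, 25, EMBED_DIM))
print(logits.shape)  # torch.Size([4, 11])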

@inproceedings{Rutherford2017,
title = {A systematic study of neural discourse models for implicit discourse relation},
author = {Attapol Rutherford and Vera Demberg and Nianwen Xue},
url = {https://aclanthology.org/E17-1027},
year = {2017},
date = {2017},
booktitle = {Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers},
pages = {281-291},
publisher = {Association for Computational Linguistics},
address = {Valencia, Spain},
abstract = {Inferring implicit discourse relations in natural language text is the most difficult subtask in discourse parsing. Many neural network models have been proposed to tackle this problem. However, the comparison for this task is not unified, so we could hardly draw clear conclusions about the effectiveness of various architectures. Here, we propose neural network models that are based on feedforward and long short-term memory architectures and systematically study the effects of varying structures. To our surprise, the best-configured feedforward architecture outperforms the LSTM-based model in most cases despite thorough tuning. Further, we compare our best feedforward system with competitive convolutional and recurrent networks and find that feedforward can actually be more effective. For the first time for this task, we compile and publish outputs from previous neural and non-neural systems to establish the standard for further comparison.},
pubstate = {published},
type = {inproceedings}
}

Project:   B2

Evers-Vermeul, Jacqueline; Hoek, Jet; Scholman, Merel

On temporality in discourse annotation: Theoretical and practical considerations Journal Article

Dialogue and Discourse, 8, pp. 1-20, 2017.
Temporal information is one of the prominent features that determine the coherence in a discourse. That is why we need an adequate way to deal with this type of information during discourse annotation. In this paper, we will argue that temporal order is a relational rather than a segment-specific property, and that it is a cognitively plausible notion: temporal order is expressed in the system of linguistic markers and is relevant in both acquisition and language processing. This means that temporal relations meet the requirements set by the Cognitive approach of Coherence Relations (CCR) to be considered coherence relations, and that CCR would need a way to distinguish temporal relations within its annotation system. We will present merits and drawbacks of different options of reaching this objective and argue in favor of adding temporal order as a new dimension to CCR.

@article{Vermeul2017,
title = {On temporality in discourse annotation: Theoretical and practical considerations},
author = {Jacqueline Evers-Vermeul and Jet Hoek and Merel Scholman},
url = {https://journals.uic.edu/ojs/index.php/dad/article/view/10777},
doi = {https://doi.org/10.5087/dad.2017.201},
year = {2017},
date = {2017},
journal = {Dialogue and Discourse},
pages = {1-20},
volume = {8},
number = {2},
abstract = {

Temporal information is one of the prominent features that determine the coherence in a discourse. That is why we need an adequate way to deal with this type of information during discourse annotation. In this paper, we will argue that temporal order is a relational rather than a segment-specific property, and that it is a cognitively plausible notion: temporal order is expressed in the system of linguistic markers and is relevant in both acquisition and language processing. This means that temporal relations meet the requirements set by the Cognitive approach of Coherence Relations (CCR) to be considered coherence relations, and that CCR would need a way to distinguish temporal relations within its annotation system. We will present merits and drawbacks of different options of reaching this objective and argue in favor of adding temporal order as a new dimension to CCR.
},
pubstate = {published},
type = {article}
}

Project:   B2

Degaetano-Ortlieb, Stefania

Variation in language use across social variables: a data-driven approach Inproceedings

Proceedings of the Corpus and Language Variation in English Research Conference (CLAVIER), Bari, Italy, 2017.

We present a data-driven approach to study language use over time according to social variables (henceforth SV), considering also interactions between different variables. Besides sociolinguistic studies on language variation according to SVs (e.g., Weinreich et al. 1968, Bernstein 1971, Eckert 1989, Milroy and Milroy 1985), recently computational approaches have gained prominence (see e.g., Eisenstein 2015, Danescu-Niculescu-Mizil et al. 2013, and Nguyen et al. 2017 for an overview), not least due to an increase in data availability based on social media and an increasing awareness of the importance of linguistic variation according to SVs in the NLP community.

@inproceedings{Degaetano-Ortlieb2017b,
title = {Variation in language use across social variables: a data-driven approach},
author = {Stefania Degaetano-Ortlieb},
url = {https://stefaniadegaetano.files.wordpress.com/2017/07/clavier2017_slingpro_accepted.pdf},
year = {2017},
date = {2017},
booktitle = {Proceedings of the Corpus and Language Variation in English Research Conference (CLAVIER)},
address = {Bari, Italy},
abstract = {We present a data-driven approach to study language use over time according to social variables (henceforth SV), considering also interactions between different variables. Besides sociolinguistic studies on language variation according to SVs (e.g., Weinreich et al. 1968, Bernstein 1971, Eckert 1989, Milroy and Milroy 1985), recently computational approaches have gained prominence (see e.g., Eisenstein 2015, Danescu-Niculescu-Mizil et al. 2013, and Nguyen et al. 2017 for an overview), not least due to an increase in data availability based on social media and an increasing awareness of the importance of linguistic variation according to SVs in the NLP community.},
pubstate = {published},
type = {inproceedings}
}

Project:   B1

Degaetano-Ortlieb, Stefania; Menzel, Katrin; Teich, Elke

The course of grammatical change in scientific writing: Interdependency between convention and productivity Inproceedings

Proceedings of the Corpus and Language Variation in English Research Conference (CLAVIER), Bari, Italy, 2017.

We present an empirical approach to analyze the course of usage change in scientific writing. A great amount of linguistic research has dealt with grammatical changes, showing their gradual course of change, which nearly always progresses stepwise (see e.g. Bybee et al. 1994, Hopper and Traugott 2003, Lee 2011, De Smet and Van de Velde 2013). Less well understood is under which conditions these changes occur. According to De Smet (2016), specific expressions increase in frequency in one grammatical context, adopting a more conventionalized use, which in turn makes them available in closely related grammatical contexts.

@inproceedings{Degaetano-Ortlieb2017c,
title = {The course of grammatical change in scientific writing: Interdependency between convention and productivity},
author = {Stefania Degaetano-Ortlieb and Katrin Menzel and Elke Teich},
url = {https://stefaniadegaetano.files.wordpress.com/2017/07/clavier2017-degaetano-etal_accepted_final.pdf},
year = {2017},
date = {2017},
booktitle = {Proceedings of the Corpus and Language Variation in English Research Conference (CLAVIER)},
address = {Bari, Italy},
abstract = {We present an empirical approach to analyze the course of usage change in scientific writing. A great amount of linguistic research has dealt with grammatical changes, showing their gradual course of change, which nearly always progresses stepwise (see e.g. Bybee et al. 1994, Hopper and Traugott 2003, Lee 2011, De Smet and Van de Velde 2013). Less well understood is under which conditions these changes occur. According to De Smet (2016), specific expressions increase in frequency in one grammatical context, adopting a more conventionalized use, which in turn makes them available in closely related grammatical contexts.},
pubstate = {published},
type = {inproceedings}
}

Project:   B1

Menzel, Katrin; Degaetano-Ortlieb, Stefania

The diachronic development of combining forms in scientific writing Journal Article

Lege Artis. Language yesterday, today, tomorrow. The Journal of University of SS Cyril and Methodius in Trnava. Warsaw: De Gruyter Open, 2, pp. 185-249, 2017.
This paper addresses the diachronic development of combining forms in English scientific texts over approximately 350 years, from the early stages of the first scholarly journals that were published in English to contemporary English scientific publications. In this paper a critical discussion of the category of combining forms is presented and a case study is produced to examine the role of selected combining forms in two diachronic English corpora.

@article{Menzel2017,
title = {The diachronic development of combining forms in scientific writing},
author = {Katrin Menzel and Stefania Degaetano-Ortlieb},
url = {https://www.researchgate.net/publication/321776056_The_diachronic_development_of_combining_forms_in_scientific_writing},
year = {2017},
date = {2017},
journal = {Lege Artis. Language yesterday, today, tomorrow. The Journal of University of SS Cyril and Methodius in Trnava. Warsaw: De Gruyter Open},
pages = {185-249},
volume = {2},
number = {2},
abstract = {

This paper addresses the diachronic development of combining forms in English scientific texts over approximately 350 years, from the early stages of the first scholarly journals that were published in English to contemporary English scientific publications. In this paper a critical discussion of the category of combining forms is presented and a case study is produced to examine the role of selected combining forms in two diachronic English corpora.
},
pubstate = {published},
type = {article}
}

Project:   B1

Degaetano-Ortlieb, Stefania; Fischer, Stefan; Demberg, Vera; Teich, Elke

An information-theoretic account on the diachronic development of discourse connectors in scientific writing Inproceedings

39th DGfS AG1, Saarbrücken, Germany, 2017.

@inproceedings{Degaetano-Ortlieb2017d,
title = {An information-theoretic account on the diachronic development of discourse connectors in scientific writing},
author = {Stefania Degaetano-Ortlieb and Stefan Fischer and Vera Demberg and Elke Teich},
year = {2017},
date = {2017},
publisher = {39th DGfS AG1},
address = {Saarbr{\"u}cken, Germany},
pubstate = {published},
type = {inproceedings}
}

Project:   B1

Knappen, Jörg; Fischer, Stefan; Kermes, Hannah; Teich, Elke; Fankhauser, Peter

The making of the Royal Society Corpus Inproceedings

21st Nordic Conference on Computational Linguistics (NoDaLiDa) Workshop on Processing Historical language, pp. 7-11, Gothenburg, Sweden, 2017.
The Royal Society Corpus is a corpus of Early and Late modern English built in an agile process covering publications of the Royal Society of London from 1665 to 1869 (Kermes et al., 2016) with a size of approximately 30 million words. In this paper we will provide details on two aspects of the building process, namely the mining of patterns for OCR correction and the improvement and evaluation of part-of-speech tagging.

@inproceedings{Knappen2017,
title = {The making of the Royal Society Corpus},
author = {J{\"o}rg Knappen and Stefan Fischer and Hannah Kermes and Elke Teich and Peter Fankhauser},
url = {https://www.researchgate.net/publication/331648134_The_Making_of_the_Royal_Society_Corpus},
year = {2017},
date = {2017},
booktitle = {21st Nordic Conference on Computational Linguistics (NoDaLiDa) Workshop on Processing Historical language},
pages = {7-11},
publisher = {Workshop on Processing Historical language},
address = {Gothenburg, Sweden},
abstract = {

The Royal Society Corpus is a corpus of Early and Late modern English built in an agile process covering publications of the Royal Society of London from 1665 to 1869 (Kermes et al., 2016) with a size of approximately 30 million words. In this paper we will provide details on two aspects of the building process, namely the mining of patterns for OCR correction and the improvement and evaluation of part-of-speech tagging.
},
pubstate = {published},
type = {inproceedings}
}

Project:   B1

Kermes, Hannah; Teich, Elke

Average surprisal of parts-of-speech Inproceedings

Corpus Linguistics 2017, Birmingham, UK, 2017.

We present an approach to investigate the differences between lexical words and function words and the respective parts-of-speech from an information-theoretical point of view (cf. Shannon, 1949). We use average surprisal (AvS) to measure the amount of information transmitted by a linguistic unit. We expect to find function words to be more predictable (having a lower AvS) and lexical words to be less predictable (having a higher AvS). We also assume that function words’ AvS is fairly constant over time and registers, while AvS of lexical words is more variable depending on time and register.
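
Purely as a sketch of how an average-surprisal measure per part-of-speech could be computed, the following toy example uses a unigram language model and invented data; the paper's actual corpus, language model, and tagset are not reproduced here.

import math
from collections import Counter, defaultdict

def average_surprisal_by_pos(tagged_tokens):
    """Average surprisal (in bits) per POS tag under a simple unigram model.

    tagged_tokens: list of (word, pos) pairs.
    The surprisal of a token is -log2 p(word); the AvS of a POS tag is the
    mean surprisal over all tokens carrying that tag.
    """
    word_counts = Counter(word for word, _ in tagged_tokens)
    total = sum(word_counts.values())

    surprisal_sums = defaultdict(float)
    tag_counts = Counter()
    for word, pos in tagged_tokens:
        surprisal = -math.log2(word_counts[word] / total)
        surprisal_sums[pos] += surprisal
        tag_counts[pos] += 1

    return {pos: surprisal_sums[pos] / tag_counts[pos] for pos in tag_counts}

# Toy data: frequent function words such as "the" end up with low AvS.
tokens = [("the", "DT"), ("results", "NNS"), ("confirm", "VBP"),
          ("the", "DT"), ("hypothesis", "NN")]
print(average_surprisal_by_pos(tokens))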

@inproceedings{Kermes2017,
title = {Average surprisal of parts-of-speech},
author = {Hannah Kermes and Elke Teich},
url = {https://www.birmingham.ac.uk/Documents/college-artslaw/corpus/conference-archives/2017/general/paper207.pdf},
year = {2017},
date = {2017},
publisher = {Corpus Linguistics 2017},
address = {Birmingham, UK},
abstract = {We present an approach to investigate the differences between lexical words and function words and the respective parts-of-speech from an information-theoretical point of view (cf. Shannon, 1949). We use average surprisal (AvS) to measure the amount of information transmitted by a linguistic unit. We expect to find function words to be more predictable (having a lower AvS) and lexical words to be less predictable (having a higher AvS). We also assume that function words' AvS is fairly constant over time and registers, while AvS of lexical words is more variable depending on time and register.},
pubstate = {published},
type = {inproceedings}
}

Project:   B1

Degaetano-Ortlieb, Stefania; Teich, Elke

Modeling intra-textual variation with entropy and surprisal: Topical vs. stylistic patterns Inproceedings

Proceedings of the Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature, Association for Computational Linguistics, pp. 68-77, Vancouver, Canada, 2017.

We present a data-driven approach to investigate intra-textual variation by combining entropy and surprisal. With this approach we detect linguistic variation based on phrasal lexico-grammatical patterns across sections of research articles. Entropy is used to detect patterns typical of specific sections. Surprisal is used to differentiate between more and less informationally-loaded patterns as well as type of information (topical vs. stylistic). While we here focus on research articles in biology/genetics, the methodology is especially interesting for digital humanities scholars, as it can be applied to any text type or domain and combined with additional variables (e.g. time, author or social group).
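
As a toy sketch of the general idea (not the authors' implementation: the notion of "pattern", the lack of smoothing, and the data are simplifying assumptions), entropy below measures how evenly a pattern is spread over sections, and surprisal measures how unexpected a pattern is within one section.

import math
from collections import Counter

def pattern_entropy_across_sections(section_patterns):
    """Entropy (bits) of each pattern's distribution over sections.

    section_patterns: dict mapping section name -> list of observed patterns.
    A pattern concentrated in a single section has low entropy, i.e. it is a
    candidate for being typical of that section.
    """
    per_section = {sec: Counter(pats) for sec, pats in section_patterns.items()}
    totals = Counter()
    for counter in per_section.values():
        totals.update(counter)

    entropies = {}
    for pattern, total in totals.items():
        probs = [per_section[sec][pattern] / total
                 for sec in per_section if per_section[sec][pattern] > 0]
        entropies[pattern] = -sum(p * math.log2(p) for p in probs)
    return entropies

def pattern_surprisal_in_section(section_patterns, section):
    """Surprisal (bits) of each pattern within one section."""
    counts = Counter(section_patterns[section])
    total = sum(counts.values())
    return {pat: -math.log2(c / total) for pat, c in counts.items()}

sections = {
    "introduction": ["in_this_paper", "we_present", "in_this_paper"],
    "methods": ["was_measured", "we_present", "was_measured"],
}
print(pattern_entropy_across_sections(sections))
print(pattern_surprisal_in_section(sections, "methods"))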

@inproceedings{Degaetano-Ortlieb2017,
title = {Modeling intra-textual variation with entropy and surprisal: Topical vs. stylistic patterns},
author = {Stefania Degaetano-Ortlieb and Elke Teich},
url = {https://aclanthology.org/W17-2209},
doi = {https://doi.org/10.18653/v1/W17-2209},
year = {2017},
date = {2017},
booktitle = {Proceedings of the Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature},
pages = {68-77},
publisher = {Association for Computational Linguistics},
address = {Vancouver, Canada},
abstract = {We present a data-driven approach to investigate intra-textual variation by combining entropy and surprisal. With this approach we detect linguistic variation based on phrasal lexico-grammatical patterns across sections of research articles. Entropy is used to detect patterns typical of specific sections. Surprisal is used to differentiate between more and less informationally-loaded patterns as well as type of information (topical vs. stylistic). While we here focus on research articles in biology/genetics, the methodology is especially interesting for digital humanities scholars, as it can be applied to any text type or domain and combined with additional variables (e.g. time, author or social group).},
pubstate = {published},
type = {inproceedings}
}

Project:   B1

Sekicki, Mirjana; Staudte, Maria

Cognitive load in the visual world: The facilitatory effect of gaze Miscellaneous

39th Annual Meeting of the Cognitive Science Society, London, UK, 2017.
  1. Does following a gaze cue influence the cognitive load required for processing the corresponding linguistic referent?
  2. Is considering the gaze cue costly? Is there a distribution of cognitive load between the cue and the referent?
  3. Can a gaze cue have a disruptive effect on processing the linguistic referent?

@miscellaneous{Sekicki2017,
title = {Cognitive load in the visual world: The facilitatory effect of gaze},
author = {Mirjana Sekicki and Maria Staudte},
year = {2017},
date = {2017},
publisher = {39th Annual Meeting of the Cognitive Science Society},
address = {London, UK},
abstract = {

  1. Does following a gaze cue influence the cognitive load required for processing the corresponding linguistic referent?
  2. Is considering the gaze cue costly? Is there a distribution of cognitive load between the cue and the referent?
  3. Can a gaze cue have a disruptive effect on processing the linguistic referent?
},
pubstate = {published},
type = {miscellaneous}
}

Project:   A5

Vogels, Jorrig; Howcroft, David M.; Demberg, Vera

Referential overspecification in response to the listener's cognitive load Inproceedings

International Cognitive Linguistics Conference, Tartu, Estonia, 2017.

According to the Uniform Information Density hypothesis (UID; Jaeger 2010, inter alia), speakers strive to distribute information equally over their utterances. They do this to avoid both peaks and troughs in information density, which may lead to processing difficulty for the listener. Several studies have shown how speakers consistently make linguistic choices that result in a more equal distribution of information (e.g., Jaeger 2010, Mahowald, Fedorenko, Piantadosi, & Gibson 2013, Piantadosi, Tily, & Gibson 2011). However, it is not clear whether speakers also adapt the information density of their utterances to the processing capacity of a specific addressee. For example, when the addressee is involved in a difficult task that is clearly reducing his cognitive capacity for processing linguistic information, will the speaker lower the overall information density of her utterances to accommodate the reduced processing capacity?

@inproceedings{Vogels2017,
title = {Referential overspecification in response to the listener's cognitive load},
author = {Jorrig Vogels and David M. Howcroft and Vera Demberg},
year = {2017},
date = {2017},
publisher = {International Cognitive Linguistics Conference},
address = {Tartu, Estonia},
abstract = {According to the Uniform Information Density hypothesis (UID; Jaeger 2010, inter alia), speakers strive to distribute information equally over their utterances. They do this to avoid both peaks and troughs in information density, which may lead to processing difficulty for the listener. Several studies have shown how speakers consistently make linguistic choices that result in a more equal distribution of information (e.g., Jaeger 2010, Mahowald, Fedorenko, Piantadosi, & Gibson 2013, Piantadosi, Tily, & Gibson 2011). However, it is not clear whether speakers also adapt the information density of their utterances to the processing capacity of a specific addressee. For example, when the addressee is involved in a difficult task that is clearly reducing his cognitive capacity for processing linguistic information, will the speaker lower the overall information density of her utterances to accommodate the reduced processing capacity?},
pubstate = {published},
type = {inproceedings}
}

Project:   A4

Häuser, Katja; Demberg, Vera; Kray, Jutta

Age-differences in recovery from prediction error: Evidence from a simulated driving and combined sentence verification task. Inproceedings

39th Annual Meeting of the Cognitive Science Society, 2017.

@inproceedings{Häuser2017,
title = {Age-differences in recovery from prediction error: Evidence from a simulated driving and combined sentence verification task.},
author = {Katja H{\"a}user and Vera Demberg and Jutta Kray},
year = {2017},
date = {2017-10-17},
publisher = {39th Annual Meeting of the Cognitive Science Society},
pubstate = {published},
type = {inproceedings}
}

Project:   A4

Howcroft, David M.; Klakow, Dietrich; Demberg, Vera

The Extended SPaRKy Restaurant Corpus: Designing a Corpus with Variable Information Density Inproceedings

Proc. Interspeech 2017, pp. 3757-3761, 2017.

Natural language generation (NLG) systems rely on corpora for both hand-crafted approaches in a traditional NLG architecture and for statistical end-to-end (learned) generation systems. Limitations in existing resources, however, make it difficult to develop systems which can vary the linguistic properties of an utterance as needed. For example, when users’ attention is split between a linguistic and a secondary task such as driving, a generation system may need to reduce the information density of an utterance to compensate for the reduction in user attention. We introduce a new corpus in the restaurant recommendation and comparison domain, collected in a paraphrasing paradigm, where subjects wrote texts targeting either a general audience or an elderly family member. This design resulted in a corpus of more than 5000 texts which exhibit a variety of lexical and syntactic choices and differ with respect to average word & sentence length and surprisal. The corpus includes two levels of meaning representation: flat ‘semantic stacks’ for propositional content and Rhetorical Structure Theory (RST) relations between these propositions.

@inproceedings{Howcroft2017b,
title = {The Extended SPaRKy Restaurant Corpus: Designing a Corpus with Variable Information Density},
author = {David M. Howcroft and Dietrich Klakow and Vera Demberg},
url = {http://dx.doi.org/10.21437/Interspeech.2017-1555},
doi = {https://doi.org/10.21437/Interspeech.2017-1555},
year = {2017},
date = {2017-10-17},
booktitle = {Proc. Interspeech 2017},
pages = {3757-3761},
abstract = {Natural language generation (NLG) systems rely on corpora for both hand-crafted approaches in a traditional NLG architecture and for statistical end-to-end (learned) generation systems. Limitations in existing resources, however, make it difficult to develop systems which can vary the linguistic properties of an utterance as needed. For example, when users’ attention is split between a linguistic and a secondary task such as driving, a generation system may need to reduce the information density of an utterance to compensate for the reduction in user attention. We introduce a new corpus in the restaurant recommendation and comparison domain, collected in a paraphrasing paradigm, where subjects wrote texts targeting either a general audience or an elderly family member. This design resulted in a corpus of more than 5000 texts which exhibit a variety of lexical and syntactic choices and differ with respect to average word & sentence length and surprisal. The corpus includes two levels of meaning representation: flat ‘semantic stacks’ for propositional content and Rhetorical Structure Theory (RST) relations between these propositions.},
pubstate = {published},
type = {inproceedings}
}

Project:   A4

Howcroft, David M.; Vogels, Jorrig; Demberg, Vera

G-TUNA: a corpus of referring expressions in German, including duration information Inproceedings

Proceedings of the 10th International Conference on Natural Language Generation, Association for Computational Linguistics, pp. 149-153, Santiago de Compostela, Spain, 2017.

Corpora of referring expressions elicited from human participants in a controlled environment are an important resource for research on automatic referring expression generation. We here present G-TUNA, a new corpus of referring expressions for German. Using the furniture stimuli set developed for the TUNA and D-TUNA corpora, our corpus extends on these corpora by providing data collected in a simulated driving dual-task setting, and additionally provides exact duration annotations for the spoken referring expressions. This corpus will hence allow researchers to analyze the interaction between referring expression length and speech rate, under conditions where the listener is under high vs. low cognitive load.

@inproceedings{W17-3522,
title = {G-TUNA: a corpus of referring expressions in German, including duration information},
author = {David M. Howcroft and Jorrig Vogels and Vera Demberg},
url = {http://www.aclweb.org/anthology/W17-3522},
doi = {https://doi.org/10.18653/v1/W17-3522},
year = {2017},
date = {2017},
booktitle = {Proceedings of the 10th International Conference on Natural Language Generation},
pages = {149-153},
publisher = {Association for Computational Linguistics},
address = {Santiago de Compostela, Spain},
abstract = {Corpora of referring expressions elicited from human participants in a controlled environment are an important resource for research on automatic referring expression generation. We here present G-TUNA, a new corpus of referring expressions for German. Using the furniture stimuli set developed for the TUNA and D-TUNA corpora, our corpus extends on these corpora by providing data collected in a simulated driving dual-task setting, and additionally provides exact duration annotations for the spoken referring expressions. This corpus will hence allow researchers to analyze the interaction between referring expression length and speech rate, under conditions where the listener is under high vs. low cognitive load.},
pubstate = {published},
type = {inproceedings}
}

Project:   A4

Howcroft, David M.; Demberg, Vera

Psycholinguistic Models of Sentence Processing Improve Sentence Readability Ranking Inproceedings

Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, Association for Computational Linguistics, pp. 958-968, Valencia, Spain, 2017.

While previous research on readability has typically focused on document-level measures, recent work in areas such as natural language generation has pointed out the need of sentence-level readability measures. Much of psycholinguistics has focused for many years on processing measures that provide difficulty estimates on a word-by-word basis. However, these psycholinguistic measures have not yet been tested on sentence readability ranking tasks. In this paper, we use four psycholinguistic measures: idea density, surprisal, integration cost, and embedding depth to test whether these features are predictive of readability levels. We find that psycholinguistic features significantly improve performance by up to 3 percentage points over a standard document-level readability metric baseline.
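
To make the ranking setup concrete, here is a minimal pairwise-ranking sketch built on scikit-learn; the sentences, feature values, and choice of classifier are illustrative assumptions rather than the system or data used in the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical psycholinguistic feature vectors per sentence:
# [idea density, mean surprisal, integration cost, embedding depth]
sentence_features = {
    "simple": np.array([0.4, 6.2, 1.0, 1.0]),
    "medium": np.array([0.5, 7.8, 2.5, 2.0]),
    "complex": np.array([0.6, 9.1, 4.0, 3.0]),
}

# Pairwise ranking: the model sees feature differences and predicts whether
# the first sentence of the pair is more readable than the second.
pairs = [("simple", "complex", 1), ("complex", "simple", 0),
         ("simple", "medium", 1), ("medium", "simple", 0),
         ("medium", "complex", 1), ("complex", "medium", 0)]
X = np.array([sentence_features[a] - sentence_features[b] for a, b, _ in pairs])
y = np.array([label for _, _, label in pairs])

ranker = LogisticRegression().fit(X, y)
diff = (sentence_features["simple"] - sentence_features["complex"]).reshape(1, -1)
print(ranker.predict(diff))  # [1]: "simple" is ranked as the more readable sentence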

@inproceedings{Howcroft2017,
title = {Psycholinguistic Models of Sentence Processing Improve Sentence Readability Ranking},
author = {David M. Howcroft and Vera Demberg},
url = {http://www.aclweb.org/anthology/E17-1090},
year = {2017},
date = {2017-10-17},
booktitle = {Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers},
pages = {958-968},
publisher = {Association for Computational Linguistics},
address = {Valencia, Spain},
abstract = {While previous research on readability has typically focused on document-level measures, recent work in areas such as natural language generation has pointed out the need of sentence-level readability measures. Much of psycholinguistics has focused for many years on processing measures that provide difficulty estimates on a word-by-word basis. However, these psycholinguistic measures have not yet been tested on sentence readability ranking tasks. In this paper, we use four psycholinguistic measures: idea density, surprisal, integration cost, and embedding depth to test whether these features are predictive of readability levels. We find that psycholinguistic features significantly improve performance by up to 3 percentage points over a standard document-level readability metric baseline.},
pubstate = {published},
type = {inproceedings}
}

Project:   A4

Roth, Michael; Thater, Stefan; Ostermann, Simon; Pinkal, Manfred

Aligning Script Events with Narrative Texts Inproceedings

Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017), Association for Computational Linguistics, Vancouver, Canada, 2017.

Script knowledge plays a central role in text understanding and is relevant for a variety of downstream tasks. In this paper, we consider two recent datasets which provide a rich and general representation of script events in terms of paraphrase sets.

We introduce the task of mapping event mentions in narrative texts to such script event types, and present a model for this task that exploits rich linguistic representations as well as information on temporal ordering. The results of our experiments demonstrate that this complex task is indeed feasible.

@inproceedings{ostermann-EtAl:2017:starSEM,
title = {Aligning Script Events with Narrative Texts},
author = {Michael Roth and Stefan Thater and Simon Ostermann and Manfred Pinkal},
url = {http://www.aclweb.org/anthology/S17-1016},
year = {2017},
date = {2017-10-17},
booktitle = {Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017)},
publisher = {Association for Computational Linguistics},
address = {Vancouver, Canada},
abstract = {Script knowledge plays a central role in text understanding and is relevant for a variety of downstream tasks. In this paper, we consider two recent datasets which provide a rich and general representation of script events in terms of paraphrase sets. We introduce the task of mapping event mentions in narrative texts to such script event types, and present a model for this task that exploits rich linguistic representations as well as information on temporal ordering. The results of our experiments demonstrate that this complex task is indeed feasible.},
pubstate = {published},
type = {inproceedings}
}

Project:   A3

Nguyen, Dai Quoc; Nguyen, Dat Quoc; Modi, Ashutosh; Thater, Stefan; Pinkal, Manfred

A Mixture Model for Learning Multi-Sense Word Embeddings Inproceedings

Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017), Association for Computational Linguistics, pp. 121-127, Vancouver, Canada, 2017.

Word embeddings are now a standard technique for inducing meaning representations for words. For getting good representations, it is important to take into account different senses of a word. In this paper, we propose a mixture model for learning multi-sense word embeddings.

Our model generalizes the previous works in that it allows to induce different weights of different senses of a word. The experimental results show that our model outperforms previous models on standard evaluation tasks.

@inproceedings{nguyen-EtAl:2017:starSEM,
title = {A Mixture Model for Learning Multi-Sense Word Embeddings},
author = {Dai Quoc Nguyen and Dat Quoc Nguyen and Ashutosh Modi and Stefan Thater and Manfred Pinkal},
url = {http://www.aclweb.org/anthology/S17-1015},
year = {2017},
date = {2017},
booktitle = {Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017)},
pages = {121-127},
publisher = {Association for Computational Linguistics},
address = {Vancouver, Canada},
abstract = {Word embeddings are now a standard technique for inducing meaning representations for words. For getting good representations, it is important to take into account different senses of a word. In this paper, we propose a mixture model for learning multi-sense word embeddings. Our model generalizes the previous works in that it allows to induce different weights of different senses of a word. The experimental results show that our model outperforms previous models on standard evaluation tasks.},
pubstate = {published},
type = {inproceedings}
}

Projects:   A2 A3
