Publications

Shi, Wei; Demberg, Vera

Learning to Explicitate Connectives with Seq2Seq Network for Implicit Discourse Relation Classification Inproceedings

Proceedings of the 13th International Conference on Computational Semantics, Association for Computational Linguistics, pp. 188-199, Gothenburg, Sweden, 2019.

Implicit discourse relation classification is one of the most difficult steps in discourse parsing. The difficulty stems from the fact that the coherence relation must be inferred based on the content of the discourse relational arguments. Therefore, an effective encoding of the relational arguments is of crucial importance. We here propose a new model for implicit discourse relation classification, which consists of a classifier, and a sequence-to-sequence model which is trained to generate a representation of the discourse relational arguments by trying to predict the relational arguments including a suitable implicit connective. Training is possible because such implicit connectives have been annotated as part of the PDTB corpus. Along with a memory network, our model could generate more refined representations for the task. And on the now standard 11-way classification, our method outperforms the previous state of the art systems on the PDTB benchmark on multiple settings including cross validation.

@inproceedings{Shi2019b,
title = {Learning to Explicitate Connectives with Seq2Seq Network for Implicit Discourse Relation Classification},
author = {Wei Shi and Vera Demberg},
url = {https://aclanthology.org/W19-0416},
doi = {https://doi.org/10.18653/v1/W19-0416},
year = {2019},
date = {2019},
booktitle = {Proceedings of the 13th International Conference on Computational Semantics},
pages = {188-199},
publisher = {Association for Computational Linguistics},
address = {Gothenburg, Sweden},
abstract = {Implicit discourse relation classification is one of the most difficult steps in discourse parsing. The difficulty stems from the fact that the coherence relation must be inferred based on the content of the discourse relational arguments. Therefore, an effective encoding of the relational arguments is of crucial importance. We here propose a new model for implicit discourse relation classification, which consists of a classifier, and a sequence-to-sequence model which is trained to generate a representation of the discourse relational arguments by trying to predict the relational arguments including a suitable implicit connective. Training is possible because such implicit connectives have been annotated as part of the PDTB corpus. Along with a memory network, our model could generate more refined representations for the task. And on the now standard 11-way classification, our method outperforms the previous state of the art systems on the PDTB benchmark on multiple settings including cross validation.},
pubstate = {published},
type = {inproceedings}
}

Project:   B2

Crible, Ludivine; Demberg, Vera

The effect of genre variation on the production and acceptability of underspecified discourse markers in English Inproceedings

20th DiscourseNet, Budapest, Hungary, 2018.

@inproceedings{Crible2018,
title = {The effect of genre variation on the production and acceptability of underspecified discourse markers in English},
author = {Ludivine Crible and Vera Demberg},
url = {https://dial.uclouvain.be/pr/boreal/object/boreal:192393},
year = {2018},
date = {2018},
publisher = {20th DiscourseNet},
address = {Budapest, Hungary},
abstract = {},
pubstate = {published},
type = {inproceedings}
}

Project:   B2

Yung, Frances Pik Yu; Demberg, Vera

Do speakers produce discourse connectives rationally? Inproceedings

Proceedings of the Eighth Workshop on Cognitive Aspects of Computational Language Learning and Processing, Association for Computational Linguistics, pp. 6-16, Melbourne, Australia, 2018.

A number of different discourse connectives can be used to mark the same discourse relation, but it is unclear what factors affect connective choice. One recent account is the Rational Speech Acts theory, which predicts that speakers try to maximize the informativeness of an utterance such that the listener can interpret the intended meaning correctly. Existing prior work uses referential language games to test the rational account of speakers’ production of concrete meanings, such as identification of objects within a picture. Building on the same paradigm, we design a novel Discourse Continuation Game to investigate speakers’ production of abstract discourse relations. Experimental results reveal that speakers significantly prefer a more informative connective, in line with predictions of the RSA model.

@inproceedings{Yung2019b,
title = {Do speakers produce discourse connectives rationally?},
author = {Frances Pik Yu Yung and Vera Demberg},
url = {https://aclanthology.org/W18-2802},
doi = {https://doi.org/10.18653/v1/W18-2802},
year = {2018},
date = {2018},
booktitle = {Proceedings of the Eighth Workshop on Cognitive Aspects of Computational Language Learning and Processing},
pages = {6-16},
publisher = {Association for Computational Linguistics},
address = {Melbourne, Australia},
abstract = {A number of different discourse connectives can be used to mark the same discourse relation, but it is unclear what factors affect connective choice. One recent account is the Rational Speech Acts theory, which predicts that speakers try to maximize the informativeness of an utterance such that the listener can interpret the intended meaning correctly. Existing prior work uses referential language games to test the rational account of speakers{'} production of concrete meanings, such as identification of objects within a picture. Building on the same paradigm, we design a novel Discourse Continuation Game to investigate speakers{'} production of abstract discourse relations. Experimental results reveal that speakers significantly prefer a more informative connective, in line with predictions of the RSA model.},
pubstate = {published},
type = {inproceedings}
}

Project:   B2

Sanders, Ted J. M.; Demberg, Vera; Hoek, Jet; Scholman, Merel; Torabi Asr, Fatemeh; Zufferey, Sandrine; Evers-Vermeul, Jacqueline

Unifying dimensions in coherence relations: How various annotation frameworks are related Journal Article

Corpus Linguistics and Linguistic Theory, 2018.

In this paper, we show how three often used and seemingly different discourse annotation frameworks – Penn Discourse Treebank (PDTB), Rhetorical Structure Theory (RST), and Segmented Discourse Representation Theory – can be related by using a set of unifying dimensions. These dimensions are taken from the Cognitive approach to Coherence Relations and combined with more fine-grained additional features from the frameworks themselves to yield a posited set of dimensions that can successfully map three frameworks. The resulting interface will allow researchers to find identical or at least closely related relations within sets of annotated corpora, even if they are annotated within different frameworks. Furthermore, we tested our unified dimension (UniDim) approach by comparing PDTB and RST annotations of identical newspaper texts and converting their original end label annotations of relations into the accompanying values per dimension. Subsequently, rates of overlap in the attributed values per dimension were analyzed. Results indicate that the proposed dimensions indeed create an interface that makes existing annotation systems “talk to each other.”

@article{Sanders2018,
title = {Unifying dimensions in coherence relations: How various annotation frameworks are related},
author = {Ted J. M. Sanders and Vera Demberg and Jet Hoek and Merel Scholman and Fatemeh Torabi Asr and Sandrine Zufferey and Jacqueline Evers-Vermeul},
url = {https://www.degruyter.com/document/doi/10.1515/cllt-2016-0078/html},
doi = {https://doi.org/10.1515/cllt-2016-0078},
year = {2018},
date = {2018-05-22},
journal = {Corpus Linguistics and Linguistic Theory},
abstract = {In this paper, we show how three often used and seemingly different discourse annotation frameworks – Penn Discourse Treebank (PDTB), Rhetorical Structure Theory (RST), and Segmented Discourse Representation Theory – can be related by using a set of unifying dimensions. These dimensions are taken from the Cognitive approach to Coherence Relations and combined with more fine-grained additional features from the frameworks themselves to yield a posited set of dimensions that can successfully map three frameworks. The resulting interface will allow researchers to find identical or at least closely related relations within sets of annotated corpora, even if they are annotated within different frameworks. Furthermore, we tested our unified dimension (UniDim) approach by comparing PDTB and RST annotations of identical newspaper texts and converting their original end label annotations of relations into the accompanying values per dimension. Subsequently, rates of overlap in the attributed values per dimension were analyzed. Results indicate that the proposed dimensions indeed create an interface that makes existing annotation systems “talk to each other.”},
pubstate = {published},
type = {article}
}

Project:   B2

Hoek, Jet; Scholman, Merel

Evaluating discourse annotation: Some recent insights and new approaches Inproceedings

Proceedings of the 13th Joint ISO-ACL Workshop on Interoperable Semantic Annotation (ISA-13), 2017.

Annotated data is an important resource for the linguistics community, which is why researchers need to be sure that such data are reliable. However, arriving at sufficiently reliable annotations appears to be an issue within the field of discourse, possibly due to the fact that coherence is a mental phenomenon rather than a textual one. In this paper, we discuss recent insights and developments regarding annotation and reliability evaluation that are relevant to the field of discourse. We focus on characteristics of coherence that impact reliability scores and look at how different measures are affected by this. We discuss benefits and disadvantages of these measures, and propose that discourse annotation results be accompanied by a detailed report of the annotation process and data, as well as a careful consideration of the reliability measure that is applied.

@inproceedings{hoek2017evaluating,
title = {Evaluating discourse annotation: Some recent insights and new approaches},
author = {Jet Hoek and Merel Scholman},
url = {https://aclanthology.org/W17-7401},
year = {2017},
date = {2017},
booktitle = {Proceedings of the 13th Joint ISO-ACL Workshop on Interoperable Semantic Annotation (ISA-13)},
abstract = {Annotated data is an important resource for the linguistics community, which is why researchers need to be sure that such data are reliable. However, arriving at sufficiently reliable annotations appears to be an issue within the field of discourse, possibly due to the fact that coherence is a mental phenomenon rather than a textual one. In this paper, we discuss recent insights and developments regarding annotation and reliability evaluation that are relevant to the field of discourse. We focus on characteristics of coherence that impact reliability scores and look at how different measures are affected by this. We discuss benefits and disadvantages of these measures, and propose that discourse annotation results be accompanied by a detailed report of the annotation process and data, as well as a careful consideration of the reliability measure that is applied.},
pubstate = {published},
type = {inproceedings}
}

Project:   B2

Shi, Wei; Yung, Frances Pik Yu; Rubino, Raphael; Demberg, Vera

Using explicit discourse connectives in translation for implicit discourse relation classification Inproceedings

Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Asian Federation of Natural Language Processing, pp. 484-495, Taipei, Taiwan, 2017.

Implicit discourse relation recognition is an extremely challenging task due to the lack of indicative connectives. Various neural network architectures have been proposed for this task recently, but most of them suffer from the shortage of labeled data. In this paper, we address this problem by procuring additional training data from parallel corpora: When humans translate a text, they sometimes add connectives (a process known as explicitation). We automatically back-translate it into an English connective and use it to infer a label with high confidence. We show that a training set several times larger than the original training set can be generated this way. With the extra labeled instances, we show that even a simple bidirectional Long Short-Term Memory Network can outperform the current state-of-the-art.

@inproceedings{Shi2017b,
title = {Using explicit discourse connectives in translation for implicit discourse relation classification},
author = {Wei Shi and Frances Pik Yu Yung and Raphael Rubino and Vera Demberg},
url = {https://aclanthology.org/I17-1049},
year = {2017},
date = {2017},
booktitle = {Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
pages = {484-495},
publisher = {Asian Federation of Natural Language Processing},
address = {Taipei, Taiwan},
abstract = {Implicit discourse relation recognition is an extremely challenging task due to the lack of indicative connectives. Various neural network architectures have been proposed for this task recently, but most of them suffer from the shortage of labeled data. In this paper, we address this problem by procuring additional training data from parallel corpora: When humans translate a text, they sometimes add connectives (a process known as explicitation). We automatically back-translate it into an English connective and use it to infer a label with high confidence. We show that a training set several times larger than the original training set can be generated this way. With the extra labeled instances, we show that even a simple bidirectional Long Short-Term Memory Network can outperform the current state-of-the-art.},
pubstate = {published},
type = {inproceedings}
}

Project:   B2

Shi, Wei; Demberg, Vera

On the need of cross validation for discourse relation classification Inproceedings

Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, Association for Computational Linguistics, pp. 150-156, Valencia, Spain, 2017.

The task of implicit discourse relation classification has received increased attention in recent years, including two CoNLL shared tasks on the topic. Existing machine learning models for the task train on sections 2-21 of the PDTB and test on section 23, which includes a total of 761 implicit discourse relations. In this paper, we’d like to make a methodological point, arguing that the standard test set is too small to draw conclusions about whether the inclusion of certain features constitutes a genuine improvement, or whether one got lucky with some properties of the test set, and argue for the adoption of cross validation for the discourse relation classification task by the community.

@inproceedings{Shi2017,
title = {On the need of cross validation for discourse relation classification},
author = {Wei Shi and Vera Demberg},
url = {https://aclanthology.org/E17-2024},
year = {2017},
date = {2017},
booktitle = {Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
pages = {150-156},
publisher = {Association for Computational Linguistics},
address = {Valencia, Spain},
abstract = {The task of implicit discourse relation classification has received increased attention in recent years, including two CoNLL shared tasks on the topic. Existing machine learning models for the task train on sections 2-21 of the PDTB and test on section 23, which includes a total of 761 implicit discourse relations. In this paper, we{'}d like to make a methodological point, arguing that the standard test set is too small to draw conclusions about whether the inclusion of certain features constitutes a genuine improvement, or whether one got lucky with some properties of the test set, and argue for the adoption of cross validation for the discourse relation classification task by the community.},
pubstate = {published},
type = {inproceedings}
}

Project:   B2

Scholman, Merel; Rohde, Hannah; Demberg, Vera

"On the one hand" as a cue to anticipate upcoming discourse structure Journal Article

Journal of Memory and Language, 97, pp. 47-60, 2017.

Research has shown that people anticipate upcoming linguistic content, but most work to date has focused on relatively short-range expectation-driven processes within the current sentence or between adjacent sentences. We use the discourse marker On the one hand to test whether comprehenders maintain expectations regarding upcoming content in discourse representations that span multiple sentences. Three experiments show that comprehenders anticipate more than just On the other hand; rather, they keep track of embedded constituents and establish non-local dependencies. Our results show that comprehenders disprefer a subsequent contrast marked with On the other hand when a passage has already provided intervening content that establishes an appropriate contrast with On the one hand. Furthermore, comprehenders maintain their expectation for an upcoming contrast across intervening material, even if the embedded constituent itself contains contrast. The results are taken to support expectation-driven models of processing in which comprehenders posit and maintain structural representations of discourse structure.

@article{Merel2017,
title = {"On the one hand" as a cue to anticipate upcoming discourse structure},
author = {Merel Scholman and Hannah Rohde and Vera Demberg},
url = {https://www.sciencedirect.com/science/article/pii/S0749596X17300566},
year = {2017},
date = {2017},
journal = {Journal of Memory and Language},
pages = {47-60},
volume = {97},
abstract = {Research has shown that people anticipate upcoming linguistic content, but most work to date has focused on relatively short-range expectation-driven processes within the current sentence or between adjacent sentences. We use the discourse marker On the one hand to test whether comprehenders maintain expectations regarding upcoming content in discourse representations that span multiple sentences. Three experiments show that comprehenders anticipate more than just On the other hand; rather, they keep track of embedded constituents and establish non-local dependencies. Our results show that comprehenders disprefer a subsequent contrast marked with On the other hand when a passage has already provided intervening content that establishes an appropriate contrast with On the one hand. Furthermore, comprehenders maintain their expectation for an upcoming contrast across intervening material, even if the embedded constituent itself contains contrast. The results are taken to support expectation-driven models of processing in which comprehenders posit and maintain structural representations of discourse structure.},
pubstate = {published},
type = {article}
}

Project:   B2

Scholman, Merel; Demberg, Vera

Examples and specifications that prove a point: Distinguishing between elaborative and argumentative discourse relations Journal Article

Dialogue and Discourse, 8, pp. 53-86, 2017.

Examples and specifications occur frequently in text, but not much is known about how readers interpret them. Looking at how they’re annotated in existing discourse corpora, we find that annotators often disagree on these types of relations; specifically, there is disagreement about whether these relations are elaborative (additive) or argumentative (pragmatic causal). To investigate how readers interpret examples and specifications, we conducted a crowdsourced discourse annotation study. The results show that these relations can indeed have two functions: they can be used to both illustrate / specify a situation and serve as an argument for a claim. These findings suggest that examples and specifications can have multiple simultaneous readings. We discuss the implications of these results for discourse annotation.

@article{Scholman2017,
title = {Examples and specifications that prove a point: Distinguishing between elaborative and argumentative discourse relations},
author = {Merel Scholman and Vera Demberg},
url = {https://www.researchgate.net/publication/318569668_Examples_and_Specifications_that_Prove_a_Point_Identifying_Elaborative_and_Argumentative_Discourse_Relations},
year = {2017},
date = {2017},
journal = {Dialogue and Discourse},
pages = {53-86},
volume = {8},
number = {2},
abstract = {Examples and specifications occur frequently in text, but not much is known about how readers interpret them. Looking at how they're annotated in existing discourse corpora, we find that annotators often disagree on these types of relations; specifically, there is disagreement about whether these relations are elaborative (additive) or argumentative (pragmatic causal). To investigate how readers interpret examples and specifications, we conducted a crowdsourced discourse annotation study. The results show that these relations can indeed have two functions: they can be used to both illustrate / specify a situation and serve as an argument for a claim. These findings suggest that examples and specifications can have multiple simultaneous readings. We discuss the implications of these results for discourse annotation.},
pubstate = {published},
type = {article}
}

Project:   B2

Scholman, Merel; Demberg, Vera

Crowdsourcing discourse interpretations: On the influence of context and the reliability of a connective insertion task Inproceedings

Proceedings of the 11th Linguistic Annotation Workshop, Association for Computational Linguistics, pp. 24-33, Valencia, Spain, 2017.

Traditional discourse annotation tasks are considered costly and time-consuming, and the reliability and validity of these tasks is in question. In this paper, we investigate whether crowdsourcing can be used to obtain reliable discourse relation annotations. We also examine the influence of context on the reliability of the data. The results of a crowdsourced connective insertion task showed that the method can be used to obtain reliable annotations: The majority of the inserted connectives converged with the original label. Further, the method is sensitive to the fact that multiple senses can often be inferred for a single relation. Regarding the presence of context, the results show no significant difference in distributions of insertions between conditions overall. However, a by-item comparison revealed several characteristics of segments that determine whether the presence of context makes a difference in annotations. The findings discussed in this paper can be taken as evidence that crowdsourcing can be used as a valuable method to obtain insights into the sense(s) of relations.

@inproceedings{Scholman2017,
title = {Crowdsourcing discourse interpretations: On the influence of context and the reliability of a connective insertion task},
author = {Merel Scholman and Vera Demberg},
url = {https://aclanthology.org/W17-0803},
doi = {https://doi.org/10.18653/v1/W17-0803},
year = {2017},
date = {2017},
booktitle = {Proceedings of the 11th Linguistic Annotation Workshop},
pages = {24-33},
publisher = {Association for Computational Linguistics},
address = {Valencia, Spain},
abstract = {Traditional discourse annotation tasks are considered costly and time-consuming, and the reliability and validity of these tasks is in question. In this paper, we investigate whether crowdsourcing can be used to obtain reliable discourse relation annotations. We also examine the influence of context on the reliability of the data. The results of a crowdsourced connective insertion task showed that the method can be used to obtain reliable annotations: The majority of the inserted connectives converged with the original label. Further, the method is sensitive to the fact that multiple senses can often be inferred for a single relation. Regarding the presence of context, the results show no significant difference in distributions of insertions between conditions overall. However, a by-item comparison revealed several characteristics of segments that determine whether the presence of context makes a difference in annotations. The findings discussed in this paper can be taken as evidence that crowdsourcing can be used as a valuable method to obtain insights into the sense(s) of relations.},
pubstate = {published},
type = {inproceedings}
}

Project:   B2

Rutherford, Attapol; Demberg, Vera; Xue, Nianwen

A systematic study of neural discourse models for implicit discourse relation Inproceedings

Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, Association for Computational Linguistics, pp. 281-291, Valencia, Spain, 2017.

Inferring implicit discourse relations in natural language text is the most difficult subtask in discourse parsing. Many neural network models have been proposed to tackle this problem. However, the comparison for this task is not unified, so we could hardly draw clear conclusions about the effectiveness of various architectures. Here, we propose neural network models that are based on feedforward and long short-term memory architecture and systematically study the effects of varying structures. To our surprise, the best-configured feedforward architecture outperforms LSTM-based model in most cases despite thorough tuning. Further, we compare our best feedforward system with competitive convolutional and recurrent networks and find that feedforward can actually be more effective. For the first time for this task, we compile and publish outputs from previous neural and non-neural systems to establish the standard for further comparison.

@inproceedings{Rutherford2017,
title = {A systematic study of neural discourse models for implicit discourse relation},
author = {Attapol Rutherford and Vera Demberg and Nianwen Xue},
url = {https://aclanthology.org/E17-1027},
year = {2017},
date = {2017},
booktitle = {Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers},
pages = {281-291},
publisher = {Association for Computational Linguistics},
address = {Valencia, Spain},
abstract = {Inferring implicit discourse relations in natural language text is the most difficult subtask in discourse parsing. Many neural network models have been proposed to tackle this problem. However, the comparison for this task is not unified, so we could hardly draw clear conclusions about the effectiveness of various architectures. Here, we propose neural network models that are based on feedforward and long short-term memory architecture and systematically study the effects of varying structures. To our surprise, the best-configured feedforward architecture outperforms LSTM-based model in most cases despite thorough tuning. Further, we compare our best feedforward system with competitive convolutional and recurrent networks and find that feedforward can actually be more effective. For the first time for this task, we compile and publish outputs from previous neural and non-neural systems to establish the standard for further comparison.},
pubstate = {published},
type = {inproceedings}
}

Project:   B2

Evers-Vermeul, Jacqueline; Hoek, Jet; Scholman, Merel

On temporality in discourse annotation: Theoretical and practical considerations Journal Article

Dialogue and Discourse, 8, pp. 1-20, 2017.

Temporal information is one of the prominent features that determine the coherence in a discourse. That is why we need an adequate way to deal with this type of information during discourse annotation. In this paper, we will argue that temporal order is a relational rather than a segment-specific property, and that it is a cognitively plausible notion: temporal order is expressed in the system of linguistic markers and is relevant in both acquisition and language processing. This means that temporal relations meet the requirements set by the Cognitive approach of Coherence Relations (CCR) to be considered coherence relations, and that CCR would need a way to distinguish temporal relations within its annotation system. We will present merits and drawbacks of different options of reaching this objective and argue in favor of adding temporal order as a new dimension to CCR.

@article{Vermeul2017,
title = {On temporality in discourse annotation: Theoretical and practical considerations},
author = {Jacqueline Evers-Vermeul and Jet Hoek and Merel Scholman},
url = {https://journals.uic.edu/ojs/index.php/dad/article/view/10777},
doi = {https://doi.org/10.5087/dad.2017.201},
year = {2017},
date = {2017},
journal = {Dialogue and Discourse},
pages = {1-20},
volume = {8},
number = {2},
abstract = {Temporal information is one of the prominent features that determine the coherence in a discourse. That is why we need an adequate way to deal with this type of information during discourse annotation. In this paper, we will argue that temporal order is a relational rather than a segment-specific property, and that it is a cognitively plausible notion: temporal order is expressed in the system of linguistic markers and is relevant in both acquisition and language processing. This means that temporal relations meet the requirements set by the Cognitive approach of Coherence Relations (CCR) to be considered coherence relations, and that CCR would need a way to distinguish temporal relations within its annotation system. We will present merits and drawbacks of different options of reaching this objective and argue in favor of adding temporal order as a new dimension to CCR.},
pubstate = {published},
type = {article}
}

Project:   B2

Sayeed, Asad; Greenberg, Clayton; Demberg, Vera

Thematic fit evaluation: an aspect of selectional preferences Journal Article

Proceedings of the 1st Workshop on Evaluating Vector Space Representations for NLP, pp. 99-105, 2016, ISBN 9781945626142.

In this paper, we discuss the human thematic fit judgement correlation task in the context of real-valued vector space word representations. Thematic fit is the extent to which an argument fulfils the selectional preference of a verb given a role: for example, how well “cake” fulfils the patient role of “cut”. In recent work, systems have been evaluated on this task by finding the correlations of their output judgements with human-collected judgement data. This task is a representation-independent way of evaluating models that can be applied whenever a system score can be generated, and it is applicable wherever predicate-argument relations are significant to performance in end-user tasks. Significant progress has been made on this cognitive modeling task, leaving considerable space for future, more comprehensive types of evaluation.

@article{Sayeed2016,
title = {Thematic fit evaluation: an aspect of selectional preferences},
author = {Asad Sayeed and Clayton Greenberg and Vera Demberg},
url = {https://www.researchgate.net/publication/306094219_Thematic_fit_evaluation_an_aspect_of_selectional_preferences},
year = {2016},
date = {2016},
journal = {Proceedings of the 1st Workshop on Evaluating Vector Space Representations for NLP},
pages = {99-105},
abstract = {In this paper, we discuss the human thematic fit judgement correlation task in the context of real-valued vector space word representations. Thematic fit is the extent to which an argument fulfils the selectional preference of a verb given a role: for example, how well “cake” fulfils the patient role of “cut”. In recent work, systems have been evaluated on this task by finding the correlations of their output judgements with human-collected judgement data. This task is a representation-independent way of evaluating models that can be applied whenever a system score can be generated, and it is applicable wherever predicate-argument relations are significant to performance in end-user tasks. Significant progress has been made on this cognitive modeling task, leaving considerable space for future, more comprehensive types of evaluation.},
pubstate = {published},
type = {article}
}

Projects:   B2 B4

Rutherford, Attapol; Demberg, Vera; Xue, Nianwen

Neural Network Models for Implicit Discourse Relation Classification in English and Chinese without Surface Features Journal Article

CoRR, 2016.

Inferring implicit discourse relations in natural language text is the most difficult subtask in discourse parsing. Surface features achieve good performance, but they are not readily applicable to other languages without semantic lexicons. Previous neural models require parses, surface features, or a small label set to work well. Here, we propose neural network models that are based on feedforward and long short-term memory architecture without any surface features. To our surprise, our best configured feedforward architecture outperforms LSTM-based model in most cases despite thorough tuning. Under various fine-grained label sets and a cross-linguistic setting, our feedforward models perform consistently better or at least just as well as systems that require hand-crafted surface features. Our models present the first neural Chinese discourse parser in the style of Chinese Discourse Treebank, showing that our results hold cross-linguistically.

@article{DBLP:journals/corr/RutherfordDX16,
title = {Neural Network Models for Implicit Discourse Relation Classification in English and Chinese without Surface Features},
author = {Attapol Rutherford and Vera Demberg and Nianwen Xue},
url = {http://arxiv.org/abs/1606.01990},
year = {2016},
date = {2016},
journal = {CoRR},
abstract = {Inferring implicit discourse relations in natural language text is the most difficult subtask in discourse parsing. Surface features achieve good performance, but they are not readily applicable to other languages without semantic lexicons. Previous neural models require parses, surface features, or a small label set to work well. Here, we propose neural network models that are based on feedforward and long short-term memory architecture without any surface features. To our surprise, our best configured feedforward architecture outperforms LSTM-based model in most cases despite thorough tuning. Under various fine-grained label sets and a cross-linguistic setting, our feedforward models perform consistently better or at least just as well as systems that require hand-crafted surface features. Our models present the first neural Chinese discourse parser in the style of Chinese Discourse Treebank, showing that our results hold cross-linguistically.},
pubstate = {published},
type = {article}
}

Project:   B2

Torabi Asr, Fatemeh; Demberg, Vera

But vs. Although under the microscope Inproceedings

Proceedings of the 38th Meeting of the Cognitive Science Society, pp. 366-371, Philadelphia, Pennsylvania, USA, 2016.

Previous experimental studies on concessive connectives have only looked at their local facilitating or predictive effect on discourse relation comprehension and have often viewed them as a class of discourse markers with similar effects. We look into the effect of two connectives, but and although, for inferring contrastive vs. concessive discourse relations to complement previous experimental work on causal inferences. An offline survey on AMTurk and an online eye-tracking-while-reading experiment are conducted to show that even between these two connectives, which mark the same set of relations, interpretations are biased. The bias is consistent with the distribution of the connective across discourse relations. This suggests that an account of discourse connective meaning based on probability distributions can better account for comprehension data than a classic categorical approach, or an approach where closely related connectives only have a core meaning and the rest of the interpretation comes from the discourse arguments.

@inproceedings{Asr2016b,
title = {But vs. Although under the microscope},
author = {Fatemeh Torabi Asr and Vera Demberg},
url = {https://www.semanticscholar.org/paper/But-vs.-Although-under-the-microscope-Asr-Demberg/68be3f7ec0d7642f4371d991fc15471416141dfd},
year = {2016},
date = {2016},
booktitle = {Proceedings of the 38th Meeting of the Cognitive Science Society},
pages = {366-371},
address = {Philadelphia, Pennsylvania, USA},
abstract = {Previous experimental studies on concessive connectives have only looked at their local facilitating or predictive effect on discourse relation comprehension and have often viewed them as a class of discourse markers with similar effects. We look into the effect of two connectives, but and although, for inferring contrastive vs. concessive discourse relations to complement previous experimental work on causal inferences. An offline survey on AMTurk and an online eye-tracking-while-reading experiment are conducted to show that even between these two connectives, which mark the same set of relations, interpretations are biased. The bias is consistent with the distribution of the connective across discourse relations. This suggests that an account of discourse connective meaning based on probability distributions can better account for comprehension data than a classic categorical approach, or an approach where closely related connectives only have a core meaning and the rest of the interpretation comes from the discourse arguments.},
pubstate = {published},
type = {inproceedings}
}

Project:   B2

Rehbein, Ines; Scholman, Merel; Demberg, Vera

Annotating Discourse Relations in Spoken Language: A Comparison of the PDTB and CCR Frameworks Inproceedings

Calzolari, Nicoletta; Choukri, Khalid; Declerck, Thierry; Grobelnik, Marko; Maegaard, Bente; Mariani, Joseph; Moreno, Asuncion; Odijk, Jan; Piperidis, Stelios (Ed.): Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), European Language Resources Association (ELRA), pp. 1039-1046, Portorož, Slovenia, 2016, ISBN 978-2-9517408-9-1.

In discourse relation annotation, there is currently a variety of different frameworks being used, and most of them have been developed and employed mostly on written data. This raises a number of questions regarding interoperability of discourse relation annotation schemes, as well as regarding differences in discourse annotation for written vs. spoken domains. In this paper, we describe our work on annotating two spoken domains from the SPICE Ireland corpus (telephone conversations and broadcast interviews) according to two different discourse annotation schemes, PDTB 3.0 and CCR. We show that annotations in the two schemes can largely be mapped onto one another, and discuss differences in operationalisations of discourse relation schemes which present a challenge to automatic mapping. We also observe systematic differences in the prevalence of implicit discourse relations in spoken data compared to written texts, and find that there are also differences in the types of causal relations between the domains. Finally, we find that PDTB 3.0 addresses many shortcomings of PDTB 2.0 wrt. the annotation of spoken discourse, and suggest further extensions. The new corpus has roughly the size of the CoNLL 2015 Shared Task test set, and we hence hope that it will be a valuable resource for the evaluation of automatic discourse relation labellers.

@inproceedings{REHBEIN16.457,
title = {Annotating Discourse Relations in Spoken Language: A Comparison of the PDTB and CCR Frameworks},
author = {Ines Rehbein and Merel Scholman and Vera Demberg},
editor = {Nicoletta Calzolari and Khalid Choukri and Thierry Declerck and Marko Grobelnik and Bente Maegaard and Joseph Mariani and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
url = {https://aclanthology.org/L16-1165},
year = {2016},
date = {2016},
booktitle = {Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)},
isbn = {978-2-9517408-9-1},
pages = {1039-1046},
publisher = {European Language Resources Association (ELRA)},
address = {Portoro{\v{z}}, Slovenia},
abstract = {In discourse relation annotation, there is currently a variety of different frameworks being used, and most of them have been developed and employed mostly on written data. This raises a number of questions regarding interoperability of discourse relation annotation schemes, as well as regarding differences in discourse annotation for written vs. spoken domains. In this paper, we describe our work on annotating two spoken domains from the SPICE Ireland corpus (telephone conversations and broadcast interviews) according to two different discourse annotation schemes, PDTB 3.0 and CCR. We show that annotations in the two schemes can largely be mapped onto one another, and discuss differences in operationalisations of discourse relation schemes which present a challenge to automatic mapping. We also observe systematic differences in the prevalence of implicit discourse relations in spoken data compared to written texts, and find that there are also differences in the types of causal relations between the domains. Finally, we find that PDTB 3.0 addresses many shortcomings of PDTB 2.0 wrt. the annotation of spoken discourse, and suggest further extensions. The new corpus has roughly the size of the CoNLL 2015 Shared Task test set, and we hence hope that it will be a valuable resource for the evaluation of automatic discourse relation labellers.},
pubstate = {published},
type = {inproceedings}
}

Project:   B2

Rehbein, Ines; Scholman, Merel; Demberg, Vera

Disco-SPICE (Spoken conversations from the SPICE-Ireland corpus annotated with discourse relations) Inproceedings

Annotating discourse relations in spoken language: A comparison of the PDTB and CCR frameworks. Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 16), Portorož, Slovenia, 2016.

The resource contains all texts from the Broadcast interview and Telephone conversation genres from the SPICE-Ireland corpus, annotated with discourse relations according to the PDTB 3.0 and CCR frameworks. Contact person: Merel Scholman

@inproceedings{merel2016,
title = {Disco-SPICE (Spoken conversations from the SPICE-Ireland corpus annotated with discourse relations)},
author = {Ines Rehbein and Merel Scholman and Vera Demberg},
year = {2016},
date = {2016},
booktitle = {Annotating discourse relations in spoken language: A comparison of the PDTB and CCR frameworks. Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 16)},
address = {Portoro{\v{z}}, Slovenia},
abstract = {The resource contains all texts from the Broadcast interview and Telephone conversation genres from the SPICE-Ireland corpus, annotated with discourse relations according to the PDTB 3.0 and CCR frameworks. Contact person: Merel Scholman},
pubstate = {published},
type = {inproceedings}
}

Project:   B2

Demberg, Vera; Sayeed, Asad

The Frequency of Rapid Pupil Dilations as a Measure of Linguistic Processing Difficulty Journal Article

Stamatakis, Emmanuel Andreas (Ed.): PLOS ONE, 11, 2016.

While it has long been known that the pupil reacts to cognitive load, pupil size has received little attention in cognitive research because of its long latency and the difficulty of separating effects of cognitive load from the light reflex or effects due to eye movements. A novel measure, the Index of Cognitive Activity (ICA), relates cognitive effort to the frequency of small rapid dilations of the pupil. We report here on a total of seven experiments which test whether the ICA reliably indexes linguistically induced cognitive load: three experiments in reading (a manipulation of grammatical gender match / mismatch, an experiment of semantic fit, and an experiment comparing locally ambiguous subject versus object relative clauses, all in German), three dual-task experiments with simultaneous driving and spoken language comprehension (using the same manipulations as in the single-task reading experiments), and a visual world experiment comparing the processing of causal versus concessive discourse markers. These experiments are the first to investigate the effect and time course of the ICA in language processing. All of our experiments support the idea that the ICA indexes linguistic processing difficulty. The effects of our linguistic manipulations on the ICA are consistent for reading and auditory presentation. Furthermore, our experiments show that the ICA allows for usage within a multi-task paradigm. Its robustness with respect to eye movements means that it is a valid measure of processing difficulty for usage within the visual world paradigm, which will allow researchers to assess both visual attention and processing difficulty at the same time, using an eye-tracker. We argue that the ICA is indicative of activity in the locus caeruleus area of the brain stem, which has recently also been linked to P600 effects observed in psycholinguistic EEG experiments.

@article{demberg:sayeed:2016:plosone,
title = {The Frequency of Rapid Pupil Dilations as a Measure of Linguistic Processing Difficulty},
author = {Vera Demberg and Asad Sayeed},
editor = {Emmanuel Andreas Stamatakis},
url = {http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4723154/},
doi = {https://doi.org/10.1371/journal.pone.0146194},
year = {2016},
date = {2016},
journal = {PLOS ONE},
volume = {11},
number = {1},
abstract = {While it has long been known that the pupil reacts to cognitive load, pupil size has received little attention in cognitive research because of its long latency and the difficulty of separating effects of cognitive load from the light reflex or effects due to eye movements. A novel measure, the Index of Cognitive Activity (ICA), relates cognitive effort to the frequency of small rapid dilations of the pupil. We report here on a total of seven experiments which test whether the ICA reliably indexes linguistically induced cognitive load: three experiments in reading (a manipulation of grammatical gender match / mismatch, an experiment of semantic fit, and an experiment comparing locally ambiguous subject versus object relative clauses, all in German), three dual-task experiments with simultaneous driving and spoken language comprehension (using the same manipulations as in the single-task reading experiments), and a visual world experiment comparing the processing of causal versus concessive discourse markers. These experiments are the first to investigate the effect and time course of the ICA in language processing. All of our experiments support the idea that the ICA indexes linguistic processing difficulty. The effects of our linguistic manipulations on the ICA are consistent for reading and auditory presentation. Furthermore, our experiments show that the ICA allows for usage within a multi-task paradigm. Its robustness with respect to eye movements means that it is a valid measure of processing difficulty for usage within the visual world paradigm, which will allow researchers to assess both visual attention and processing difficulty at the same time, using an eye-tracker. We argue that the ICA is indicative of activity in the locus caeruleus area of the brain stem, which has recently also been linked to P600 effects observed in psycholinguistic EEG experiments.},
pubstate = {published},
type = {article}
}

Project:   B2

Sayeed, Asad; Hong, Xudong; Demberg, Vera

Roleo: Visualising Thematic Fit Spaces on the Web Inproceedings

Proceedings of ACL-2016 System Demonstrations, Association for Computational Linguistics, pp. 139-144, Berlin, Germany, 2016.

In this paper, we present Roleo, a web tool for visualizing the vector spaces generated by the evaluation of distributional memory (DM) models over thematic fit judgements. A thematic fit judgement is a rating of the selectional preference of a verb for an argument that fills a given thematic role. The DM approach to thematic fit judgements involves the construction of a sub-space in which a prototypical role-filler can be built for comparison to the noun being judged. We describe a publicly-accessible web tool that allows for querying and exploring these spaces as well as a technique for visualizing thematic fit sub-spaces efficiently for web use.

@inproceedings{sayeed-hong-demberg:2016:P16-4,
title = {Roleo: Visualising Thematic Fit Spaces on the Web},
author = {Asad Sayeed and Xudong Hong and Vera Demberg},
url = {https://www.researchgate.net/publication/306093691_Roleo_Visualising_Thematic_Fit_Spaces_on_the_Web},
year = {2016},
date = {2016-08-01},
booktitle = {Proceedings of ACL-2016 System Demonstrations},
pages = {139-144},
publisher = {Association for Computational Linguistics},
address = {Berlin, Germany},
abstract = {In this paper, we present Roleo, a web tool for visualizing the vector spaces generated by the evaluation of distributional memory (DM) models over thematic fit judgements. A thematic fit judgement is a rating of the selectional preference of a verb for an argument that fills a given thematic role. The DM approach to thematic fit judgements involves the construction of a sub-space in which a prototypical role-filler can be built for comparison to the noun being judged. We describe a publicly-accessible web tool that allows for querying and exploring these spaces as well as a technique for visualizing thematic fit sub-spaces efficiently for web use.},
pubstate = {published},
type = {inproceedings}
}

Project:   B2

Pusse, Florian; Sayeed, Asad; Demberg, Vera

LingoTurk: managing crowdsourced tasks for psycholinguistics Inproceedings

Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, Association for Computational Linguistics, pp. 57-61, San Diego, California, 2016.

LingoTurk is an open-source, freely available crowdsourcing client/server system aimed primarily at psycholinguistic experimentation where custom and specialized user interfaces are required but not supported by popular crowdsourcing task management platforms. LingoTurk enables user-friendly local hosting of experiments as well as condition management and participant exclusion. It is compatible with Amazon Mechanical Turk and Prolific Academic. New experiments can easily be set up via the Play Framework and the LingoTurk API, while multiple experiments can be managed from a single system.

@inproceedings{pusse-sayeed-demberg:2016:N16-3,
title = {LingoTurk: managing crowdsourced tasks for psycholinguistics},
author = {Florian Pusse and Asad Sayeed and Vera Demberg},
url = {http://www.aclweb.org/anthology/N16-3012},
year = {2016},
date = {2016-06-01},
booktitle = {Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations},
pages = {57-61},
publisher = {Association for Computational Linguistics},
address = {San Diego, California},
abstract = {LingoTurk is an open-source, freely available crowdsourcing client/server system aimed primarily at psycholinguistic experimentation where custom and specialized user interfaces are required but not supported by popular crowdsourcing task management platforms. LingoTurk enables user-friendly local hosting of experiments as well as condition management and participant exclusion. It is compatible with Amazon Mechanical Turk and Prolific Academic. New experiments can easily be set up via the Play Framework and the LingoTurk API, while multiple experiments can be managed from a single system.},
pubstate = {published},
type = {inproceedings}
}

Project:   B2
