Publications

Roth, Michael; Thater, Stefan; Ostermann, Simon; Modi, Ashutosh; Pinkal, Manfred

MCScript: A Novel Dataset for Assessing Machine Comprehension Using Script Knowledge Inproceedings

Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC 2018), European Language Resources Association (ELRA), Miyazaki, Japan, 2018.

We introduce a large dataset of narrative texts and questions about these texts, intended to be used in a machine comprehension task that requires reasoning using commonsense knowledge. Our dataset complements similar datasets in that we focus on stories about everyday activities, such as going to the movies or working in the garden, and in that the questions require commonsense knowledge, or more specifically, script knowledge, to be answered. We show that our mode of data collection via crowdsourcing results in a substantial number of such inference questions. The dataset forms the basis of a shared task on commonsense and script knowledge organized at SemEval 2018 and provides challenging test cases for the broader natural language understanding community.

@inproceedings{MCScript,
title = {MCScript: A Novel Dataset for Assessing Machine Comprehension Using Script Knowledge},
author = {Michael Roth and Stefan Thater and Simon Ostermann and Ashutosh Modi and Manfred Pinkal},
url = {https://aclanthology.org/L18-1564},
year = {2018},
date = {2018},
booktitle = {Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC 2018)},
publisher = {European Language Resources Association (ELRA)},
address = {Miyazaki, Japan},
abstract = {We introduce a large dataset of narrative texts and questions about these texts, intended to be used in a machine comprehension task that requires reasoning using commonsense knowledge. Our dataset complements similar datasets in that we focus on stories about everyday activities, such as going to the movies or working in the garden, and in that the questions require commonsense knowledge, or more specifically, script knowledge, to be answered. We show that our mode of data collection via crowdsourcing results in a substantial number of such inference questions. The dataset forms the basis of a shared task on commonsense and script knowledge organized at SemEval 2018 and provides challenging test cases for the broader natural language understanding community.},
pubstate = {published},
type = {inproceedings}
}

Project:   A3

Roth, Michael; Thater, Stefan; Ostermann, Simon; Modi, Ashutosh; Pinkal, Manfred

SemEval-2018 Task 11: Machine Comprehension using Commonsense Knowledge Inproceedings

Proceedings of the 12th International Workshop on Semantic Evaluation, Association for Computational Linguistics, pp. 747-757, New Orleans, Louisiana, 2018.

This report summarizes the results of the SemEval 2018 task on machine comprehension using commonsense knowledge. For this machine comprehension task, we created a new corpus, MCScript. It contains a high number of questions that require commonsense knowledge for finding the correct answer. 11 teams from 4 different countries participated in this shared task, most of them using neural approaches. The best-performing system achieves an accuracy of 83.95%, outperforming the baselines by a large margin but still falling far short of the human upper bound of 98%.

@inproceedings{SemEval2018Task11,
title = {SemEval-2018 Task 11: Machine Comprehension using Commonsense Knowledge},
author = {Michael Roth and Stefan Thater and Simon Ostermann and Ashutosh Modi and Manfred Pinkal},
url = {https://aclanthology.org/S18-1119},
doi = {https://doi.org/10.18653/v1/S18-1119},
year = {2018},
date = {2018},
booktitle = {Proceedings of the 12th International Workshop on Semantic Evaluation},
pages = {747-757},
publisher = {Association for Computational Linguistics},
address = {New Orleans, Louisiana},
abstract = {This report summarizes the results of the SemEval 2018 task on machine comprehension using commonsense knowledge. For this machine comprehension task, we created a new corpus, MCScript. It contains a high number of questions that require commonsense knowledge for finding the correct answer. 11 teams from 4 different countries participated in this shared task, most of them using neural approaches. The best-performing system achieves an accuracy of 83.95%, outperforming the baselines by a large margin but still falling far short of the human upper bound of 98%.},
pubstate = {published},
type = {inproceedings}
}

Project:   A3

Roth, Michael; Thater, Stefan; Ostermann, Simon; Pinkal, Manfred

Aligning Script Events with Narrative Texts Inproceedings

Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017), Association for Computational Linguistics, Vancouver, Canada, 2017.

Script knowledge plays a central role in text understanding and is relevant for a variety of downstream tasks. In this paper, we consider two recent datasets which provide a rich and general representation of script events in terms of paraphrase sets.

We introduce the task of mapping event mentions in narrative texts to such script event types, and present a model for this task that exploits rich linguistic representations as well as information on temporal ordering. The results of our experiments demonstrate that this complex task is indeed feasible.

@inproceedings{ostermann-EtAl:2017:starSEM,
title = {Aligning Script Events with Narrative Texts},
author = {Michael Roth and Stefan Thater and Simon Ostermann and Manfred Pinkal},
url = {http://www.aclweb.org/anthology/S17-1016},
year = {2017},
date = {2017-10-17},
booktitle = {Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017)},
publisher = {Association for Computational Linguistics},
address = {Vancouver, Canada},
abstract = {Script knowledge plays a central role in text understanding and is relevant for a variety of downstream tasks. In this paper, we consider two recent datasets which provide a rich and general representation of script events in terms of paraphrase sets. We introduce the task of mapping event mentions in narrative texts to such script event types, and present a model for this task that exploits rich linguistic representations as well as information on temporal ordering. The results of our experiments demonstrate that this complex task is indeed feasible.},
pubstate = {published},
type = {inproceedings}
}

Project:   A3

Nguyen, Dai Quoc; Nguyen, Dat Quoc; Modi, Ashutosh; Thater, Stefan; Pinkal, Manfred

A Mixture Model for Learning Multi-Sense Word Embeddings Inproceedings

Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017), Association for Computational Linguistics, pp. 121-127, Vancouver, Canada, 2017.

Word embeddings are now a standard technique for inducing meaning representations for words. For getting good representations, it is important to take into account different senses of a word. In this paper, we propose a mixture model for learning multi-sense word embeddings.

Our model generalizes previous work in that it allows inducing different weights for different senses of a word. The experimental results show that our model outperforms previous models on standard evaluation tasks.
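
To make the idea concrete, here is a minimal sketch in Python (with hypothetical vectors; not the authors' implementation) of representing a word as a mixture of sense vectors whose weights are induced from the context:

import numpy as np

def mixture_embedding(sense_vectors, context_vector):
    # Weight each sense by its softmax-normalized similarity to the
    # context, then return the weighted mixture of the sense vectors.
    scores = sense_vectors @ context_vector
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ sense_vectors

# Hypothetical example: a word with two senses, disambiguated by context.
senses = np.array([[1.0, 0.0], [0.0, 1.0]])
context = np.array([0.9, 0.1])
print(mixture_embedding(senses, context))  # mixture dominated by sense 1

The softmax weighting here is an assumption for illustration; the paper's contribution is precisely to learn such sense weights during training.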

@inproceedings{nguyen-EtAl:2017:starSEM,
title = {A Mixture Model for Learning Multi-Sense Word Embeddings},
author = {Dai Quoc Nguyen and Dat Quoc Nguyen and Ashutosh Modi and Stefan Thater and Manfred Pinkal},
url = {http://www.aclweb.org/anthology/S17-1015},
year = {2017},
date = {2017},
booktitle = {Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017)},
pages = {121-127},
publisher = {Association for Computational Linguistics},
address = {Vancouver, Canada},
abstract = {Word embeddings are now a standard technique for inducing meaning representations for words. For getting good representations, it is important to take into account different senses of a word. In this paper, we propose a mixture model for learning multi-sense word embeddings. Our model generalizes previous work in that it allows inducing different weights for different senses of a word. The experimental results show that our model outperforms previous models on standard evaluation tasks.},
pubstate = {published},
type = {inproceedings}
}

Projects:   A2 A3

Nguyen, Dai Quoc; Nguyen, Dat Quoc; Chu, Cuong Xuan; Thater, Stefan; Pinkal, Manfred

Sequence to Sequence Learning for Event Prediction Inproceedings

Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), Asian Federation of Natural Language Processing, pp. 37-42, Taipei, Taiwan, 2017.

This paper presents an approach to the task of predicting an event description from a preceding sentence in a text. Our approach explores sequence-to-sequence learning using a bidirectional multi-layer recurrent neural network. Our approach substantially outperforms previous work in terms of the BLEU score on two datasets derived from WikiHow and DeScript respectively.

Since the BLEU score is not easy to interpret as a measure of event prediction, we complement our study with a second evaluation that exploits the rich linguistic annotation of gold paraphrase sets of events.
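
As an illustration of the BLEU-based evaluation mentioned above (a sketch with hypothetical event descriptions, not the authors' evaluation script), sentence-level BLEU can be computed with NLTK as follows:

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ['wash', 'the', 'dishes', 'in', 'the', 'sink']   # gold next event
hypothesis = ['wash', 'dishes', 'in', 'the', 'sink']         # model prediction

# Smoothing avoids zero scores on short event descriptions.
score = sentence_bleu([reference], hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(round(score, 3))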

@inproceedings{nguyen-EtAl:2017:I17-2,
title = {Sequence to Sequence Learning for Event Prediction},
author = {Dai Quoc Nguyen and Dat Quoc Nguyen and Cuong Xuan Chu and Stefan Thater and Manfred Pinkal},
url = {http://www.aclweb.org/anthology/I17-2007},
year = {2017},
date = {2017-10-17},
booktitle = {Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)},
pages = {37-42},
publisher = {Asian Federation of Natural Language Processing},
address = {Taipei, Taiwan},
abstract = {This paper presents an approach to the task of predicting an event description from a preceding sentence in a text. Our approach explores sequence-to-sequence learning using a bidirectional multi-layer recurrent neural network. Our approach substantially outperforms previous work in terms of the BLEU score on two datasets derived from WikiHow and DeScript respectively. Since the BLEU score is not easy to interpret as a measure of event prediction, we complement our study with a second evaluation that exploits the rich linguistic annotation of gold paraphrase sets of events.},
pubstate = {published},
type = {inproceedings}
}

Projects:   A2 A3

Zarcone, Alessandra; van Schijndel, Marten; Vogels, Jorrig; Demberg, Vera

Salience and attention in surprisal-based accounts of language processing Journal Article

Frontiers in Psychology, 7, 2016, ISSN 1664-1078.

The notion of salience has been singled out as the explanatory factor for a diverse range of linguistic phenomena. In particular, perceptual salience (e.g., visual salience of objects in the world, acoustic prominence of linguistic sounds) and semantic-pragmatic salience (e.g., prominence of recently mentioned or topical referents) have been shown to influence language comprehension and production. A different line of research has sought to account for behavioral correlates of cognitive load during comprehension as well as for certain patterns in language usage using information-theoretic notions, such as surprisal. Surprisal and salience both affect language processing at different levels, but the relationship between the two has not been adequately elucidated, and the question of whether salience can be reduced to surprisal / predictability is still open. Our review identifies two main challenges in addressing this question: terminological inconsistency and lack of integration between high and low levels of representations in salience-based accounts and surprisal-based accounts. We capitalize upon work in visual cognition in order to orient ourselves in surveying the different facets of the notion of salience in linguistics and their relation with models of surprisal. We find that work on salience highlights aspects of linguistic communication that models of surprisal tend to overlook, namely the role of attention and relevance to current goals, and we argue that the Predictive Coding framework provides a unified view which can account for the role played by attention and predictability at different levels of processing and which can clarify the interplay between low and high levels of processes and between predictability-driven expectation and attention-driven focus.
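
For readers outside psycholinguistics, surprisal has a standard information-theoretic definition (general background, not specific to this article): the surprisal of a word is its negative log probability given the preceding context,

S(w_i) = -\log_2 P(w_i \mid w_1, \ldots, w_{i-1}),

so less predictable words carry more information and, on surprisal-based accounts, incur greater processing cost.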

@article{Zarcone2016,
title = {Salience and attention in surprisal-based accounts of language processing},
author = {Alessandra Zarcone and Marten van Schijndel and Jorrig Vogels and Vera Demberg},
url = {http://www.frontiersin.org/language_sciences/10.3389/fpsyg.2016.00844/abstract},
doi = {https://doi.org/10.3389/fpsyg.2016.00844},
year = {2016},
date = {2016},
journal = {Frontiers in Psychology},
volume = {7},
number = {844},
abstract = {The notion of salience has been singled out as the explanatory factor for a diverse range of linguistic phenomena. In particular, perceptual salience (e.g., visual salience of objects in the world, acoustic prominence of linguistic sounds) and semantic-pragmatic salience (e.g., prominence of recently mentioned or topical referents) have been shown to influence language comprehension and production. A different line of research has sought to account for behavioral correlates of cognitive load during comprehension as well as for certain patterns in language usage using information-theoretic notions, such as surprisal. Surprisal and salience both affect language processing at different levels, but the relationship between the two has not been adequately elucidated, and the question of whether salience can be reduced to surprisal / predictability is still open. Our review identifies two main challenges in addressing this question: terminological inconsistency and lack of integration between high and low levels of representations in salience-based accounts and surprisal-based accounts. We capitalize upon work in visual cognition in order to orient ourselves in surveying the different facets of the notion of salience in linguistics and their relation with models of surprisal. We find that work on salience highlights aspects of linguistic communication that models of surprisal tend to overlook, namely the role of attention and relevance to current goals, and we argue that the Predictive Coding framework provides a unified view which can account for the role played by attention and predictability at different levels of processing and which can clarify the interplay between low and high levels of processes and between predictability-driven expectation and attention-driven focus.},
pubstate = {published},
type = {article}
}

Projects:   A3 A4

Tilk, Ottokar; Demberg, Vera; Sayeed, Asad; Klakow, Dietrich; Thater, Stefan

Event participant modelling with neural networks Inproceedings

Conference on Empirical Methods in Natural Language Processing, Austin, Texas, USA, 2016.

A common problem in cognitive modelling is lack of access to accurate broad-coverage models of event-level surprisal. As shown in, e.g., Bicknell et al. (2010), event-level knowledge does affect human expectations for verbal arguments. For example, the model should be able to predict that mechanics are likely to check tires, while journalists are more likely to check typos. Similarly, we would like to predict what locations are likely for playing football or playing flute in order to estimate the surprisal of actually-encountered locations. Furthermore, such a model can be used to provide a probability distribution over fillers for a thematic role which is not mentioned in the text at all.

To this end, we train two neural network models (an incremental one and a non-incremental one) on large amounts of automatically role-labelled text. Our models are probabilistic and can handle several roles at once, which also enables them to learn interactions between different role fillers. Evaluation shows a drastic improvement over current state-of-the-art systems on modelling human thematic fit judgements, and we demonstrate via a sentence similarity task that the system learns highly useful embeddings.
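
A minimal sketch of the kind of prediction involved (hypothetical vectors and names in Python; the paper trains neural models, which this sketch does not reproduce): score candidate fillers for a thematic role and normalize the scores into a probability distribution.

import numpy as np

# Hypothetical embedding for a verb-role query, e.g. 'check' + PATIENT.
query = np.array([0.8, 0.1, 0.3])
candidates = {
    'tires': np.array([0.9, 0.0, 0.2]),
    'typos': np.array([0.1, 0.9, 0.1]),
}

scores = np.array([vec @ query for vec in candidates.values()])
probs = np.exp(scores) / np.exp(scores).sum()  # softmax -> P(filler | verb, role)
for word, p in zip(candidates, probs):
    print(word, round(float(p), 3))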

@inproceedings{Tilk2016,
title = {Event participant modelling with neural networks},
author = {Ottokar Tilk and Vera Demberg and Asad Sayeed and Dietrich Klakow and Stefan Thater},
url = {https://www.semanticscholar.org/paper/Event-participant-modelling-with-neural-networks-Tilk-Demberg/d08d663d7795c76bb008f539b1ac7caf8a9ef26c},
year = {2016},
date = {2016},
booktitle = {Conference on Empirical Methods in Natural Language Processing},
address = {Austin, Texas, USA},
abstract = {A common problem in cognitive modelling is lack of access to accurate broad-coverage models of event-level surprisal. As shown in, e.g., Bicknell et al. (2010), event-level knowledge does affect human expectations for verbal arguments. For example, the model should be able to predict that mechanics are likely to check tires, while journalists are more likely to check typos. Similarly, we would like to predict what locations are likely for playing football or playing flute in order to estimate the surprisal of actually-encountered locations. Furthermore, such a model can be used to provide a probability distribution over fillers for a thematic role which is not mentioned in the text at all. To this end, we train two neural network models (an incremental one and a non-incremental one) on large amounts of automatically role-labelled text. Our models are probabilistic and can handle several roles at once, which also enables them to learn interactions between different role fillers. Evaluation shows a drastic improvement over current state-of-the-art systems on modelling human thematic fit judgements, and we demonstrate via a sentence similarity task that the system learns highly useful embeddings.},
pubstate = {published},
type = {inproceedings}
}

Project:   A3

Modi, Ashutosh; Titov, Ivan; Demberg, Vera; Sayeed, Asad; Pinkal, Manfred

Modeling Semantic Expectations: Using Script Knowledge for Referent Prediction Journal Article

Transactions of the Association for Computational Linguistics, MIT Press, pp. 31-44, Cambridge, MA, 2016.

Recent research in psycholinguistics has provided increasing evidence that humans predict upcoming content. Prediction also affects perception and might be a key to robustness in human language processing. In this paper, we investigate the factors that affect human prediction by building a computational model that can predict upcoming discourse referents based on linguistic knowledge alone vs. linguistic knowledge jointly with common-sense knowledge in the form of scripts. We find that script knowledge significantly improves model estimates of human predictions. In a second study, we test the highly controversial hypothesis that predictability influences referring expression type but do not find evidence for such an effect.

@article{ashutoshTacl2016,
title = {Modeling Semantic Expectations: Using Script Knowledge for Referent Prediction},
author = {Ashutosh Modi and Ivan Titov and Vera Demberg and Asad Sayeed and Manfred Pinkal},
url = {https://aclanthology.org/Q17-1003},
doi = {https://doi.org/10.1162/tacl_a_00044},
year = {2016},
date = {2016},
journal = {Transactions of the Association for Computational Linguistics},
pages = {31-44},
publisher = {MIT Press},
address = {Cambridge, MA},
abstract = {Recent research in psycholinguistics has provided increasing evidence that humans predict upcoming content. Prediction also affects perception and might be a key to robustness in human language processing. In this paper, we investigate the factors that affect human prediction by building a computational model that can predict upcoming discourse referents based on linguistic knowledge alone vs. linguistic knowledge jointly with common-sense knowledge in the form of scripts. We find that script knowledge significantly improves model estimates of human predictions. In a second study, we test the highly controversial hypothesis that predictability influences referring expression type but do not find evidence for such an effect.},
pubstate = {published},
type = {article}
}

Project:   A3

Modi, Ashutosh

Event Embeddings for Semantic Script Modeling Inproceedings

Proceedings of the Conference on Computational Natural Language Learning (CoNLL), Berlin, Germany, 2016.

Semantic scripts are a conceptual representation that defines how events are organized into higher-level activities. Practically all previous approaches to inducing script knowledge from text have relied on count-based techniques (e.g., generative models) and have not attempted to model events compositionally. In this work, we introduce a neural network model that relies on distributed compositional representations of events. The model captures statistical dependencies between events in a scenario, overcomes some of the shortcomings of previous approaches (e.g., by dealing more effectively with data sparsity) and outperforms count-based counterparts on the narrative cloze task.
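
For illustration, a toy sketch of the compositional idea (random vectors and additive composition are assumptions here; the paper learns the composition function): compose predicate and argument embeddings into an event vector and rank candidate next events, as in the narrative cloze setting.

import numpy as np

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=4) for w in
       ['enter', 'order', 'eat', 'pay', 'customer', 'food', 'bill']}

def event_vector(predicate, arguments):
    # Simple additive composition stands in for the learned composition.
    return emb[predicate] + sum(emb[a] for a in arguments)

context = event_vector('order', ['customer', 'food'])
candidates = {'eat': ['customer', 'food'], 'pay': ['customer', 'bill']}
ranked = sorted(candidates,
                key=lambda p: -(event_vector(p, candidates[p]) @ context))
print(ranked)  # candidate next events, highest-scoring first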

@inproceedings{modi:CONLL2016,
title = {Event Embeddings for Semantic Script Modeling},
author = {Ashutosh Modi},
url = {https://www.researchgate.net/publication/306093411_Event_Embeddings_for_Semantic_Script_Modeling},
year = {2016},
date = {2016-10-17},
booktitle = {Proceedings of the Conference on Computational Natural Language Learning (CoNLL)},
address = {Berlin, Germany},
abstract = {Semantic scripts are a conceptual representation that defines how events are organized into higher-level activities. Practically all previous approaches to inducing script knowledge from text have relied on count-based techniques (e.g., generative models) and have not attempted to model events compositionally. In this work, we introduce a neural network model that relies on distributed compositional representations of events. The model captures statistical dependencies between events in a scenario, overcomes some of the shortcomings of previous approaches (e.g., by dealing more effectively with data sparsity) and outperforms count-based counterparts on the narrative cloze task.},
pubstate = {published},
type = {inproceedings}
}

Project:   A3

Modi, Ashutosh; Anikina, Tatjana; Ostermann, Simon; Pinkal, Manfred

InScript: Narrative texts annotated with script information Inproceedings

Calzolari, Nicoletta; Choukri, Khalid; Declerck, Thierry; Grobelnik, Marko; Maegaard, Bente; Mariani, Joseph; Moreno, Asuncion; Odijk, Jan; Piperidis, Stelios (Ed.): Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), European Language Resources Association (ELRA), pp. 3485-3493, Portorož, Slovenia, 2016, ISBN 978-2-9517408-9-1.

This paper presents the InScript corpus (Narrative Texts Instantiating Script structure). InScript is a corpus of 1,000 stories centered around 10 different scenarios. Verbs and noun phrases are annotated with event and participant types, respectively. Additionally, the text is annotated with coreference information. The corpus shows rich lexical variation and will serve as a unique resource for the study of the role of script knowledge in natural language processing.

@inproceedings{MODI16.352,
title = {InScript: Narrative texts annotated with script information},
author = {Ashutosh Modi and Tatjana Anikina and Simon Ostermann and Manfred Pinkal},
editor = {Nicoletta Calzolari and Khalid Choukri and Thierry Declerck and Marko Grobelnik and Bente Maegaard and Joseph Mariani and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
url = {https://aclanthology.org/L16-1555},
year = {2016},
date = {2016-10-17},
booktitle = {Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)},
isbn = {978-2-9517408-9-1},
pages = {3485-3493},
publisher = {European Language Resources Association (ELRA)},
address = {Portoro{\v{z}}, Slovenia},
abstract = {This paper presents the InScript corpus (Narrative Texts Instantiating Script structure). InScript is a corpus of 1,000 stories centered around 10 different scenarios. Verbs and noun phrases are annotated with event and participant types, respectively. Additionally, the text is annotated with coreference information. The corpus shows rich lexical variation and will serve as a unique resource for the study of the role of script knowledge in natural language processing.},
pubstate = {published},
type = {inproceedings}
}

Project:   A3

Zarcone, Alessandra; Padó, Sebastian; Lenci, Alessandro

Same Same but Different: Type and Typicality in a Distributional Model of Complement Coercion Inproceedings

Word Structure and Word Usage. Proceedings of the NetWordS Final Conference. Pisa, March 30-April 1, 2015, pp. 91-94, Pisa, Italy, 2015.

We aim to model the results from a self-paced reading experiment which tested the effect of semantic type clash and typicality on the processing of German complement coercion. We present two distributional semantic models to test whether they can capture the effects of both type and typicality found in the psycholinguistic study. We show that one of the models, without explicitly representing type information, can account for the effects of both type and typicality in complement coercion.

@inproceedings{zarcone2015same,
title = {Same Same but Different: Type and Typicality in a Distributional Model of Complement Coercion},
author = {Alessandra Zarcone and Sebastian Padó and Alessandro Lenci},
url = {https://www.researchgate.net/publication/282740292_Same_same_but_different_Type_and_typicality_in_a_distributional_model_of_complement_coercion},
year = {2015},
date = {2015},
booktitle = {Word Structure and Word Usage. Proceedings of the NetWordS Final Conference. Pisa, March 30-April 1, 2015},
pages = {91-94},
address = {Pisa, Italy},
abstract = {We aim to model the results from a self-paced reading experiment which tested the effect of semantic type clash and typicality on the processing of German complement coercion. We present two distributional semantic models to test whether they can capture the effects of both type and typicality found in the psycholinguistic study. We show that one of the models, without explicitly representing type information, can account for the effects of both type and typicality in complement coercion.},
pubstate = {published},
type = {inproceedings}
}

Projects:   A2 A3

Demberg, Vera; Hoffmann, Jörg; Howcroft, David M.; Klakow, Dietrich; Torralba, Álvaro

Search Challenges in Natural Language Generation with Complex Optimization Objectives Journal Article

KI - Künstliche Intelligenz, Special Issue on Companion Technologies, Springer Berlin Heidelberg, 2015, ISSN 1610-1987.

Automatic natural language generation (NLG) is a difficult problem even when merely trying to come up with natural-sounding utterances. Ubiquitous applications, in particular companion technologies, pose the additional challenge of flexible adaptation to a user or a situation. This requires optimizing complex objectives such as information density, in combinatorial search spaces described using declarative input languages. We believe that AI search and planning is a natural match for these problems, and could substantially contribute to solving them effectively. We illustrate this using a concrete example NLG framework, give a summary of the relevant optimization objectives, and provide an initial list of research challenges.

@article{demberg:hoffmann:ki-2015,
title = {Search Challenges in Natural Language Generation with Complex Optimization Objectives},
author = {Vera Demberg and J{\"o}rg Hoffmann and David M. Howcroft and Dietrich Klakow and {\'A}lvaro Torralba},
url = {https://link.springer.com/article/10.1007/s13218-015-0409-5},
year = {2015},
date = {2015},
journal = {KI - K{\"u}nstliche Intelligenz, Special Issue on Companion Technologies},
publisher = {Springer Berlin Heidelberg},
abstract = {Automatic natural language generation (NLG) is a difficult problem even when merely trying to come up with natural-sounding utterances. Ubiquitous applications, in particular companion technologies, pose the additional challenge of flexible adaptation to a user or a situation. This requires optimizing complex objectives such as information density, in combinatorial search spaces described using declarative input languages. We believe that AI search and planning is a natural match for these problems, and could substantially contribute to solving them effectively. We illustrate this using a concrete example NLG framework, give a summary of the relevant optimization objectives, and provide an initial list of research challenges.},
pubstate = {published},
type = {article}
}

Project:   A3

Kravtchenko, Ekaterina; Demberg, Vera

Semantically Underinformative Utterances Trigger Pragmatic Inferences Proceeding

Annual Conference of the Cognitive Science Society (CogSci): Mind, Technology, and Society, Pasadena Convention Center, Pasadena, California, 2015.

Most theories of pragmatics and language processing predict that speakers avoid informationally redundant utterances. From a processing standpoint, it remains unclear what happens when listeners encounter such utterances, and how they interpret them. We argue that uninformative utterances can trigger pragmatic inferences, which increase utterance utility in line with listener expectations. In this study, we look at utterances that refer to stereotyped event sequences describing common activities (scripts). The literature on processing of event sequences shows that people automatically infer component actions once a script is ‘invoked.’ We demonstrate that when comprehenders encounter utterances describing events that can be easily inferred from prior context, they interpret them as signifying that the event conveys new, unstated information. We also suggest that formal models of language comprehension would have difficulty in accurately estimating the predictability or potential processing cost incurred by such utterances.

@proceeding{kravtchenko:demberg,
title = {Semantically Underinformative Utterances Trigger Pragmatic Inferences},
author = {Ekaterina Kravtchenko and Vera Demberg},
url = {https://www.semanticscholar.org/paper/Semantically-underinformative-utterances-trigger-Kravtchenko-Demberg/33256a5fca918eef5de8998db5a695d9bced5975},
year = {2015},
date = {2015-10-17},
booktitle = {Annual Conference of the Cognitive Science Society (CogSci)},
address = {Pasadena Convention Center, Pasadena, California},
abstract = {Most theories of pragmatics and language processing predict that speakers avoid informationally redundant utterances. From a processing standpoint, it remains unclear what happens when listeners encounter such utterances, and how they interpret them. We argue that uninformative utterances can trigger pragmatic inferences, which increase utterance utility in line with listener expectations. In this study, we look at utterances that refer to stereotyped event sequences describing common activities (scripts). The literature on processing of event sequences shows that people automatically infer component actions once a script is ‘invoked.’ We demonstrate that when comprehenders encounter utterances describing events that can be easily inferred from prior context, they interpret them as signifying that the event conveys new, unstated information. We also suggest that formal models of language comprehension would have difficulty in accurately estimating the predictability or potential processing cost incurred by such utterances.},
pubstate = {published},
type = {proceeding}
}

Project:   A3

Batiukova, Olga; Bertinetto, Pier Marco; Lenci, Alessandro; Zarcone, Alessandra

Identifying Actional Features Through Semantic Priming: Cross-Romance Comparison Incollection

Taming the TAME systems. Cahiers Chronos 27, Rodopi, pp. 161-187, Amsterdam/Philadelphia, 2015.

This paper reports four semantic priming experiments in Italian and Spanish, whose goal was to verify the psychological reality of two aspectual features, resultativity and durativity. In the durativity task, participants were asked whether the verb referred to a durable situation; in the resultativity task, whether it denoted a situation with a clear outcome. The results show that both features are involved in online processing of verb meaning: achievements ([+resultative, -durative]) and activities ([-resultative, +durative]) were processed faster in certain priming contexts. The priming patterns in the two Romance languages present some striking similarities (only achievements were primed in the resultativity task) alongside some intriguing differences, and contrast interestingly with the behaviour of another language tested, Russian, whose aspectual system differs in significant ways.

@incollection{batiukova2015identifying,
title = {Identifying Actional Features Through Semantic Priming: Cross-Romance Comparison},
author = {Olga Batiukova and Pier Marco Bertinetto and Alessandro Lenci and Alessandra Zarcone},
url = {https://brill.com/display/book/edcoll/9789004292772/B9789004292772-s010.xml},
year = {2015},
date = {2015},
booktitle = {Taming the TAME systems. Cahiers Chronos 27},
pages = {161-187},
publisher = {Rodopi},
address = {Amsterdam/Philadelphia},
abstract = {This paper reports four semantic priming experiments in Italian and Spanish, whose goal was to verify the psychological reality of two aspectual features, resultativity and durativity. In the durativity task, participants were asked whether the verb referred to a durable situation; in the resultativity task, whether it denoted a situation with a clear outcome. The results show that both features are involved in online processing of verb meaning: achievements ([+resultative, -durative]) and activities ([-resultative, +durative]) were processed faster in certain priming contexts. The priming patterns in the two Romance languages present some striking similarities (only achievements were primed in the resultativity task) alongside some intriguing differences, and contrast interestingly with the behaviour of another language tested, Russian, whose aspectual system differs in significant ways.},
pubstate = {published},
type = {incollection}
}

Projects:   A2 A3

Rudinger, Rachel; Demberg, Vera; Modi, Ashutosh; Van Durme, Benjamin; Pinkal, Manfred

Learning to Predict Script Events from Domain-Specific Text Journal Article

Lexical and Computational Semantics (*SEM 2015), pp. 205-210, 2015.

The automatic induction of scripts (Schank and Abelson, 1977) has been the focus of many recent works. In this paper, we employ a variety of these methods to learn Schank and Abelson’s canonical restaurant script, using a novel dataset of restaurant narratives we have compiled from a website called “Dinners from Hell.” Our models learn narrative chains, script-like structures that we evaluate with the “narrative cloze” task (Chambers and Jurafsky, 2008).
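
The narrative cloze evaluation mentioned above can be stated compactly (a schematic Python sketch; score_fn stands in for any of the chain models compared in the paper): hold out one event from a chain and record the rank the model assigns it among all candidates.

def cloze_rank(chain, held_out_index, vocabulary, score_fn):
    # Remove the held-out event, then rank every candidate event in the
    # vocabulary by the model's score given the remaining chain.
    context = chain[:held_out_index] + chain[held_out_index + 1:]
    gold = chain[held_out_index]
    ranked = sorted(vocabulary, key=lambda e: -score_fn(context, e))
    return ranked.index(gold) + 1  # 1 = best rank; assumes gold is in vocabulary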

@article{rudinger2015learning,
title = {Learning to Predict Script Events from Domain-Specific Text},
author = {Rachel Rudinger and Vera Demberg and Ashutosh Modi and Benjamin Van Durme and Manfred Pinkal},
url = {http://www.aclweb.org/anthology/S15-1024},
year = {2015},
date = {2015},
journal = {Lexical and Computational Semantics (*SEM 2015)},
pages = {205-210},
abstract = {The automatic induction of scripts (Schank and Abelson, 1977) has been the focus of many recent works. In this paper, we employ a variety of these methods to learn Schank and Abelson’s canonical restaurant script, using a novel dataset of restaurant narratives we have compiled from a website called “Dinners from Hell.” Our models learn narrative chains, script-like structures that we evaluate with the “narrative cloze” task (Chambers and Jurafsky, 2008).},
pubstate = {published},
type = {article}
}

Project:   A3

White, Michael; Howcroft, David M.

Inducing Clause-Combining Rules: A Case Study with the SPaRKy Restaurant Corpus Inproceedings

Proceedings of the 15th European Workshop on Natural Language Generation, Association for Computational Linguistics, Brighton, England, UK, 2015.

We describe an algorithm for inducing clause-combining rules for use in a traditional natural language generation architecture. An experiment pairing lexicalized text plans from the SPaRKy Restaurant Corpus with logical forms obtained by parsing the corresponding sentences demonstrates that the approach is able to learn clause-combining operations which have essentially the same coverage as those used in the SPaRKy Restaurant Corpus. This paper fills a gap in the literature, showing that it is possible to learn microplanning rules for both aggregation and discourse connective insertion, an important step towards ameliorating the knowledge acquisition bottleneck for NLG systems that produce texts with rich discourse structures using traditional architectures.

@inproceedings{white:howcroft:enlg-2015,
title = {Inducing Clause-Combining Rules: A Case Study with the SPaRKy Restaurant Corpus},
author = {Michael White and David M. Howcroft},
url = {http://www.aclweb.org/anthology/W15-4704},
year = {2015},
date = {2015},
booktitle = {Proceedings of the 15th European Workshop on Natural Language Generation},
publisher = {Association for Computational Linguistics},
address = {Brighton, England, UK},
abstract = {We describe an algorithm for inducing clause-combining rules for use in a traditional natural language generation architecture. An experiment pairing lexicalized text plans from the SPaRKy Restaurant Corpus with logical forms obtained by parsing the corresponding sentences demonstrates that the approach is able to learn clause-combining operations which have essentially the same coverage as those used in the SPaRKy Restaurant Corpus. This paper fills a gap in the literature, showing that it is possible to learn microplanning rules for both aggregation and discourse connective insertion, an important step towards ameliorating the knowledge acquisition bottleneck for NLG systems that produce texts with rich discourse structures using traditional architectures.},
pubstate = {published},
type = {inproceedings}
}

Project:   A3

Crocker, Matthew W.; Demberg, Vera; Teich, Elke

Information Density and Linguistic Encoding (IDeaL) Journal Article

KI - Künstliche Intelligenz, 30, pp. 77-81, 2015.

We introduce IDeaL (Information Density and Linguistic Encoding), a collaborative research center that investigates the hypothesis that language use may be driven by the optimal use of the communication channel. From the point of view of linguistics, our approach promises to shed light on selected aspects of language variation that are hitherto not sufficiently explained. Applications of our research can be envisaged in various areas of natural language processing and AI, including machine translation, text generation, speech synthesis and multimodal interfaces.

@article{crocker:demberg:teich,
title = {Information Density and Linguistic Encoding (IDeaL)},
author = {Matthew W. Crocker and Vera Demberg and Elke Teich},
url = {http://link.springer.com/article/10.1007/s13218-015-0391-y/fulltext.html},
doi = {https://doi.org/10.1007/s13218-015-0391-y},
year = {2015},
date = {2015},
journal = {KI - K{\"u}nstliche Intelligenz},
pages = {77-81},
volume = {30},
number = {1},
abstract = {We introduce IDeaL (Information Density and Linguistic Encoding), a collaborative research center that investigates the hypothesis that language use may be driven by the optimal use of the communication channel. From the point of view of linguistics, our approach promises to shed light on selected aspects of language variation that are hitherto not sufficiently explained. Applications of our research can be envisaged in various areas of natural language processing and AI, including machine translation, text generation, speech synthesis and multimodal interfaces.},
pubstate = {published},
type = {article}
}

Projects:   A1 A3 B1
