Publications

Vogels, Jorrig; Demberg, Vera; Kray, Jutta

The index of cognitive activity as a measure of cognitive processing load in dual task settings Journal Article

Frontiers in Psychology, 9, p. 2276, 2018.

Increases in pupil size have long been used as an indicator of cognitive load. Recently, the Index of Cognitive Activity (ICA), a novel pupillometric measure, has received increased attention. The ICA measures the frequency of rapid pupil dilations, and is an interesting complementary measure to overall pupil size because it disentangles the pupil response to cognitive activity from effects of light input. As such, it has been evaluated as a useful measure of processing load in dual task settings coordinating language comprehension and driving. However, the cognitive underpinnings of pupillometry, and any differences between rapid small dilations as measured by the ICA and overall effects on pupil size are still poorly understood. Earlier work has observed that the ICA and overall pupil size may not always behave in the same way, reporting an increase in overall pupil size but decrease in ICA in a dual task setting. To further investigate this, we systematically tested two new dual-task combinations, combining both language comprehension and simulated driving with a memory task. Our findings confirm that more difficult linguistic processing is reflected in a larger ICA. More importantly, however, the dual task settings did not result in an increase in the ICA as compared to the single task, and, consistent with earlier findings, showed a significant decrease with a more difficult secondary task. This contrasts with our findings for pupil size, which showed an increase with greater secondary task difficulty in both tasks. Our results are compatible with the idea that although both pupillometry measures are indicators of cognitive load, they reflect different cognitive and neuronal processes in dual task situations.

@article{Vogels2018,
title = {The index of cognitive activity as a measure of cognitive processing load in dual task settings},
author = {Jorrig Vogels and Vera Demberg and Jutta Kray},
url = {https://doi.org/10.3389/fpsyg.2018.02276},
doi = {10.3389/fpsyg.2018.02276},
year = {2018},
date = {2018},
journal = {Frontiers in Psychology},
pages = {2276},
volume = {9},
abstract = {Increases in pupil size have long been used as an indicator of cognitive load. Recently, the Index of Cognitive Activity (ICA), a novel pupillometric measure, has received increased attention. The ICA measures the frequency of rapid pupil dilations, and is an interesting complementary measure to overall pupil size because it disentangles the pupil response to cognitive activity from effects of light input. As such, it has been evaluated as a useful measure of processing load in dual task settings coordinating language comprehension and driving. However, the cognitive underpinnings of pupillometry, and any differences between rapid small dilations as measured by the ICA and overall effects on pupil size are still poorly understood. Earlier work has observed that the ICA and overall pupil size may not always behave in the same way, reporting an increase in overall pupil size but decrease in ICA in a dual task setting. To further investigate this, we systematically tested two new dual-task combinations, combining both language comprehension and simulated driving with a memory task. Our findings confirm that more difficult linguistic processing is reflected in a larger ICA. More importantly, however, the dual task settings did not result in an increase in the ICA as compared to the single task, and, consistent with earlier findings, showed a significant decrease with a more difficult secondary task. This contrasts with our findings for pupil size, which showed an increase with greater secondary task difficulty in both tasks. Our results are compatible with the idea that although both pupillometry measures are indicators of cognitive load, they reflect different cognitive and neuronal processes in dual task situations.},
pubstate = {published},
type = {article}
}


Project:   A4

Häuser, Katja; Demberg, Vera; Kray, Jutta

Surprisal modulates dual-task performance in older adults: Pupillometry shows age-related trade-offs in task performance and time-course of language processing Journal Article

Psychology and Aging, 33, pp. 1168-1180, 2018.

Even though older adults are known to have difficulty at language processing when a secondary task has to be performed simultaneously, few studies have addressed how older adults process language in dual-task demands when linguistic load is systematically varied. Here, we manipulated surprisal, an information theoretic measure that quantifies the amount of new information conveyed by a word, to investigate how linguistic load affects younger and older adults during early and late stages of sentence processing under conditions when attention is split between two tasks. In high-surprisal sentences, target words were implausible and mismatched with semantic expectancies based on context, thereby causing integration difficulty. Participants performed semantic meaningfulness judgments on sentences that were presented in isolation (single task) or while performing a secondary tracking task (dual task). Cognitive load was measured by means of pupillometry. Mixed-effects models were fit to the data, showing the following: (a) During the dual task, younger but not older adults demonstrated early sensitivity to surprisal (higher levels of cognitive load, indexed by pupil size) as sentences were heard online; (b) Older adults showed no immediate reaction to surprisal, but a delayed response, where their meaningfulness judgments to high-surprisal words remained stable in accuracy, while secondary tracking performance declined. Findings are discussed in relation to age-related trade-offs in dual tasking and differences in the allocation of attentional resources during language processing. Collectively, our data show that higher linguistic load leads to task trade-offs in older adults and differently affects the time course of online language processing in aging.

@article{Haeuser2018,
title = {Surprisal modulates dual-task performance in older adults: Pupillometry shows age-related trade-offs in task performance and time-course of language processing},
author = {Katja H{\"a}user and Vera Demberg and Jutta Kray},
url = {https://pubmed.ncbi.nlm.nih.gov/30550333/},
doi = {10.1037/pag0000316},
year = {2018},
date = {2018-10-17},
journal = {Psychology and Aging},
pages = {1168-1180},
volume = {33},
number = {8},
abstract = {Even though older adults are known to have difficulty at language processing when a secondary task has to be performed simultaneously, few studies have addressed how older adults process language in dual-task demands when linguistic load is systematically varied. Here, we manipulated surprisal, an information theoretic measure that quantifies the amount of new information conveyed by a word, to investigate how linguistic load affects younger and older adults during early and late stages of sentence processing under conditions when attention is split between two tasks. In high-surprisal sentences, target words were implausible and mismatched with semantic expectancies based on context, thereby causing integration difficulty. Participants performed semantic meaningfulness judgments on sentences that were presented in isolation (single task) or while performing a secondary tracking task (dual task). Cognitive load was measured by means of pupillometry. Mixed-effects models were fit to the data, showing the following: (a) During the dual task, younger but not older adults demonstrated early sensitivity to surprisal (higher levels of cognitive load, indexed by pupil size) as sentences were heard online; (b) Older adults showed no immediate reaction to surprisal, but a delayed response, where their meaningfulness judgments to high-surprisal words remained stable in accuracy, while secondary tracking performance declined. Findings are discussed in relation to age-related trade-offs in dual tasking and differences in the allocation of attentional resources during language processing. Collectively, our data show that higher linguistic load leads to task trade-offs in older adults and differently affects the time course of online language processing in aging.},
pubstate = {published},
type = {article}
}


Project:   A4

Howcroft, David M.; Klakow, Dietrich; Demberg, Vera

Toward Bayesian Synchronous Tree Substitution Grammars for Sentence Planning Inproceedings

Proceedings of the 11th International Conference on Natural Language Generation, Association for Computational Linguistics, pp. 391-396, Tilburg, The Netherlands, 2018.

Developing conventional natural language generation systems requires extensive attention from human experts in order to craft complex sets of sentence planning rules. We propose a Bayesian nonparametric approach to learn sentence planning rules by inducing synchronous tree substitution grammars for pairs of text plans and morphosyntactically-specified dependency trees. Our system is able to learn rules which can be used to generate novel texts after training on small datasets.

@inproceedings{Howcroft2018,
title = {Toward Bayesian Synchronous Tree Substitution Grammars for Sentence Planning},
author = {David M. Howcroft and Dietrich Klakow and Vera Demberg},
url = {https://aclanthology.org/W18-6546},
doi = {10.18653/v1/W18-6546},
year = {2018},
date = {2018},
booktitle = {Proceedings of the 11th International Conference on Natural Language Generation},
pages = {391-396},
publisher = {Association for Computational Linguistics},
address = {Tilburg, The Netherlands},
abstract = {Developing conventional natural language generation systems requires extensive attention from human experts in order to craft complex sets of sentence planning rules. We propose a Bayesian nonparametric approach to learn sentence planning rules by inducing synchronous tree substitution grammars for pairs of text plans and morphosyntactically-specified dependency trees. Our system is able to learn rules which can be used to generate novel texts after training on small datasets.},
pubstate = {published},
type = {inproceedings}
}


Project:   A4

Shen, Xiaoyu; Su, Hui; Niu, Shuzi; Demberg, Vera

Improving Variational Encoder-Decoders in Dialogue Generation Inproceedings

32nd AAAI Conference on Artificial Intelligence (AAAI-18), New Orleans, USA, 2018.

Variational encoder-decoders (VEDs) have shown promising results in dialogue generation. However, the latent variable distributions are usually approximated by a much simpler model than the powerful RNN structure used for encoding and decoding, yielding the KL-vanishing problem and inconsistent training objective. In this paper, we separate the training step into two phases: The first phase learns to autoencode discrete texts into continuous embeddings, from which the second phase learns to generalize latent representations by reconstructing the encoded embedding. In this case, latent variables are sampled by transforming Gaussian noise through multi-layer perceptrons and are trained with a separate VED model, which has the potential of realizing a much more flexible distribution. We compare our model with current popular models and the experiment demonstrates substantial improvement in both metric-based and human evaluations.

@inproceedings{Shen2018,
title = {Improving Variational Encoder-Decoders in Dialogue Generation},
author = {Xiaoyu Shen and Hui Su and Shuzi Niu and Vera Demberg},
url = {https://arxiv.org/abs/1802.02032},
year = {2018},
date = {2018-02-02},
booktitle = {32nd AAAI Conference on Artificial Intelligence (AAAI-18)},
address = {New Orleans, USA},
abstract = {Variational encoder-decoders (VEDs) have shown promising results in dialogue generation. However, the latent variable distributions are usually approximated by a much simpler model than the powerful RNN structure used for encoding and decoding, yielding the KL-vanishing problem and inconsistent training objective. In this paper, we separate the training step into two phases: The first phase learns to autoencode discrete texts into continuous embeddings, from which the second phase learns to generalize latent representations by reconstructing the encoded embedding. In this case, latent variables are sampled by transforming Gaussian noise through multi-layer perceptrons and are trained with a separate VED model, which has the potential of realizing a much more flexible distribution. We compare our model with current popular models and the experiment demonstrates substantial improvement in both metric-based and human evaluations.},
pubstate = {published},
type = {inproceedings}
}


Project:   A4

Vogels, Jorrig; Howcroft, David M.; Demberg, Vera

Referential overspecification in response to the listener's cognitive load Inproceedings

International Cognitive Linguistics Conference, Tartu, Estonia, 2017.

According to the Uniform Information Density hypothesis (UID; Jaeger 2010, inter alia), speakers strive to distribute information equally over their utterances. They do this to avoid both peaks and troughs in information density, which may lead to processing difficulty for the listener. Several studies have shown how speakers consistently make linguistic choices that result in a more equal distribution of information (e.g., Jaeger 2010, Mahowald, Fedorenko, Piantadosi, & Gibson 2013, Piantadosi, Tily, & Gibson 2011). However, it is not clear whether speakers also adapt the information density of their utterances to the processing capacity of a specific addressee. For example, when the addressee is involved in a difficult task that is clearly reducing his cognitive capacity for processing linguistic information, will the speaker lower the overall information density of her utterances to accommodate the reduced processing capacity?

@inproceedings{Vogels2017,
title = {Referential overspecification in response to the listener's cognitive load},
author = {Jorrig Vogels and David M. Howcroft and Vera Demberg},
year = {2017},
date = {2017},
booktitle = {International Cognitive Linguistics Conference},
address = {Tartu, Estonia},
abstract = {According to the Uniform Information Density hypothesis (UID; Jaeger 2010, inter alia), speakers strive to distribute information equally over their utterances. They do this to avoid both peaks and troughs in information density, which may lead to processing difficulty for the listener. Several studies have shown how speakers consistently make linguistic choices that result in a more equal distribution of information (e.g., Jaeger 2010, Mahowald, Fedorenko, Piantadosi, & Gibson 2013, Piantadosi, Tily, & Gibson 2011). However, it is not clear whether speakers also adapt the information density of their utterances to the processing capacity of a specific addressee. For example, when the addressee is involved in a difficult task that is clearly reducing his cognitive capacity for processing linguistic information, will the speaker lower the overall information density of her utterances to accommodate the reduced processing capacity?},
pubstate = {published},
type = {inproceedings}
}


Project:   A4

Häuser, Katja; Demberg, Vera; Kray, Jutta

Age-differences in recovery from prediction error: Evidence from a simulated driving and combined sentence verification task. Inproceedings

39th Annual Meeting of the Cognitive Science Society, 2017.

@inproceedings{Haeuser2017,
title = {Age-differences in recovery from prediction error: Evidence from a simulated driving and combined sentence verification task.},
author = {Katja H{\"a}user and Vera Demberg and Jutta Kray},
year = {2017},
date = {2017-10-17},
booktitle = {39th Annual Meeting of the Cognitive Science Society},
pubstate = {published},
type = {inproceedings}
}


Project:   A4

Howcroft, David M.; Klakow, Dietrich; Demberg, Vera

The Extended SPaRKy Restaurant Corpus: Designing a Corpus with Variable Information Density Inproceedings

Proc. Interspeech 2017, pp. 3757-3761, 2017.

Natural language generation (NLG) systems rely on corpora for both hand-crafted approaches in a traditional NLG architecture and for statistical end-to-end (learned) generation systems. Limitations in existing resources, however, make it difficult to develop systems which can vary the linguistic properties of an utterance as needed. For example, when users’ attention is split between a linguistic and a secondary task such as driving, a generation system may need to reduce the information density of an utterance to compensate for the reduction in user attention. We introduce a new corpus in the restaurant recommendation and comparison domain, collected in a paraphrasing paradigm, where subjects wrote texts targeting either a general audience or an elderly family member. This design resulted in a corpus of more than 5000 texts which exhibit a variety of lexical and syntactic choices and differ with respect to average word & sentence length and surprisal. The corpus includes two levels of meaning representation: flat ‘semantic stacks’ for propositional content and Rhetorical Structure Theory (RST) relations between these propositions.

@inproceedings{Howcroft2017b,
title = {The Extended SPaRKy Restaurant Corpus: Designing a Corpus with Variable Information Density},
author = {David M. Howcroft and Dietrich Klakow and Vera Demberg},
url = {http://dx.doi.org/10.21437/Interspeech.2017-1555},
doi = {10.21437/Interspeech.2017-1555},
year = {2017},
date = {2017-10-17},
booktitle = {Proc. Interspeech 2017},
pages = {3757-3761},
abstract = {Natural language generation (NLG) systems rely on corpora for both hand-crafted approaches in a traditional NLG architecture and for statistical end-to-end (learned) generation systems. Limitations in existing resources, however, make it difficult to develop systems which can vary the linguistic properties of an utterance as needed. For example, when users’ attention is split between a linguistic and a secondary task such as driving, a generation system may need to reduce the information density of an utterance to compensate for the reduction in user attention. We introduce a new corpus in the restaurant recommendation and comparison domain, collected in a paraphrasing paradigm, where subjects wrote texts targeting either a general audience or an elderly family member. This design resulted in a corpus of more than 5000 texts which exhibit a variety of lexical and syntactic choices and differ with respect to average word & sentence length and surprisal. The corpus includes two levels of meaning representation: flat ‘semantic stacks’ for propositional content and Rhetorical Structure Theory (RST) relations between these propositions.},
pubstate = {published},
type = {inproceedings}
}


Project:   A4

Howcroft, David M.; Vogels, Jorrig; Demberg, Vera

G-TUNA: a corpus of referring expressions in German, including duration information Inproceedings

Proceedings of the 10th International Conference on Natural Language Generation, Association for Computational Linguistics, pp. 149-153, Santiago de Compostela, Spain, 2017.

Corpora of referring expressions elicited from human participants in a controlled environment are an important resource for research on automatic referring expression generation. We here present G-TUNA, a new corpus of referring expressions for German. Using the furniture stimuli set developed for the TUNA and D-TUNA corpora, our corpus extends on these corpora by providing data collected in a simulated driving dual-task setting, and additionally provides exact duration annotations for the spoken referring expressions. This corpus will hence allow researchers to analyze the interaction between referring expression length and speech rate, under conditions where the listener is under high vs. low cognitive load.

@inproceedings{W17-3522,
title = {G-TUNA: a corpus of referring expressions in German, including duration information},
author = {David M. Howcroft and Jorrig Vogels and Vera Demberg},
url = {http://www.aclweb.org/anthology/W17-3522},
doi = {10.18653/v1/W17-3522},
year = {2017},
date = {2017},
booktitle = {Proceedings of the 10th International Conference on Natural Language Generation},
pages = {149-153},
publisher = {Association for Computational Linguistics},
address = {Santiago de Compostela, Spain},
abstract = {Corpora of referring expressions elicited from human participants in a controlled environment are an important resource for research on automatic referring expression generation. We here present G-TUNA, a new corpus of referring expressions for German. Using the furniture stimuli set developed for the TUNA and D-TUNA corpora, our corpus extends on these corpora by providing data collected in a simulated driving dual-task setting, and additionally provides exact duration annotations for the spoken referring expressions. This corpus will hence allow researchers to analyze the interaction between referring expression length and speech rate, under conditions where the listener is under high vs. low cognitive load.},
pubstate = {published},
type = {inproceedings}
}


Project:   A4

Howcroft, David M.; Demberg, Vera

Psycholinguistic Models of Sentence Processing Improve Sentence Readability Ranking Inproceedings

Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, Association for Computational Linguistics, pp. 958-968, Valencia, Spain, 2017.

While previous research on readability has typically focused on document-level measures, recent work in areas such as natural language generation has pointed out the need of sentence-level readability measures. Much of psycholinguistics has focused for many years on processing measures that provide difficulty estimates on a word-by-word basis. However, these psycholinguistic measures have not yet been tested on sentence readability ranking tasks. In this paper, we use four psycholinguistic measures: idea density, surprisal, integration cost, and embedding depth to test whether these features are predictive of readability levels. We find that psycholinguistic features significantly improve performance by up to 3 percentage points over a standard document-level readability metric baseline.

@inproceedings{Howcroft2017,
title = {Psycholinguistic Models of Sentence Processing Improve Sentence Readability Ranking},
author = {David M. Howcroft and Vera Demberg},
url = {http://www.aclweb.org/anthology/E17-1090},
year = {2017},
date = {2017-10-17},
booktitle = {Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers},
pages = {958-968},
publisher = {Association for Computational Linguistics},
address = {Valencia, Spain},
abstract = {While previous research on readability has typically focused on document-level measures, recent work in areas such as natural language generation has pointed out the need of sentence-level readability measures. Much of psycholinguistics has focused for many years on processing measures that provide difficulty estimates on a word-by-word basis. However, these psycholinguistic measures have not yet been tested on sentence readability ranking tasks. In this paper, we use four psycholinguistic measures: idea density, surprisal, integration cost, and embedding depth to test whether these features are predictive of readability levels. We find that psycholinguistic features significantly improve performance by up to 3 percentage points over a standard document-level readability metric baseline.},
pubstate = {published},
type = {inproceedings}
}


Project:   A4

Schwenger, Maximilian; Torralba, Álvaro; Hoffmann, Jörg; Howcroft, David M.; Demberg, Vera

From OpenCCG to AI Planning: Detecting Infeasible Edges in Sentence Generation Inproceedings

Calzolari, Nicoletta; Matsumoto, Yuji; Prasad, Rashmi (Ed.): COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, ACL, pp. 1524-1534, Osaka, 2016, ISBN 978-4-87974-702-0.

The search space in grammar-based natural language generation tasks can get very large, which is particularly problematic when generating long utterances or paragraphs. Using surface realization with OpenCCG as an example, we show that we can effectively detect partial solutions (edges) which cannot ultimately be part of a complete sentence because of their syntactic category. Formulating the completion of an edge into a sentence as finding a solution path in a large state-transition system, we demonstrate a connection to AI Planning which is concerned with this kind of problem. We design a compilation from OpenCCG into AI Planning allowing the detection of infeasible edges via AI Planning dead-end detection methods (proving the absence of a solution to the compilation). Our experiments show that this can filter out large fractions of infeasible edges in, and thus benefit the performance of, complex realization processes.

@inproceedings{DBLP:conf/coling/SchwengerTHHD16,
title = {From OpenCCG to AI Planning: Detecting Infeasible Edges in Sentence Generation},
author = {Maximilian Schwenger and {\'A}lvaro Torralba and J{\"o}rg Hoffmann and David M. Howcroft and Vera Demberg},
editor = {Nicoletta Calzolari and Yuji Matsumoto and Rashmi Prasad},
url = {https://davehowcroft.com/publication/2016-12_coling_detecting-infeasible-edges/},
year = {2016},
date = {2016-12-01},
booktitle = {COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers},
isbn = {978-4-87974-702-0},
pages = {1524-1534},
publisher = {ACL},
address = {Osaka},
abstract = {The search space in grammar-based natural language generation tasks can get very large, which is particularly problematic when generating long utterances or paragraphs. Using surface realization with OpenCCG as an example, we show that we can effectively detect partial solutions (edges) which cannot ultimately be part of a complete sentence because of their syntactic category. Formulating the completion of an edge into a sentence as finding a solution path in a large state-transition system, we demonstrate a connection to AI Planning which is concerned with this kind of problem. We design a compilation from OpenCCG into AI Planning allowing the detection of infeasible edges via AI Planning dead-end detection methods (proving the absence of a solution to the compilation). Our experiments show that this can filter out large fractions of infeasible edges in, and thus benefit the performance of, complex realization processes.},
pubstate = {published},
type = {inproceedings}
}


Project:   A4

Zarcone, Alessandra; van Schijndel, Marten; Vogels, Jorrig; Demberg, Vera

Salience and attention in surprisal-based accounts of language processing Journal Article

Frontiers in Psychology, 7, 2016, ISSN 1664-1078.

The notion of salience has been singled out as the explanatory factor for a diverse range of linguistic phenomena. In particular, perceptual salience (e.g., visual salience of objects in the world, acoustic prominence of linguistic sounds) and semantic-pragmatic salience (e.g., prominence of recently mentioned or topical referents) have been shown to influence language comprehension and production. A different line of research has sought to account for behavioral correlates of cognitive load during comprehension as well as for certain patterns in language usage using information-theoretic notions, such as surprisal. Surprisal and salience both affect language processing at different levels, but the relationship between the two has not been adequately elucidated, and the question of whether salience can be reduced to surprisal / predictability is still open. Our review identifies two main challenges in addressing this question: terminological inconsistency and lack of integration between high and low levels of representations in salience-based accounts and surprisal-based accounts. We capitalize upon work in visual cognition in order to orient ourselves in surveying the different facets of the notion of salience in linguistics and their relation with models of surprisal. We find that work on salience highlights aspects of linguistic communication that models of surprisal tend to overlook, namely the role of attention and relevance to current goals, and we argue that the Predictive Coding framework provides a unified view which can account for the role played by attention and predictability at different levels of processing and which can clarify the interplay between low and high levels of processes and between predictability-driven expectation and attention-driven focus.

@article{Zarcone2016,
title = {Salience and attention in surprisal-based accounts of language processing},
author = {Alessandra Zarcone and Marten van Schijndel and Jorrig Vogels and Vera Demberg},
url = {http://www.frontiersin.org/language_sciences/10.3389/fpsyg.2016.00844/abstract},
doi = {10.3389/fpsyg.2016.00844},
year = {2016},
date = {2016},
journal = {Frontiers in Psychology},
volume = {7},
number = {844},
abstract = {The notion of salience has been singled out as the explanatory factor for a diverse range of linguistic phenomena. In particular, perceptual salience (e.g., visual salience of objects in the world, acoustic prominence of linguistic sounds) and semantic-pragmatic salience (e.g., prominence of recently mentioned or topical referents) have been shown to influence language comprehension and production. A different line of research has sought to account for behavioral correlates of cognitive load during comprehension as well as for certain patterns in language usage using information-theoretic notions, such as surprisal. Surprisal and salience both affect language processing at different levels, but the relationship between the two has not been adequately elucidated, and the question of whether salience can be reduced to surprisal / predictability is still open. Our review identifies two main challenges in addressing this question: terminological inconsistency and lack of integration between high and low levels of representations in salience-based accounts and surprisal-based accounts. We capitalize upon work in visual cognition in order to orient ourselves in surveying the different facets of the notion of salience in linguistics and their relation with models of surprisal. We find that work on salience highlights aspects of linguistic communication that models of surprisal tend to overlook, namely the role of attention and relevance to current goals, and we argue that the Predictive Coding framework provides a unified view which can account for the role played by attention and predictability at different levels of processing and which can clarify the interplay between low and high levels of processes and between predictability-driven expectation and attention-driven focus.},
pubstate = {published},
type = {article}
}


Projects:   A3 A4

Ahrendt, Simon; Demberg, Vera

Improving event prediction by representing script participants Inproceedings

Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics, pp. 546-551, San Diego, California, 2016.

Automatically learning script knowledge has proved difficult, with previous work not or just barely beating a most-frequent baseline. Script knowledge is a type of world knowledge which can however be useful for various tasks in NLP and psycholinguistic modelling. We here propose a model that includes participant information (i.e., knowledge about which participants are relevant for a script) and show, on the Dinners from Hell corpus as well as the InScript corpus, that this knowledge helps us to significantly improve prediction performance on the narrative cloze task.

@inproceedings{ahrendt-demberg:2016:N16-1,
title = {Improving event prediction by representing script participants},
author = {Simon Ahrendt and Vera Demberg},
url = {http://www.aclweb.org/anthology/N16-1067},
year = {2016},
date = {2016-06-01},
booktitle = {Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
pages = {546-551},
publisher = {Association for Computational Linguistics},
address = {San Diego, California},
abstract = {Automatically learning script knowledge has proved difficult, with previous work not or just barely beating a most-frequent baseline. Script knowledge is a type of world knowledge which can however be useful for various tasks in NLP and psycholinguistic modelling. We here propose a model that includes participant information (i.e., knowledge about which participants are relevant for a script) and show, on the Dinners from Hell corpus as well as the InScript corpus, that this knowledge helps us to significantly improve prediction performance on the narrative cloze task.},
pubstate = {published},
type = {inproceedings}
}

Project:   A4

Fischer, Andrea; Demberg, Vera; Klakow, Dietrich

Towards Flexible, Small-Domain Surface Generation: Combining Data-Driven and Grammatical Approaches Inproceedings

Proceedings of the 15th European Workshop on Natural Language Generation (ENLG), Association for Computational Linguistics, pp. 105-108, Brighton, England, UK, 2015.

As dialog systems are getting more and more ubiquitous, there is an increasing number of application domains for natural language generation, and generation objectives are getting more diverse (e.g., generating informationally dense vs. less complex utterances, as a function of target user and usage situation). Flexible generation is difficult and labour-intensive with traditional template-based generation systems, while fully data-driven approaches may lead to less grammatical output, particularly if the measures used for generation objectives are correlated with measures of grammaticality. We here explore the combination of a data-driven approach with two very simple automatic grammar induction methods, basing its implementation on OpenCCG.

@inproceedings{fischer:demberg:klakow,
title = {Towards Flexible, Small-Domain Surface Generation: Combining Data-Driven and Grammatical Approaches},
author = {Andrea Fischer and Vera Demberg and Dietrich Klakow},
url = {https://www.aclweb.org/anthology/W15-4718/},
year = {2015},
date = {2015},
booktitle = {Proceedings of the 15th European Workshop on Natural Language Generation (ENLG)},
pages = {105-108},
publisher = {Association for Computational Linguistics},
address = {Brighton, England, UK},
abstract = {As dialog systems are getting more and more ubiquitous, there is an increasing number of application domains for natural language generation, and generation objectives are getting more diverse (e.g., generating informationally dense vs. less complex utterances, as a function of target user and usage situation). Flexible generation is difficult and labour-intensive with traditional template-based generation systems, while fully data-driven approaches may lead to less grammatical output, particularly if the measures used for generation objectives are correlated with measures of grammaticality. We here explore the combination of a data-driven approach with two very simple automatic grammar induction methods, basing its implementation on OpenCCG.},
pubstate = {published},
type = {inproceedings}
}

Projects:   A4 C4

Howcroft, David M.; White, Michael

Inducing Clause-Combining Operations for Natural Language Generation Inproceedings

Proc. of the 1st International Workshop on Data-to-Text Generation, Edinburgh, Scotland, UK, 2015.

@inproceedings{howcroft:white:d2t-2015,
title = {Inducing Clause-Combining Operations for Natural Language Generation},
author = {David M. Howcroft and Michael White},
url = {http://www.macs.hw.ac.uk/InteractionLab/d2t/papers/d2t_HowcroftWhite},
year = {2015},
date = {2015},
booktitle = {Proc. of the 1st International Workshop on Data-to-Text Generation},
address = {Edinburgh, Scotland, UK},
pubstate = {published},
type = {inproceedings}
}

Project:   A4

Vogels, Jorrig; Demberg, Vera; Kray, Jutta

Cognitive Load and Individual Differences in Multitasking Abilities Conference

Workshop on Individual differences in language processing across the adult life span, 2015.

@conference{vogelsjorrig2015cognitive,
title = {Cognitive Load and Individual Differences in Multitasking Abilities},
author = {Jorrig Vogels and Vera Demberg and Jutta Kray},
url = {https://www.bibsonomy.org/bibtex/222f7284011f7023bd8095b6b554278d3/sfb1102},
year = {2015},
date = {2015},
booktitle = {Workshop on Individual differences in language processing across the adult life span},
pubstate = {published},
type = {conference}
}

Project:   A4

Kravtchenko, Ekaterina; Demberg, Vera

Underinformative Event Mentions Trigger Context-Dependent Implicatures Inproceedings

Talk presented at Formal and Experimental Pragmatics: Methodological Issues of a Nascent Liaison (MXPRAG), Zentrum für Allgemeine Sprachwissenschaft (ZAS), Berlin, June 2015.

@inproceedings{Kravtchenko2015b,
title = {Underinformative Event Mentions Trigger Context-Dependent Implicatures},
author = {Ekaterina Kravtchenko and Vera Demberg},
year = {2015},
date = {2015-10-17},
booktitle = {Talk presented at Formal and Experimental Pragmatics: Methodological Issues of a Nascent Liaison (MXPRAG)},
publisher = {Zentrum f{\"u}r Allgemeine Sprachwissenschaft (ZAS), Berlin, June 2015},
pubstate = {published},
type = {inproceedings}
}

Project:   A4
