Publications

Fankhauser, Peter; Knappen, Jörg; Teich, Elke

Topical Diversification over Time in the Royal Society Corpus Inproceedings

Proceedings of Digital Humanities (DH'16), Krakow, Poland, 2016.

Science gradually developed into an established sociocultural domain starting from the mid-17th century onwards. In this process it became increasingly specialized and diversified. Here, we investigate a particular aspect of specialization on the basis of probabilistic topic models. As a corpus we use the Royal Society Corpus (Khamis et al. 2015), which covers the period from 1665 to 1869 and contains 9015 documents.

@inproceedings{Fankhauser2016,
title = {Topical Diversification over Time in the Royal Society Corpus},
author = {Peter Fankhauser and J{\"o}rg Knappen and Elke Teich},
url = {https://www.semanticscholar.org/paper/Topical-Diversification-Over-Time-In-The-Royal-Fankhauser-Knappen/7f7dce0d0b8209d0c841c8da031614fccb97a787},
year = {2016},
date = {2016},
booktitle = {Proceedings of Digital Humanities (DH'16)},
address = {Krakow, Poland},
abstract = {Science gradually developed into an established sociocultural domain starting from the mid-17th century onwards. In this process it became increasingly specialized and diversified. Here, we investigate a particular aspect of specialization on the basis of probabilistic topic models. As a corpus we use the Royal Society Corpus (Khamis et al. 2015), which covers the period from 1665 to 1869 and contains 9015 documents.},
pubstate = {published},
type = {inproceedings}
}

Project:   B1

Kermes, Hannah; Knappen, Jörg; Khamis, Ashraf; Degaetano-Ortlieb, Stefania; Teich, Elke

The Royal Society Corpus. Towards a high-quality resource for studying diachronic variation in scientific writing Inproceedings

Proceedings of Digital Humanities (DH'16), Krakow, Poland, 2016.

We introduce a diachronic corpus of English scientific writing – the Royal Society Corpus (RSC) – adopting a middle ground between big and ‘poor’ and small and ‘rich’ data. The corpus has been built from an electronic version of the Transactions and Proceedings of the Royal Society of London and comprises c. 35 million tokens from the period 1665-1869 (see Table 1). The motivation for building a corpus from this material is to investigate the diachronic development of written scientific English.

@inproceedings{Kermes2016a,
title = {The Royal Society Corpus. Towards a high-quality resource for studying diachronic variation in scientific writing},
author = {Hannah Kermes and J{\"o}rg Knappen and Ashraf Khamis and Stefania Degaetano-Ortlieb and Elke Teich},
url = {https://www.researchgate.net/publication/331648262_The_Royal_Society_Corpus_Towards_a_high-quality_corpus_for_studying_diachronic_variation_in_scientific_writing},
year = {2016},
date = {2016},
booktitle = {Proceedings of Digital Humanities (DH'16)},
address = {Krakow, Poland},
abstract = {

We introduce a diachronic corpus of English scientific writing - the Royal Society Corpus (RSC) - adopting a middle ground between big and ‘poor’ and small and ‘rich’ data. The corpus has been built from an electronic version of the Transactions and Proceedings of the Royal Society of London and comprises c. 35 million tokens from the period 1665-1869 (see Table 1). The motivation for building a corpus from this material is to investigate the diachronic development of written scientific English.
},
pubstate = {published},
type = {inproceedings}
}

Project:   B1

Degaetano-Ortlieb, Stefania; Teich, Elke

Information-based modeling of diachronic linguistic change: from typicality to productivity Inproceedings

Proceedings of Language Technologies for the Socio-Economic Sciences and Humanities (LATECH'16), Association for Computational Linguistics, pp. 165-173, Berlin, Germany, 2016.

We present a new approach for modeling diachronic linguistic change in grammatical usage. We illustrate the approach on English scientific writing in Late Modern English, focusing on grammatical patterns that are potentially indicative of shifts in register, genre and/or style. Commonly, diachronic change is characterized by the relative frequency of typical linguistic features over time. However, to fully capture changing linguistic usage, feature productivity needs to be taken into account as well. We introduce a data-driven approach for systematically detecting typical features and assessing their productivity over time, using information-theoretic measures of entropy and surprisal.

@inproceedings{Degaetano-Ortlieb2016a,
title = {Information-based modeling of diachronic linguistic change: from typicality to productivity},
author = {Stefania Degaetano-Ortlieb and Elke Teich},
url = {https://aclanthology.org/W16-2121},
doi = {https://doi.org/10.18653/v1/W16-2121},
year = {2016},
date = {2016},
booktitle = {Proceedings of Language Technologies for the Socio-Economic Sciences and Humanities (LATECH'16)},
pages = {165-173},
publisher = {Association for Computational Linguistics},
address = {Berlin, Germany},
abstract = {We present a new approach for modeling diachronic linguistic change in grammatical usage. We illustrate the approach on English scientific writing in Late Modern English, focusing on grammatical patterns that are potentially indicative of shifts in register, genre and/or style. Commonly, diachronic change is characterized by the relative frequency of typical linguistic features over time. However, to fully capture changing linguistic usage, feature productivity needs to be taken into account as well. We introduce a data-driven approach for systematically detecting typical features and assessing their productivity over time, using information-theoretic measures of entropy and surprisal.},
pubstate = {published},
type = {inproceedings}
}

Project:   B1

Staudte, Maria

The influence of visual context on predictions in sentence processing: Evidence from ICA Inproceedings

Proceedings of the Language and Perception International Conference, Trondheim, Norway, 2016.

A word’s predictability or surprisal, as determined by cloze probabilities or language models (Frank, 2013), is related to processing effort, in that less expected words take more effort to process (Hale, 2001; Lau et al., 2013). A word’s surprisal, however, may also be influenced by the non-linguistic context, such as visual cues: In the visual world paradigm (VWP), anticipatory eye movements suggest that listeners exploit the scene to predict what will be mentioned next (Altmann and Kamide, 1999). How visual context affects surprisal and processing effort, however, remains unclear. Here, we present a series of four studies providing evidence on how visually-determined probabilistic expectations for a spoken target word, as indicated by anticipatory eye movements, predict graded processing effort for that word, as assessed by a pupillometric measure (the Index of Cognitive Activity, ICA). These findings are a clear and robust demonstration that the non-linguistic context can immediately influence both lexical expectations and surprisal-based processing effort.

@inproceedings{Ankener2016,
title = {The influence of visual context on predictions in sentence processing: Evidence from ICA},
author = {Maria Staudte},
url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6302025/},
year = {2016},
date = {2016},
booktitle = {Proceedings of the Language and Perception International Conference},
address = {Trondheim, Norway},
abstract = {

A word’s predictability or surprisal, as determined by cloze probabilities or language models (Frank, 2013), is related to processing effort, in that less expected words take more effort to process (Hale, 2001; Lau et al., 2013). A word’s surprisal, however, may also be influenced by the non-linguistic context, such as visual cues: In the visual world paradigm (VWP), anticipatory eye movements suggest that listeners exploit the scene to predict what will be mentioned next (Altmann and Kamide, 1999). How visual context affects surprisal and processing effort, however, remains unclear. Here, we present a series of four studies providing evidence on how visually-determined probabilistic expectations for a spoken target word, as indicated by anticipatory eye movements, predict graded processing effort for that word, as assessed by a pupillometric measure (the Index of Cognitive Activity, ICA). These findings are a clear and robust demonstration that the non-linguistic context can immediately influence both lexical expectations and surprisal-based processing effort.

},
pubstate = {published},
type = {inproceedings}
}

Project:   A5

Staudte, Maria

Cost and Gains of Using Visual Context for Referent Prediction Inproceedings

Proceedings of the 9th Embodied and Situated Language Processing Conference (ESLP), Pucón, 2016.

@inproceedings{sekicki2016b,
title = {Cost and Gains of Using Visual Context for Referent Prediction},
author = {Maria Staudte},
year = {2016},
date = {2016-10-18},
booktitle = {Proceedings of the 9th Embodied and Situated Language Processing Conference (ESLP)},
address = {Pucón},
pubstate = {published},
type = {inproceedings}
}

Project:   A5

Sekicki, Mirjana; Ankener, Christine; Staudte, Maria

Language Processing: Cognitive Load with(out) Visual Context Inproceedings

Proceedings of the 22nd Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP), Bilbao, Spain, 2016.

We investigated the effect of visual context on cognitive load (CL) that is induced by prediction forming during sentence processing, using a novel measure of CL: the Index of Cognitive Activity. We conducted two experiments, one including only linguistic stimuli (LING) and one with the additional visual context of four potential target objects (VIS). Noun predictability was modulated by verb constraint (ironable vs. describable objects) and thematic fit; and further by visual competitors (two ironable vs. four describable objects).
“The woman (1) irons / (2) describes soon the (a) t-shirt / (b) sock.”
We found lower CL on the noun in (1a) compared to (1b) in both studies, suggesting that after “iron”, “t-shirt” was more predictable, and hence easier to process, than “sock”. More importantly, VIS findings show higher CL on “iron” compared to “describe”, suggesting that visual context allowed for active exclusion of two non-ironable targets. Conversely, CL on nouns was lower when following “iron” than “describe”, due to only one ironable competitor compared to three describable competitors. These findings suggest that the presence of visual context alters the distribution of CL during sentence processing. Future work includes gaze cues as additional information, potentially further affecting CL distribution.

@inproceedings{sekicki2016,
title = {Language Processing: Cognitive Load with(out) Visual Context},
author = {Mirjana Sekicki and Christine Ankener and Maria Staudte},
year = {2016},
date = {2016-10-18},
booktitle = {Proceedings of the 22nd Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP)},
address = {Bilbao, Spain},
abstract = {We investigated the effect of visual context on cognitive load (CL) that is induced by prediction forming during sentence processing, using a novel measure of CL: the Index of Cognitive Activity. We conducted two experiments, one including only linguistic stimuli (LING) and one with the additional visual context of four potential target objects (VIS). Noun predictability was modulated by verb constraint (ironable vs. describable objects) and thematic fit; and further by visual competitors (two ironable vs. four describable objects). ''The woman (1) irons / (2) describes soon the (a) t-shirt / (b) sock.'' We found lower CL on the noun in (1a) compared to (1b) in both studies, suggesting that after ''iron'', ''t-shirt'' was more predictable, and hence easier to process, than ''sock''. More importantly, VIS findings show higher CL on ''iron'' compared to ''describe'', suggesting that visual context allowed for active exclusion of two non-ironable targets. Conversely, CL on nouns was lower when following ''iron'' than ''describe'', due to only one ironable competitor compared to three describable competitors. These findings suggest that the presence of visual context alters the distribution of CL during sentence processing. Future work includes gaze cues as additional information, potentially further affecting CL distribution.},
pubstate = {published},
type = {inproceedings}
}

Project:   A5

Staudte, Maria

Low Predictability: An Empirical Comparison of Paradigms Used for Sentence Comprehension Inproceedings

29th Annual Conference on Human Sentence Processing (CUNY), Gainesville, FL, 2016.

Contexts that constrain upcoming words to some higher or lower extent can be composed differently but are typically all evaluated using cloze-probability (Rayner & Well, 1996). Less predicted words were found to correlate with more negative N400 (e.g., Frank et al., 2015; Kutas & Hillyard, 1984) and longer reading times (Rayner & Well, 1996; Smith & Levy, 2013). Recently, however, it has been suggested that predictability, as in cloze-probability, is only one influence on processing cost (e.g., DeLong et al., 2014). As DeLong et al. show, differences in plausibility of words with similar cloze-probability also affect processing of such words, reflected in different ERP components. This hints at a difference between frequency-based and deeper semantic processing. Moreover, a relatively novel measure, the Index of Cognitive Activity (ICA) capturing pupil jitter, has been linked to cognitive load and predictability (Demberg et al., 2013).

@inproceedings{CUNY2016_A5,
title = {Low Predictability: An Empirical Comparison of Paradigms Used for Sentence Comprehension},
author = {Maria Staudte},
url = {https://www.coli.uni-saarland.de/~mirjana/papers/CUNY2016.pdf},
year = {2016},
date = {2016},
booktitle = {29th Annual Conference on Human Sentence Processing (CUNY)},
address = {Gainesville, FL},
abstract = {Contexts that constrain upcoming words to some higher or lower extent can be composed differently but are typically all evaluated using cloze-probability (Rayner & Well, 1996). Less predicted words were found to correlate with more negative N400 (e.g., Frank et al., 2015; Kutas & Hillyard, 1984) and longer reading times (Rayner & Well, 1996; Smith & Levy, 2013). Recently, however, it has been suggested that predictability, as in cloze-probability, is only one influence on processing cost (e.g., DeLong et al., 2014). As DeLong et al. show, differences in plausibility of words with similar cloze-probability also affect processing of such words, reflected in different ERP components. This hints at a difference between frequency-based and deeper semantic processing. Moreover, a relatively novel measure, the Index of Cognitive Activity (ICA) capturing pupil jitter, has been linked to cognitive load and predictability (Demberg et al., 2013).},
pubstate = {published},
type = {inproceedings}
}

Project:   A5

Zarcone, Alessandra; van Schijndel, Marten; Vogels, Jorrig; Demberg, Vera

Salience and attention in surprisal-based accounts of language processing Journal Article

Frontiers in Psychology, 7, 2016, ISSN 1664-1078.

The notion of salience has been singled out as the explanatory factor for a diverse range of linguistic phenomena. In particular, perceptual salience (e.g., visual salience of objects in the world, acoustic prominence of linguistic sounds) and semantic-pragmatic salience (e.g., prominence of recently mentioned or topical referents) have been shown to influence language comprehension and production. A different line of research has sought to account for behavioral correlates of cognitive load during comprehension as well as for certain patterns in language usage using information-theoretic notions, such as surprisal. Surprisal and salience both affect language processing at different levels, but the relationship between the two has not been adequately elucidated, and the question of whether salience can be reduced to surprisal / predictability is still open. Our review identifies two main challenges in addressing this question: terminological inconsistency and lack of integration between high and low levels of representations in salience-based accounts and surprisal-based accounts. We capitalize upon work in visual cognition in order to orient ourselves in surveying the different facets of the notion of salience in linguistics and their relation with models of surprisal. We find that work on salience highlights aspects of linguistic communication that models of surprisal tend to overlook, namely the role of attention and relevance to current goals, and we argue that the Predictive Coding framework provides a unified view which can account for the role played by attention and predictability at different levels of processing and which can clarify the interplay between low and high levels of processes and between predictability-driven expectation and attention-driven focus.

@article{Zarcone2016,
title = {Salience and attention in surprisal-based accounts of language processing},
author = {Alessandra Zarcone and Marten van Schijndel and Jorrig Vogels and Vera Demberg},
url = {http://www.frontiersin.org/language_sciences/10.3389/fpsyg.2016.00844/abstract},
doi = {https://doi.org/10.3389/fpsyg.2016.00844},
year = {2016},
date = {2016},
journal = {Frontiers in Psychology},
volume = {7},
number = {844},
abstract = {

The notion of salience has been singled out as the explanatory factor for a diverse range of linguistic phenomena. In particular, perceptual salience (e.g., visual salience of objects in the world, acoustic prominence of linguistic sounds) and semantic-pragmatic salience (e.g., prominence of recently mentioned or topical referents) have been shown to influence language comprehension and production. A different line of research has sought to account for behavioral correlates of cognitive load during comprehension as well as for certain patterns in language usage using information-theoretic notions, such as surprisal. Surprisal and salience both affect language processing at different levels, but the relationship between the two has not been adequately elucidated, and the question of whether salience can be reduced to surprisal / predictability is still open. Our review identifies two main challenges in addressing this question: terminological inconsistency and lack of integration between high and low levels of representations in salience-based accounts and surprisal-based accounts. We capitalize upon work in visual cognition in order to orient ourselves in surveying the different facets of the notion of salience in linguistics and their relation with models of surprisal. We find that work on salience highlights aspects of linguistic communication that models of surprisal tend to overlook, namely the role of attention and relevance to current goals, and we argue that the Predictive Coding framework provides a unified view which can account for the role played by attention and predictability at different levels of processing and which can clarify the interplay between low and high levels of processes and between predictability-driven expectation and attention-driven focus.
},
pubstate = {published},
type = {article}
}

Projects:   A3 A4

Tilk, Ottokar; Demberg, Vera; Sayeed, Asad; Klakow, Dietrich; Thater, Stefan

Event participant modelling with neural networks Inproceedings

Conference on Empirical Methods in Natural Language Processing, Austin, Texas, USA, 2016.

A common problem in cognitive modelling is lack of access to accurate broad-coverage models of event-level surprisal. As shown in, e.g., Bicknell et al. (2010), event-level knowledge does affect human expectations for verbal arguments. For example, the model should be able to predict that mechanics are likely to check tires, while journalists are more likely to check typos. Similarly, we would like to predict what locations are likely for playing football or playing flute in order to estimate the surprisal of actually-encountered locations. Furthermore, such a model can be used to provide a probability distribution over fillers for a thematic role which is not mentioned in the text at all.

To this end, we train two neural network models (an incremental one and a non-incremental one) on large amounts of automatically role-labelled text. Our models are probabilistic and can handle several roles at once, which also enables them to learn interactions between different role fillers. Evaluation shows a drastic improvement over current state-of-the-art systems on modelling human thematic fit judgements, and we demonstrate via a sentence similarity task that the system learns highly useful embeddings.

@inproceedings{Tilk2016,
title = {Event participant modelling with neural networks},
author = {Ottokar Tilk and Vera Demberg and Asad Sayeed and Dietrich Klakow and Stefan Thater},
url = {https://www.semanticscholar.org/paper/Event-participant-modelling-with-neural-networks-Tilk-Demberg/d08d663d7795c76bb008f539b1ac7caf8a9ef26c},
year = {2016},
date = {2016},
booktitle = {Conference on Empirical Methods in Natural Language Processing},
address = {Austin, Texas, USA},
abstract = {A common problem in cognitive modelling is lack of access to accurate broad-coverage models of event-level surprisal. As shown in, e.g., Bicknell et al. (2010), event-level knowledge does affect human expectations for verbal arguments. For example, the model should be able to predict that mechanics are likely to check tires, while journalists are more likely to check typos. Similarly, we would like to predict what locations are likely for playing football or playing flute in order to estimate the surprisal of actually-encountered locations. Furthermore, such a model can be used to provide a probability distribution over fillers for a thematic role which is not mentioned in the text at all. To this end, we train two neural network models (an incremental one and a non-incremental one) on large amounts of automatically role-labelled text. Our models are probabilistic and can handle several roles at once, which also enables them to learn interactions between different role fillers. Evaluation shows a drastic improvement over current state-of-the-art systems on modelling human thematic fit judgements, and we demonstrate via a sentence similarity task that the system learns highly useful embeddings.},
pubstate = {published},
type = {inproceedings}
}

Project:   A3

Modi, Ashutosh; Titov, Ivan; Demberg, Vera; Sayeed, Asad; Pinkal, Manfred

Modeling Semantic Expectations: Using Script Knowledge for Referent Prediction Journal Article

Transactions of the Association for Computational Linguistics, MIT Press, pp. 31-44, Cambridge, MA, 2016.

Recent research in psycholinguistics has provided increasing evidence that humans predict upcoming content. Prediction also affects perception and might be a key to robustness in human language processing. In this paper, we investigate the factors that affect human prediction by building a computational model that can predict upcoming discourse referents based on linguistic knowledge alone vs. linguistic knowledge jointly with common-sense knowledge in the form of scripts. We find that script knowledge significantly improves model estimates of human predictions. In a second study, we test the highly controversial hypothesis that predictability influences referring expression type but do not find evidence for such an effect.

@article{ashutoshTacl2016,
title = {Modeling Semantic Expectations: Using Script Knowledge for Referent Prediction},
author = {Ashutosh Modi and Ivan Titov and Vera Demberg and Asad Sayeed and Manfred Pinkal},
url = {https://aclanthology.org/Q17-1003},
doi = {https://doi.org/10.1162/tacl_a_00044},
year = {2016},
date = {2016},
journal = {Transactions of the Association for Computational Linguistics},
pages = {31-44},
publisher = {MIT Press},
address = {Cambridge, MA},
abstract = {Recent research in psycholinguistics has provided increasing evidence that humans predict upcoming content. Prediction also affects perception and might be a key to robustness in human language processing. In this paper, we investigate the factors that affect human prediction by building a computational model that can predict upcoming discourse referents based on linguistic knowledge alone vs. linguistic knowledge jointly with common-sense knowledge in the form of scripts. We find that script knowledge significantly improves model estimates of human predictions. In a second study, we test the highly controversial hypothesis that predictability influences referring expression type but do not find evidence for such an effect.},
pubstate = {published},
type = {article}
}

Project:   A3

Modi, Ashutosh

Event Embeddings for Semantic Script Modeling Inproceedings

Proceedings of the Conference on Computational Natural Language Learning (CoNLL), Berlin, Germany, 2016.

Semantic scripts are a conceptual representation which defines how events are organized into higher-level activities. Practically all the previous approaches to inducing script knowledge from text relied on count-based techniques (e.g., generative models) and have not attempted to compositionally model events. In this work, we introduce a neural network model which relies on distributed compositional representations of events. The model captures statistical dependencies between events in a scenario, overcomes some of the shortcomings of previous approaches (e.g., by more effectively dealing with data sparsity) and outperforms count-based counterparts on the narrative cloze task.

@inproceedings{modi:CONLL2016,
title = {Event Embeddings for Semantic Script Modeling},
author = {Ashutosh Modi},
url = {https://www.researchgate.net/publication/306093411_Event_Embeddings_for_Semantic_Script_Modeling},
year = {2016},
date = {2016-10-17},
booktitle = {Proceedings of the Conference on Computational Natural Language Learning (CoNLL)},
address = {Berlin, Germany},
abstract = {Semantic scripts are a conceptual representation which defines how events are organized into higher-level activities. Practically all the previous approaches to inducing script knowledge from text relied on count-based techniques (e.g., generative models) and have not attempted to compositionally model events. In this work, we introduce a neural network model which relies on distributed compositional representations of events. The model captures statistical dependencies between events in a scenario, overcomes some of the shortcomings of previous approaches (e.g., by more effectively dealing with data sparsity) and outperforms count-based counterparts on the narrative cloze task.},
pubstate = {published},
type = {inproceedings}
}

Project:   A3

Modi, Ashutosh; Anikina, Tatjana; Ostermann, Simon; Pinkal, Manfred

InScript: Narrative texts annotated with script information Inproceedings

Calzolari, Nicoletta; Choukri, Khalid; Declerck, Thierry; Grobelnik, Marko; Maegaard, Bente; Mariani, Joseph; Moreno, Asuncion; Odijk, Jan; Piperidis, Stelios (Ed.): Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), European Language Resources Association (ELRA), pp. 3485-3493, Portorož, Slovenia, 2016, ISBN 978-2-9517408-9-1.

This paper presents the InScript corpus (Narrative Texts Instantiating Script structure). InScript is a corpus of 1,000 stories centered around 10 different scenarios. Verbs and noun phrases are annotated with event and participant types, respectively. Additionally, the text is annotated with coreference information. The corpus shows rich lexical variation and will serve as a unique resource for the study of the role of script knowledge in natural language processing.

@inproceedings{MODI16.352,
title = {InScript: Narrative texts annotated with script information},
author = {Ashutosh Modi and Tatjana Anikina and Simon Ostermann and Manfred Pinkal},
editor = {Nicoletta Calzolari and Khalid Choukri and Thierry Declerck and Marko Grobelnik and Bente Maegaard and Joseph Mariani and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
url = {https://aclanthology.org/L16-1555},
year = {2016},
date = {2016-10-17},
booktitle = {Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)},
isbn = {978-2-9517408-9-1},
pages = {3485-3493},
publisher = {European Language Resources Association (ELRA)},
address = {Portoro{\v{z}}, Slovenia},
abstract = {This paper presents the InScript corpus (Narrative Texts Instantiating Script structure). InScript is a corpus of 1,000 stories centered around 10 different scenarios. Verbs and noun phrases are annotated with event and participant types, respectively. Additionally, the text is annotated with coreference information. The corpus shows rich lexical variation and will serve as a unique resource for the study of the role of script knowledge in natural language processing.},
pubstate = {published},
type = {inproceedings}
}

Project:   A3

Venhuizen, Noortje; Brouwer, Harm; Crocker, Matthew W.

When the food arrives before the menu: Modeling event-driven surprisal in language comprehension Inproceedings

29th CUNY conference on Human Sentence Processing, Events in Language and Cognition workshops, University of Florida, 2016.

We present a neurocomputational—recurrent artificial neural network—model of language processing that integrates linguistic knowledge and world/event knowledge, and that produces word surprisal estimates that take into account both. Our model constructs a cognitively motivated situation model of the state-of-the-affairs as described by a sentence. Critically, these situation model representations inherently encode world/event knowledge. We show that the surprisal estimates that our model produces reflect both linguistic surprisal as well as surprisal that is driven by knowledge about structured events. We outline how we can employ the model to explore the interaction between these types of knowledge in online language processing.

@inproceedings{Venhuizen2016,
title = {When the food arrives before the menu: Modeling event-driven surprisal in language comprehension},
author = {Noortje Venhuizen and Harm Brouwer and Matthew W. Crocker},
url = {https://www.researchgate.net/publication/321621784_When_the_food_arrives_before_the_menu_Modeling_event-driven_surprisal_in_language_comprehension},
year = {2016},
date = {2016},
booktitle = {29th CUNY conference on Human Sentence Processing},
publisher = {Events in Language and Cognition workshops},
address = {University of Florida},
abstract = {

We present a neurocomputational—recurrent artificial neural network—model of language processing that integrates linguistic knowledge and world/event knowledge, and that produces word surprisal estimates that take into account both. Our model constructs a cognitively motivated situation model of the state-of-the-affairs as described by a sentence. Critically, these situation model representations inherently encode world/event knowledge. We show that the surprisal estimates that our model produces reflect both linguistic surprisal as well as surprisal that is driven by knowledge about structured events. We outline how we can employ the model to explore the interaction between these types of knowledge in online language processing.
},
pubstate = {published},
type = {inproceedings}
}

Copy BibTeX to Clipboard

Project:   A1

Rabs, Elisabeth; Drenhaus, Heiner; Delogu, Francesca; Crocker, Matthew W.

Reading between the lines: The influence of script knowledge on on-line comprehension Inproceedings

29th CUNY conference on Human Sentence Processing, Events in Language and Cognition workshops, University of Florida, 2016.
While the influence of linguistic context on language processing has been extensively studied, less is known about the mental representation, structure and use of so-called script knowledge. Scripts are defined as a person’s knowledge about temporally and causally ordered sequences of events. They are often activated by linguistic context, but otherwise left implicit. In two ERP studies we examine how such non-linguistic event knowledge influences predictive language processing beyond what linguistic prediction or lexical priming alone can explain. Specifically, we find evidence for a decrease in N400 amplitude – known to reflect a word’s unexpectedness – for target nouns consistent with events that are expected according to script knowledge. Experiment 1 focuses on differentiating the relative contribution of lexical priming and script knowledge. Assuming the temporal structure of scripts is accessible and used for prediction, but does not alter any influence of priming, we inserted temporal shifts affecting the plausibility of the critical object. Results from Exp. 1 suggest that, even after a large temporal shift, a script-fitting object noun is still easier to process than a neutral one. One reason for this may be that the temporal shift used in Exp. 1 was not salient enough to completely deactivate a script. Experiment 2, for which data is currently being collected, explores how script knowledge is used when context provides two scripts. One script is active, and thus expected to influence processing of target nouns to a greater extent. By demonstrating that minimal linguistic material is sufficient to rapidly activate detailed script knowledge and make it accessible for language processing, we conclude that scripts provide an interesting method to investigate the interaction of non-linguistic knowledge in on-line comprehension. 
Specifically, drawing on aspects of their temporal and hierarchical structure we hope to further explore the role of implicit causal, temporal, and spatial relations in language comprehension.

@inproceedings{Rabs2016,
title = {Reading between the lines: The influence of script knowledge on on-line comprehension},
author = {Elisabeth Rabs and Heiner Drenhaus and Francesca Delogu and Matthew W. Crocker},
url = {https://www.researchgate.net/publication/320988696_Reading_Between_the_Lines_The_Influence_of_Script_Knowledge_on_On-Line_Comprehension},
year = {2016},
date = {2016},
booktitle = {29th CUNY conference on Human Sentence Processing},
publisher = {Events in Language and Cognition workshops},
address = {University of Florida},
abstract = {While the influence of linguistic context on language processing has been extensively studied, less is known about the mental representation, structure and use of so-called script knowledge. Scripts are defined as a person’s knowledge about temporally and causally ordered sequences of events. They are often activated by linguistic context, but otherwise left implicit. In two ERP studies we examine how such non-linguistic event knowledge influences predictive language processing beyond what linguistic prediction or lexical priming alone can explain. Specifically, we find evidence for a decrease in N400 amplitude - known to reflect a word’s unexpectedness - for target nouns consistent with events that are expected according to script knowledge. Experiment 1 focuses on differentiating the relative contribution of lexical priming and script knowledge. Assuming the temporal structure of scripts is accessible and used for prediction, but does not alter any influence of priming, we inserted temporal shifts affecting the plausibility of the critical object. Results from Exp. 1 suggest that, even after a large temporal shift, a script-fitting object noun is still easier to process than a neutral one. One reason for this may be that the temporal shift used in Exp. 1 was not salient enough to completely deactivate a script. Experiment 2, for which data is currently being collected, explores how script knowledge is used when context provides two scripts. One script is active, and thus expected to influence processing of target nouns to a greater extent. By demonstrating that minimal linguistic material is sufficient to rapidly activate detailed script knowledge and make it accessible for language processing, we conclude that scripts provide an interesting method to investigate the interaction of non-linguistic knowledge in on-line comprehension. Specifically, drawing on aspects of their temporal and hierarchical structure we hope to further explore the role of implicit causal, temporal, and spatial relations in language comprehension.},
pubstate = {published},
type = {inproceedings}
}

Copy BibTeX to Clipboard

Project:   A1

Le Maguer, Sébastien; Steiner, Ingmar

The MaryTTS entry for the Blizzard Challenge 2016 Inproceedings

Blizzard Challenge, Cupertino, CA, USA, 2016.

The MaryTTS system is a modular architecture text-to-speech (TTS) system whose development started around 15 years ago. This paper presents the MaryTTS entry for the Blizzard Challenge 2016. For this entry, we used the default configuration of MaryTTS based on the unit selection paradigm.

However, the architecture is currently undergoing a massive refactoring process in order to provide a more fully modular system. This will allow researchers to focus only on some part of the synthesis process. The current participation objective includes assessing the current baseline quality in order to evaluate any future improvements. These can be achieved more easily thanks to a more flexible and robust architecture. The results obtained in this challenge prove that our system is not obsolete, but improvements need to be made to maintain it in the state of the art in the future.

@inproceedings{LeMaguer2016BC,
title = {The MaryTTS entry for the Blizzard Challenge 2016},
author = {S{\'e}bastien Le Maguer and Ingmar Steiner},
url = {https://www.semanticscholar.org/paper/The-MaryTTS-entry-for-the-Blizzard-Challenge-2016-Maguer-Steiner/62e04ad78ba1a531e419bea25cb9eb8799aaf07e},
year = {2016},
date = {2016-09-16},
booktitle = {Blizzard Challenge},
address = {Cupertino, CA, USA},
abstract = {The MaryTTS system is a modular architecture text-to-speech (TTS) system whose development started around 15 years ago. This paper presents the MaryTTS entry for the Blizzard Challenge 2016. For this entry, we used the default configuration of MaryTTS based on the unit selection paradigm. However, the architecture is currently undergoing a massive refactoring process in order to provide a more fully modular system. This will allow researchers to focus only on some part of the synthesis process. The current participation objective includes assessing the current baseline quality in order to evaluate any future improvements. These can be achieved more easily thanks to a more flexible and robust architecture. The results obtained in this challenge prove that our system is not obsolete, but improvements need to be made to maintain it in the state of the art in the future.},
pubstate = {published},
type = {inproceedings}
}

Copy BibTeX to Clipboard

Project:   C5

Oualil, Youssef; Singh, Mittul; Greenberg, Clayton; Klakow, Dietrich

Long-short range context neural networks for language models Inproceedings

Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016), Austin, Texas, 2016.

The goal of language modeling techniques is to capture the statistical and structural properties of natural languages from training corpora. This task typically involves the learning of short range dependencies, which generally model the syntactic properties of a language and/or long range dependencies, which are semantic in nature. We propose in this paper a new multi-span architecture, which separately models the short and long context information while it dynamically merges them to perform the language modeling task. This is done through a novel recurrent Long-Short Range Context (LSRC) network, which explicitly models the local (short) and global (long) context using two separate hidden states that evolve in time. This new architecture is an adaptation of the Long-Short Term Memory network (LSTM) to take into account the linguistic properties. Extensive experiments conducted on the Penn Treebank (PTB) and the Large Text Compression Benchmark (LTCB) corpus showed a significant reduction of the perplexity when compared to state-of-the-art language modeling techniques.
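The two-state idea described above can be sketched in a few lines; the dimensions, weights, and update equations below are illustrative stand-ins, not the authors' LSRC parameterization (which adapts the LSTM):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_short, d_long, vocab = 8, 16, 16, 100

# Hypothetical random weights, for shape illustration only.
W_xs = rng.normal(size=(d_short, d_in)) * 0.1
W_ss = rng.normal(size=(d_short, d_short)) * 0.1
W_sl = rng.normal(size=(d_long, d_short)) * 0.1
W_ll = rng.normal(size=(d_long, d_long)) * 0.1
W_out = rng.normal(size=(vocab, d_short + d_long)) * 0.1

def step(x, h_short, h_long):
    # Short-range state: updated directly from the current input (syntax-like, local context).
    h_short = np.tanh(W_xs @ x + W_ss @ h_short)
    # Long-range state: evolves more slowly, summarizing the short state (semantics-like, global context).
    h_long = np.tanh(W_sl @ h_short + W_ll @ h_long)
    # The two states are merged for the next-word prediction.
    logits = W_out @ np.concatenate([h_short, h_long])
    probs = np.exp(logits - logits.max())
    return probs / probs.sum(), h_short, h_long

h_s, h_l = np.zeros(d_short), np.zeros(d_long)
for _ in range(5):  # run over a dummy 5-token sequence
    probs, h_s, h_l = step(rng.normal(size=d_in), h_s, h_l)
print(probs.shape)
```

The design point is the separation: because the two hidden states evolve at different rates, short-range (syntactic) and long-range (semantic) dependencies are captured by distinct representations before being merged at the output layer.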

@inproceedings{Oualil2016,
title = {Long-short range context neural networks for language models},
author = {Youssef Oualil and Mittul Singh and Clayton Greenberg and Dietrich Klakow},
url = {https://aclanthology.org/D16-1154/},
year = {2016},
date = {2016},
booktitle = {Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016)},
publisher = {Association for Computational Linguistics},
address = {Austin, Texas, USA},
abstract = {The goal of language modeling techniques is to capture the statistical and structural properties of natural languages from training corpora. This task typically involves the learning of short range dependencies, which generally model the syntactic properties of a language and/or long range dependencies, which are semantic in nature. We propose in this paper a new multi-span architecture, which separately models the short and long context information while it dynamically merges them to perform the language modeling task. This is done through a novel recurrent Long-Short Range Context (LSRC) network, which explicitly models the local (short) and global (long) context using two separate hidden states that evolve in time. This new architecture is an adaptation of the Long-Short Term Memory network (LSTM) to take into account the linguistic properties. Extensive experiments conducted on the Penn Treebank (PTB) and the Large Text Compression Benchmark (LTCB) corpus showed a significant reduction of the perplexity when compared to state-of-the-art language modeling techniques.},
pubstate = {published},
type = {inproceedings}
}

Copy BibTeX to Clipboard

Project:   B4

Schneegass, Stefan; Oualil, Youssef; Bulling, Andreas

SkullConduct: Biometric User Identification on Eyewear Computers Using Bone Conduction Through the Skull Inproceedings

Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, ACM, pp. 1379-1384, New York, NY, USA, 2016, ISBN 978-1-4503-3362-7.

Secure user identification is important for the increasing number of eyewear computers but limited input capabilities pose significant usability challenges for established knowledge-based schemes, such as passwords or PINs. We present SkullConduct, a biometric system that uses bone conduction of sound through the user’s skull as well as a microphone readily integrated into many of these devices, such as Google Glass. At the core of SkullConduct is a method to analyze the characteristic frequency response created by the user’s skull using a combination of Mel Frequency Cepstral Coefficient (MFCC) features as well as a computationally light-weight 1NN classifier. We report on a controlled experiment with 10 participants that shows that this frequency response is person-specific and stable — even when taking off and putting on the device multiple times — and thus serves as a robust biometric. We show that our method can identify users with 97.0% accuracy and authenticate them with an equal error rate of 6.9%, thereby bringing biometric user identification to eyewear computers equipped with bone conduction technology.
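The identification step can be illustrated with a minimal nearest-neighbour sketch. The enrollment vectors below are synthetic stand-ins for the MFCC features the paper extracts from the bone-conducted signal; the real pipeline records a known audio signal played through the skull and computes MFCCs from the microphone response.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical enrollment data: one 13-dimensional MFCC-style feature vector per user.
enrolled = {f"user{i}": rng.normal(loc=i, scale=0.3, size=13) for i in range(3)}

def identify_1nn(probe: np.ndarray) -> str:
    """Return the enrolled user whose feature vector is nearest (Euclidean) to the probe."""
    return min(enrolled, key=lambda u: float(np.linalg.norm(enrolled[u] - probe)))

# A fresh recording from user1, slightly perturbed to mimic session-to-session variation.
probe = enrolled["user1"] + rng.normal(scale=0.1, size=13)
print(identify_1nn(probe))  # prints "user1"
```

The paper's finding that the skull's frequency response is stable across device re-mounting is what makes such a lightweight classifier viable: the per-user feature vectors stay well separated.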

@inproceedings{Schneegass:2016:SBU:2858036.2858152,
title = {SkullConduct: Biometric User Identification on Eyewear Computers Using Bone Conduction Through the Skull},
author = {Stefan Schneegass and Youssef Oualil and Andreas Bulling},
url = {http://doi.acm.org/10.1145/2858036.2858152},
doi = {https://doi.org/10.1145/2858036.2858152},
year = {2016},
date = {2016},
booktitle = {Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems},
isbn = {978-1-4503-3362-7},
pages = {1379-1384},
publisher = {ACM},
address = {New York, NY, USA},
abstract = {Secure user identification is important for the increasing number of eyewear computers but limited input capabilities pose significant usability challenges for established knowledge-based schemes, such as passwords or PINs. We present SkullConduct, a biometric system that uses bone conduction of sound through the user's skull as well as a microphone readily integrated into many of these devices, such as Google Glass. At the core of SkullConduct is a method to analyze the characteristic frequency response created by the user's skull using a combination of Mel Frequency Cepstral Coefficient (MFCC) features as well as a computationally light-weight 1NN classifier. We report on a controlled experiment with 10 participants that shows that this frequency response is person-specific and stable -- even when taking off and putting on the device multiple times -- and thus serves as a robust biometric. We show that our method can identify users with 97.0% accuracy and authenticate them with an equal error rate of 6.9%, thereby bringing biometric user identification to eyewear computers equipped with bone conduction technology.},
pubstate = {published},
type = {inproceedings}
}

Copy BibTeX to Clipboard

Project:   B4

Bojar, Ondřej; Chatterjee, Rajen; Federmann, Christian; Graham, Yvette; Haddow, Barry; Huck, Matthias; Jimeno Yepes, Antonio; Koehn, Philipp; Logacheva, Varvara; Monz, Christof; Negri, Matteo; Névéol, Aurélie; Neves, Mariana; Popel, Martin; Post, Matt; Rubino, Raphael; Scarton, Carolina; Specia, Lucia; Turchi, Marco; Verspoor, Karin; Zampieri, Marcos

Findings of the 2016 Conference on Machine Translation Inproceedings

Proceedings of the First Conference on Machine Translation, Association for Computational Linguistics, pp. 131-198, Berlin, Germany, 2016.

This paper presents the results of the WMT16 shared tasks, which included five machine translation (MT) tasks (standard news, IT-domain, biomedical, multimodal, pronoun), three evaluation tasks (metrics, tuning, run-time estimation of MT quality), and an automatic post-editing task and bilingual document alignment task. This year, 102 MT systems from 24 institutions (plus 36 anonymized online systems) were submitted to the 12 translation directions in the news translation task. The IT-domain task received 31 submissions from 12 institutions in 7 directions and the Biomedical task received 15 submissions from 5 institutions. Evaluation was both automatic and manual (relative ranking and 100-point scale assessments). The quality estimation task had three subtasks, with a total of 14 teams, submitting 39 entries. The automatic post-editing task had a total of 6 teams, submitting 11 entries.

@inproceedings{bojar-EtAl:2016:WMT1,
title = {Findings of the 2016 Conference on Machine Translation},
author = {Ond{\v{r}}ej Bojar and Rajen Chatterjee and Christian Federmann and Yvette Graham and Barry Haddow and Matthias Huck and Antonio Jimeno Yepes and Philipp Koehn and Varvara Logacheva and Christof Monz and Matteo Negri and Aur{\'e}lie N{\'e}v{\'e}ol and Mariana Neves and Martin Popel and Matt Post and Raphael Rubino and Carolina Scarton and Lucia Specia and Marco Turchi and Karin Verspoor and Marcos Zampieri},
url = {http://www.aclweb.org/anthology/W/W16/W16-2301},
year = {2016},
date = {2016-08-01},
booktitle = {Proceedings of the First Conference on Machine Translation},
pages = {131-198},
publisher = {Association for Computational Linguistics},
address = {Berlin, Germany},
abstract = {This paper presents the results of the WMT16 shared tasks, which included five machine translation (MT) tasks (standard news, IT-domain, biomedical, multimodal, pronoun), three evaluation tasks (metrics, tuning, run-time estimation of MT quality), and an automatic post-editing task and bilingual document alignment task. This year, 102 MT systems from 24 institutions (plus 36 anonymized online systems) were submitted to the 12 translation directions in the news translation task. The IT-domain task received 31 submissions from 12 institutions in 7 directions and the Biomedical task received 15 submissions from 5 institutions. Evaluation was both automatic and manual (relative ranking and 100-point scale assessments). The quality estimation task had three subtasks, with a total of 14 teams, submitting 39 entries. The automatic post-editing task had a total of 6 teams, submitting 11 entries.},
pubstate = {published},
type = {inproceedings}
}

Copy BibTeX to Clipboard

Project:   B6

Varjokallio, Matti; Klakow, Dietrich

Unsupervised morph segmentation and statistical language models for vocabulary expansion Inproceedings

Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Association for Computational Linguistics, pp. 175-180, Berlin, Germany, 2016.

This work explores the use of unsupervised morph segmentation along with statistical language models for the task of vocabulary expansion. Unsupervised vocabulary expansion has large potential for improving vocabulary coverage and performance in different natural language processing tasks, especially in less-resourced settings on morphologically rich languages. We propose a combination of unsupervised morph segmentation and statistical language models and evaluate on languages from the Babel corpus. The method is shown to perform well for all the evaluated languages when compared to the previous work on the task.

@inproceedings{varjokallio-klakow:2016:P16-2,
title = {Unsupervised morph segmentation and statistical language models for vocabulary expansion},
author = {Matti Varjokallio and Dietrich Klakow},
url = {http://anthology.aclweb.org/P16-2029},
year = {2016},
date = {2016-08-01},
booktitle = {Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
pages = {175-180},
publisher = {Association for Computational Linguistics},
address = {Berlin, Germany},
abstract = {This work explores the use of unsupervised morph segmentation along with statistical language models for the task of vocabulary expansion. Unsupervised vocabulary expansion has large potential for improving vocabulary coverage and performance in different natural language processing tasks, especially in less-resourced settings on morphologically rich languages. We propose a combination of unsupervised morph segmentation and statistical language models and evaluate on languages from the Babel corpus. The method is shown to perform well for all the evaluated languages when compared to the previous work on the task.},
pubstate = {published},
type = {inproceedings}
}

Copy BibTeX to Clipboard

Project:   B4

Sayeed, Asad; Hong, Xudong; Demberg, Vera

Roleo: Visualising Thematic Fit Spaces on the Web Inproceedings

Proceedings of ACL-2016 System Demonstrations, Association for Computational Linguistics, pp. 139-144, Berlin, Germany, 2016.

In this paper, we present Roleo, a web tool for visualizing the vector spaces generated by the evaluation of distributional memory (DM) models over thematic fit judgements. A thematic fit judgement is a rating of the selectional preference of a verb for an argument that fills a given thematic role. The DM approach to thematic fit judgements involves the construction of a sub-space in which a prototypical role-filler can be built for comparison to the noun being judged. We describe a publicly-accessible web tool that allows for querying and exploring these spaces as well as a technique for visualizing thematic fit sub-spaces efficiently for web use.
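The prototype-based comparison described above can be sketched as follows. The vectors here are random stand-ins for distributional memory representations (the real spaces are derived from parsed corpora), and the word labels are purely illustrative:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
dim = 50

# Toy vectors for nouns that typically fill the patient role of "eat".
base = rng.normal(size=dim)
typical_fillers = [base + rng.normal(scale=0.2, size=dim) for _ in range(4)]

# Prototype role-filler: centroid of the typical fillers for this verb/role pair.
prototype = np.mean(typical_fillers, axis=0)

# Thematic fit of a candidate noun = its similarity to the prototype.
candidate_good = base + rng.normal(scale=0.2, size=dim)  # e.g. "apple"
candidate_bad = rng.normal(size=dim)                     # e.g. "justice"

print(cosine(prototype, candidate_good), cosine(prototype, candidate_bad))
```

Roleo's contribution is making such sub-spaces explorable visually in the browser, so the prototype and candidate fillers can be inspected rather than reduced to a single similarity score.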

@inproceedings{sayeed-hong-demberg:2016:P16-4,
title = {Roleo: Visualising Thematic Fit Spaces on the Web},
author = {Asad Sayeed and Xudong Hong and Vera Demberg},
url = {https://www.researchgate.net/publication/306093691_Roleo_Visualising_Thematic_Fit_Spaces_on_the_Web},
year = {2016},
date = {2016-08-01},
booktitle = {Proceedings of ACL-2016 System Demonstrations},
pages = {139-144},
publisher = {Association for Computational Linguistics},
address = {Berlin, Germany},
abstract = {In this paper, we present Roleo, a web tool for visualizing the vector spaces generated by the evaluation of distributional memory (DM) models over thematic fit judgements. A thematic fit judgement is a rating of the selectional preference of a verb for an argument that fills a given thematic role. The DM approach to thematic fit judgements involves the construction of a sub-space in which a prototypical role-filler can be built for comparison to the noun being judged. We describe a publicly-accessible web tool that allows for querying and exploring these spaces as well as a technique for visualizing thematic fit sub-spaces efficiently for web use.},
pubstate = {published},
type = {inproceedings}
}

Copy BibTeX to Clipboard

Project:   B2
