Publications

Kravtchenko, Ekaterina; Demberg, Vera

Underinformative Event Mentions Trigger Context-Dependent Implicatures Inproceedings

Talk presented at Formal and Experimental Pragmatics: Methodological Issues of a Nascent Liaison (MXPRAG), Zentrum für Allgemeine Sprachwissenschaft (ZAS), Berlin, June 2015.

@inproceedings{Kravtchenko2015b,
title = {Underinformative Event Mentions Trigger Context-Dependent Implicatures},
author = {Ekaterina Kravtchenko and Vera Demberg},
year = {2015},
date = {2015-10-17},
booktitle = {Talk presented at Formal and Experimental Pragmatics: Methodological Issues of a Nascent Liaison (MXPRAG)},
publisher = {Zentrum f{\"u}r Allgemeine Sprachwissenschaft (ZAS), Berlin, June 2015},
pubstate = {published},
type = {inproceedings}
}

Project:   A4

Zarcone, Alessandra; Padó, Sebastian; Lenci, Alessandro

Same Same but Different: Type and Typicality in a Distributional Model of Complement Coercion Inproceedings

Word Structure and Word Usage. Proceedings of the NetWordS Final Conference. Pisa, March 30-April 1, 2015, pp. 91-94, Pisa, Italy, 2015.
We aim to model the results from a self-paced reading experiment, which tested the effect of semantic type clash and typicality on the processing of German complement coercion. We present two distributional semantic models to test if they can model the effect of both type and typicality in the psycholinguistic study. We show that one of the models, without explicitly representing type information, can account both for the effect of type and typicality in complement coercion.
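
For illustration, a minimal sketch of the kind of single-prototype distributional thematic-fit score such models build on (the toy vectors and names are invented for this example, not the models evaluated in the paper): the fit of a candidate object is its cosine similarity to the centroid of the verb's typical objects.

import numpy as np

# Cosine similarity between two vectors.
def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Thematic fit as similarity to the centroid ("prototype") of typical objects.
def thematic_fit(candidate_vec, typical_object_vecs):
    prototype = np.mean(typical_object_vecs, axis=0)
    return cosine(candidate_vec, prototype)

# Toy 3-dimensional vectors, purely for illustration.
typical = [np.array([0.9, 0.1, 0.0]), np.array([0.8, 0.2, 0.1])]
print(thematic_fit(np.array([0.85, 0.15, 0.05]), typical))  # high fit
print(thematic_fit(np.array([0.0, 0.1, 0.9]), typical))     # low fit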

@inproceedings{zarcone2015same,
title = {Same Same but Different: Type and Typicality in a Distributional Model of Complement Coercion},
author = {Alessandra Zarcone and Sebastian Padó and Alessandro Lenci},
url = {https://www.researchgate.net/publication/282740292_Same_same_but_different_Type_and_typicality_in_a_distributional_model_of_complement_coercion},
year = {2015},
date = {2015},
booktitle = {Word Structure and Word Usage. Proceedings of the NetWordS Final Conference. Pisa, March 30-April 1, 2015},
pages = {91-94},
address = {Pisa, Italy},
abstract = {

We aim to model the results from a self-paced reading experiment, which tested the effect of semantic type clash and typicality on the processing of German complement coercion. We present two distributional semantic models to test if they can model the effect of both type and typicality in the psycholinguistic study. We show that one of the models, without explicitly representing type information, can account both for the effect of type and typicality in complement coercion.
},
pubstate = {published},
type = {inproceedings}
}

Projects:   A2 A3

Demberg, Vera; Hoffmann, Jörg; Howcroft, David M.; Klakow, Dietrich; Torralba, Álvaro

Search Challenges in Natural Language Generation with Complex Optimization Objectives Journal Article

KI - Künstliche Intelligenz, Special Issue on Companion Technologies, Springer Berlin Heidelberg, 2015, ISSN 1610-1987.

Automatic natural language generation (NLG) is a difficult problem already when merely trying to come up with natural-sounding utterances. Ubiquitous applications, in particular companion technologies, pose the additional challenge of flexible adaptation to a user or a situation. This requires optimizing complex objectives such as information density, in combinatorial search spaces described using declarative input languages. We believe that AI search and planning is a natural match for these problems, and could substantially contribute to solving them effectively. We illustrate this using a concrete example NLG framework, give a summary of the relevant optimization objectives, and provide an initial list of research challenges.

@article{demberg:hoffmann:ki-2015,
title = {Search Challenges in Natural Language Generation with Complex Optimization Objectives},
author = {Vera Demberg and J{\"o}rg Hoffmann and David M. Howcroft and Dietrich Klakow and {\'A}lvaro Torralba},
url = {https://link.springer.com/article/10.1007/s13218-015-0409-5},
year = {2015},
date = {2015},
journal = {KI - K{\"u}nstliche Intelligenz, Special Issue on Companion Technologies},
publisher = {Springer Berlin Heidelberg},
abstract = {

Automatic natural language generation (NLG) is a difficult problem already when merely trying to come up with natural-sounding utterances. Ubiquitous applications, in particular companion technologies, pose the additional challenge of flexible adaptation to a user or a situation. This requires optimizing complex objectives such as information density, in combinatorial search spaces described using declarative input languages. We believe that AI search and planning is a natural match for these problems, and could substantially contribute to solving them effectively. We illustrate this using a concrete example NLG framework, give a summary of the relevant optimization objectives, and provide an initial list of research challenges.
},
pubstate = {published},
type = {article}
}

Project:   A3

Kravtchenko, Ekaterina; Demberg, Vera

Semantically Underinformative Utterances Trigger Pragmatic Inferences Proceeding

Annual Conference of the Cognitive Science Society (CogSci), Mind, Technology, and Society, Pasadena Convention Center, 2015.
Most theories of pragmatics and language processing predict that speakers avoid informationally redundant utterances. From a processing standpoint, it remains unclear what happens when listeners encounter such utterances, and how they interpret them. We argue that uninformative utterances can trigger pragmatic inferences, which increase utterance utility in line with listener expectations. In this study, we look at utterances that refer to stereotyped event sequences describing common activities (scripts). Literature on processing of event sequences shows that people automatically infer component actions, once a script is ‘invoked.’ We demonstrate that when comprehenders encounter utterances describing events that can be easily inferred from prior context, they interpret them as signifying that the event conveys new, unstated information. We also suggest that formal models of language comprehension would have difficulty in accurately estimating the predictability or potential processing cost incurred by such utterances.

@proceeding{kravtchenko:demberg,
title = {Semantically Underinformative Utterances Trigger Pragmatic Inferences},
author = {Ekaterina Kravtchenko and Vera Demberg},
url = {https://www.semanticscholar.org/paper/Semantically-underinformative-utterances-trigger-Kravtchenko-Demberg/33256a5fca918eef5de8998db5a695d9bced5975},
year = {2015},
date = {2015-10-17},
booktitle = {Annual Conference of the Cognitive Science Society (CogSci)},
address = {Mind, Technology, and Society Pasadena Convention Center},
abstract = {

Most theories of pragmatics and language processing predict that speakers avoid informationally redundant utterances. From a processing standpoint, it remains unclear what happens when listeners encounter such utterances, and how they interpret them. We argue that uninformative utterances can trigger pragmatic inferences, which increase utterance utility in line with listener expectations. In this study, we look at utterances that refer to stereotyped event sequences describing common activities (scripts). Literature on processing of event sequences shows that people automatically infer component actions, once a script is ‘invoked.’ We demonstrate that when comprehenders encounter utterances describing events that can be easily inferred from prior context, they interpret them as signifying that the event conveys new, unstated information. We also suggest that formal models of language comprehension would have difficulty in accurately estimating the predictability or potential processing cost incurred by such utterances.
},
pubstate = {published},
type = {proceeding}
}

Project:   A3

Batiukova, Olga; Bertinetto, Pier Marco; Lenci, Alessandro; Zarcone, Alessandra

Identifying Actional Features Through Semantic Priming: Cross-Romance Comparison Incollection

Taming the TAME systems. Cahiers Chronos 27, Rodopi, pp. 161-187, Amsterdam/Philadelphia, 2015.
This paper reports four semantic priming experiments in Italian and Spanish, whose goal was to verify the psychological reality of two aspectual features, resultativity and durativity. In the durativity task, the participants were asked whether the verb referred to a durable situation, in the resultativity task if it denoted a situation with a clear outcome. The results prove that both features are involved in online processing of the verb meaning: achievements ([+resultative, -durative]) and activities ([-resultative, +durative]) were processed faster in certain priming contexts. The priming patterns in the Romance languages present some striking similarities (only achievements were primed in the resultativity task) alongside some intriguing differences, and interestingly contrast with the behaviour of another language tested, Russian, whose aspectual system differs in significant ways.

@incollection{batiukova2015identifying,
title = {Identifying Actional Features Through Semantic Priming: Cross-Romance Comparison},
author = {Olga Batiukova and Pier Marco Bertinetto and Alessandro Lenci and Alessandra Zarcone},
url = {https://brill.com/display/book/edcoll/9789004292772/B9789004292772-s010.xml},
year = {2015},
date = {2015},
booktitle = {Taming the TAME systems. Cahiers Chronos 27},
pages = {161-187},
publisher = {Rodopi},
address = {Amsterdam/Philadelphia},
abstract = {

This paper reports four semantic priming experiments in Italian and Spanish, whose goal was to verify the psychological reality of two aspectual features, resultativity and durativity. In the durativity task, the participants were asked whether the verb referred to a durable situation, in the resultativity task if it denoted a situation with a clear outcome. The results prove that both features are involved in online processing of the verb meaning: achievements ([+resultative, -durative]) and activities ([-resultative, +durative]) were processed faster in certain priming contexts. The priming patterns in the Romance languages present some striking similarities (only achievements were primed in the resultativity task) alongside some intriguing differences, and interestingly contrast with the behaviour of another language tested, Russian, whose aspectual system differs in significant ways.
},
pubstate = {published},
type = {incollection}
}

Projects:   A2 A3

Rudinger, Rachel; Demberg, Vera; Modi, Ashutosh; Van Durme, Benjamin; Pinkal, Manfred

Learning to Predict Script Events from Domain-Specific Text Journal Article

Lexical and Computational Semantics (*SEM 2015), pp. 205-210, 2015.

The automatic induction of scripts (Schank and Abelson, 1977) has been the focus of many recent works. In this paper, we employ a variety of these methods to learn Schank and Abelson’s canonical restaurant script, using a novel dataset of restaurant narratives we have compiled from a website called “Dinners from Hell.” Our models learn narrative chains, script-like structures that we evaluate with the “narrative cloze” task (Chambers and Jurafsky, 2008).
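
For illustration, a minimal sketch of a narrative-cloze style evaluation (the event chains and the PMI-based scoring below are invented stand-ins, not the paper's models or data): one event is held out from a chain, all known events are ranked by their association with the remaining events, and the rank of the held-out event is recorded.

import math
from collections import Counter
from itertools import combinations

# Toy event chains standing in for narrative chains mined from text.
chains = [["enter", "order", "eat", "pay"],
          ["enter", "order", "complain", "leave"]]

event_count = Counter(e for c in chains for e in c)
pair_count = Counter(frozenset(p) for c in chains for p in combinations(set(c), 2))
total = sum(event_count.values())

# Pointwise mutual information between two events, -inf if never seen together.
def pmi(a, b):
    joint = pair_count[frozenset((a, b))]
    return math.log(joint * total / (event_count[a] * event_count[b])) if joint else float("-inf")

# Rank of the held-out event among all known events, scored against the rest of the chain.
def cloze_rank(chain, held_out):
    context = [e for e in chain if e != held_out]
    def score(candidate):
        return sum(pmi(candidate, c) for c in context)
    ranking = sorted(event_count, key=score, reverse=True)
    return ranking.index(held_out) + 1

print(cloze_rank(chains[0], "pay"))  # rank 1 on this toy data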

@article{rudinger2015learning,
title = {Learning to Predict Script Events from Domain-Specific Text},
author = {Rachel Rudinger and Vera Demberg and Ashutosh Modi and Benjamin Van Durme and Manfred Pinkal},
url = {http://www.aclweb.org/anthology/S15-1024},
year = {2015},
date = {2015},
journal = {Lexical and Computational Semantics (*SEM 2015)},
pages = {205-210},
abstract = {The automatic induction of scripts (Schank and Abelson, 1977) has been the focus of many recent works. In this paper, we employ a variety of these methods to learn Schank and Abelson’s canonical restaurant script, using a novel dataset of restaurant narratives we have compiled from a website called “Dinners from Hell.” Our models learn narrative chains, script-like structures that we evaluate with the “narrative cloze” task (Chambers and Jurafsky, 2008).},
pubstate = {published},
type = {article}
}

Project:   A3

White, Michael; Howcroft, David M.

Inducing Clause-Combining Rules: A Case Study with the SPaRKy Restaurant Corpus Inproceedings

Proc. of the 15th European Workshop on Natural Language Generation, Association for Computational Linguistics, Brighton, England, UK, 2015.

We describe an algorithm for inducing clause-combining rules for use in a traditional natural language generation architecture. An experiment pairing lexicalized text plans from the SPaRKy Restaurant Corpus with logical forms obtained by parsing the corresponding sentences demonstrates that the approach is able to learn clause-combining operations which have essentially the same coverage as those used in the SPaRKy Restaurant Corpus. This paper fills a gap in the literature, showing that it is possible to learn microplanning rules for both aggregation and discourse connective insertion, an important step towards ameliorating the knowledge acquisition bottleneck for NLG systems that produce texts with rich discourse structures using traditional architectures.

@inproceedings{white:howcroft:enlg-2015,
title = {Inducing Clause-Combining Rules: A Case Study with the SPaRKy Restaurant Corpus},
author = {Michael White and David M. Howcroft},
url = {http://www.aclweb.org/anthology/W15-4704},
year = {2015},
date = {2015},
booktitle = {Proc. of the 15th European Workshop on Natural Language Generation},
publisher = {Association for Computational Linguistics},
address = {Brighton, England, UK},
abstract = {We describe an algorithm for inducing clause-combining rules for use in a traditional natural language generation architecture. An experiment pairing lexicalized text plans from the SPaRKy Restaurant Corpus with logical forms obtained by parsing the corresponding sentences demonstrates that the approach is able to learn clause-combining operations which have essentially the same coverage as those used in the SPaRKy Restaurant Corpus. This paper fills a gap in the literature, showing that it is possible to learn microplanning rules for both aggregation and discourse connective insertion, an important step towards ameliorating the knowledge acquisition bottleneck for NLG systems that produce texts with rich discourse structures using traditional architectures.},
pubstate = {published},
type = {inproceedings}
}

Project:   A3

Tourtouri, Elli; Delogu, Francesca; Crocker, Matthew W.

ERP indices of referential informativity in visual contexts Inproceedings

Paper presented at the 28th CUNY Conference on Human Sentence Processing, University of Southern California, Los Angeles, USA, 2015.
Violations of the Maxims of Quantity occur when utterances provide more (over-specified) or less (under-specified) information than strictly required for referent identification. While behavioural data suggest that under-specified (US) expressions lead to comprehension difficulty and communicative failure, there is no consensus as to whether over-specified (OS) expressions are also detrimental to comprehension. In this study we shed light on this debate, providing neurophysiological evidence supporting the view that extra information facilitates comprehension. We further present novel evidence that referential failure due to underspecification is qualitatively different from explicit cases of referential failure, when no matching referential candidate is available in the context.

@inproceedings{Tourtourietal2015a,
title = {ERP indices of referential informativity in visual contexts},
author = {Elli Tourtouri and Francesca Delogu and Matthew W. Crocker},
url = {https://www.researchgate.net/publication/322570166_ERP_indices_of_referential_informativity_in_visual_contexts},
year = {2015},
date = {2015},
booktitle = {Paper presented at the 28th CUNY Conference on Human Sentence Processing},
publisher = {University of Southern California},
address = {Los Angeles, USA},
abstract = {

Violations of the Maxims of Quantity occur when utterances provide more (over-specified) or less (under-specified) information than strictly required for referent identification. While behavioural data suggest that under-specified (US) expressions lead to comprehension difficulty and communicative failure, there is no consensus as to whether over-specified (OS) expressions are also detrimental to comprehension. In this study we shed light on this debate, providing neurophysiological evidence supporting the view that extra information facilitates comprehension. We further present novel evidence that referential failure due to underspecification is qualitatively different from explicit cases of referential failure, when no matching referential candidate is available in the context.
},
pubstate = {published},
type = {inproceedings}
}

Projects:   A1 C3

Degaetano-Ortlieb, Stefania; Kermes, Hannah; Khamis, Ashraf; Ordan, Noam; Teich, Elke

The taming of the data: Using text mining in building a corpus for diachronic analysis Inproceedings

Varieng - From Data to Evidence (d2e), University of Helsinki, 2015.

Social and historical linguistic studies benefit from corpora encoding contextual metadata (e.g. time, register, genre) and relevant structural information (e.g. document structure). While small, handcrafted corpora offer control over selected contextual variables (e.g. the Brown/LOB corpora encoding variety, register, and time) and are readily usable for analysis, big data (e.g. Google or Microsoft n-grams) are typically poorly contextualized and considered of limited value for linguistic analysis (see, however, Lieberman et al. 2007). Similarly, when we compile new corpora, sources may not contain all relevant metadata and structural data (e.g. the Old Bailey sources vs. the richly annotated corpus in Huber 2007).

@inproceedings{Degaetano-etal2015,
title = {The taming of the data: Using text mining in building a corpus for diachronic analysis},
author = {Stefania Degaetano-Ortlieb and Hannah Kermes and Ashraf Khamis and Noam Ordan and Elke Teich},
url = {https://www.ashrafkhamis.com/d2e2015.pdf},
year = {2015},
date = {2015-10-01},
booktitle = {Varieng - From Data to Evidence (d2e)},
address = {University of Helsinki},
abstract = {Social and historical linguistic studies benefit from corpora encoding contextual metadata (e.g. time, register, genre) and relevant structural information (e.g. document structure). While small, handcrafted corpora offer control over selected contextual variables (e.g. the Brown/LOB corpora encoding variety, register, and time) and are readily usable for analysis, big data (e.g. Google or Microsoft n-grams) are typically poorly contextualized and considered of limited value for linguistic analysis (see, however, Lieberman et al. 2007). Similarly, when we compile new corpora, sources may not contain all relevant metadata and structural data (e.g. the Old Bailey sources vs. the richly annotated corpus in Huber 2007).},
pubstate = {published},
type = {inproceedings}
}

Project:   B1

Torabi Asr, Fatemeh; Demberg, Vera

Uniform Information Density at the Level of Discourse Relations: Negation Markers and Discourse Connective Omission Inproceedings

IWCS 2015, pp. 118, 2015.

About half of the discourse relations annotated in Penn Discourse Treebank (Prasad et al., 2008) are not explicitly marked using a discourse connective. But we do not have extensive theories of when or why a discourse relation is marked explicitly or when the connective is omitted. Asr and Demberg (2012a) have suggested an information-theoretic perspective according to which discourse connectives are more likely to be omitted when they are marking a relation that is expected or predictable. This account is based on the Uniform Information Density theory (Levy and Jaeger, 2007), which suggests that speakers choose among alternative formulations that are allowed in their language the ones that achieve a roughly uniform rate of information transmission. Optional discourse markers should thus be omitted if they would lead to a trough in information density, and be inserted in order to avoid peaks in information density. We here test this hypothesis by observing how far a specific cue, negation in any form, affects the discourse relations that can be predicted to hold in a text, and how the presence of this cue in turn affects the use of explicit discourse connectives.
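
For illustration, a hypothetical sketch of the information-density reasoning described above (the probabilities and the threshold are invented numbers, not estimates from the paper): if a cue such as negation makes a discourse relation predictable, its surprisal is low and an optional connective can be omitted; if the relation is unexpected, the connective is inserted to avoid a peak in information density.

import math

# Toy conditional probabilities P(relation | cue), for illustration only.
p_relation_given_cue = {
    ("contrast", "negation_present"): 0.55,
    ("contrast", "no_negation"): 0.15,
}

# Surprisal in bits of a relation given a contextual cue.
def surprisal(relation, cue):
    return -math.log2(p_relation_given_cue[(relation, cue)])

# Insert the connective only when the relation would otherwise be too surprising.
def marks_connective(relation, cue, threshold_bits=2.0):
    return surprisal(relation, cue) > threshold_bits

print(marks_connective("contrast", "negation_present"))  # False: predictable, omit
print(marks_connective("contrast", "no_negation"))       # True: unexpected, mark it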

@inproceedings{asr2015uniform,
title = {Uniform Information Density at the Level of Discourse Relations: Negation Markers and Discourse Connective Omission},
author = {Fatemeh Torabi Asr and Vera Demberg},
url = {https://www.semanticscholar.org/paper/Uniform-Information-Density-at-the-Level-of-Markers-Asr-Demberg/cee6437e3aba3e772ef8cc7e9aaf3d7ba1114d8b},
year = {2015},
date = {2015},
booktitle = {IWCS 2015},
pages = {118},
abstract = {About half of the discourse relations annotated in Penn Discourse Treebank (Prasad et al., 2008) are not explicitly marked using a discourse connective. But we do not have extensive theories of when or why a discourse relation is marked explicitly or when the connective is omitted. Asr and Demberg (2012a) have suggested an information-theoretic perspective according to which discourse connectives are more likely to be omitted when they are marking a relation that is expected or predictable. This account is based on the Uniform Information Density theory (Levy and Jaeger, 2007), which suggests that speakers choose among alternative formulations that are allowed in their language the ones that achieve a roughly uniform rate of information transmission. Optional discourse markers should thus be omitted if they would lead to a trough in information density, and be inserted in order to avoid peaks in information density. We here test this hypothesis by observing how far a specific cue, negation in any form, affects the discourse relations that can be predicted to hold in a text, and how the presence of this cue in turn affects the use of explicit discourse connectives.},
pubstate = {published},
type = {inproceedings}
}

Project:   B2

Sayeed, Asad; Fischer, Stefan; Demberg, Vera

To What Extent Do We Adapt Spoken Word Durations to a Domain? Inproceedings

Architectures and mechanisms for language processing (AMLaP), Malta, 2015.

@inproceedings{AMLaP2015a,
title = {To What Extent Do We Adapt Spoken Word Durations to a Domain?},
author = {Asad Sayeed and Stefan Fischer and Vera Demberg},
url = {https://www.bibsonomy.org/bibtex/ddebcecc8adb8f40a0abf87294b11a02},
year = {2015},
date = {2015},
booktitle = {Architectures and mechanisms for language processing (AMLaP)},
address = {Malta},
pubstate = {published},
type = {inproceedings}
}

Project:   B2

Malisz, Zofia; Schulz, Erika; Oh, Yoon Mi; Andreeva, Bistra; Möbius, Bernd

Dimensions of segmental variability: relationships between information density and prosodic structure Inproceedings

Workshop "Modeling variability in speech", Stuttgart, 2015.

Contextual predictability variation affects phonological and phonetic structure. Reduction and expansion of acoustic-phonetic features is also characteristic of prosodic variability. In this study, we assess the impact of surprisal and prosodic structure on phonetic encoding, both independently of each other and in interaction. We model segmental duration, vowel space size and spectral characteristics of vowels and consonants as a function of surprisal as well as of syllable prominence, phrase boundary, and speech rate. Correlates of phonetic encoding density are extracted from a subset of the BonnTempo corpus for six languages: American English, Czech, Finnish, French, German, and Polish. Surprisal is estimated from segmental n-gram language models trained on large text corpora. Our findings are generally compatible with a weak version of Aylett and Turk’s Smooth Signal Redundancy hypothesis, suggesting that prosodic structure mediates between the requirements of efficient communication and the speech signal. However, this mediation is not perfect, as we found evidence for additional, direct effects of changes in surprisal on the phonetic structure of utterances. These effects appear to be stable across different speech rates.
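
For illustration, a rough sketch of how segment-level surprisal can be read off an n-gram model (a bigram model with add-one smoothing; the "phone" sequences are invented placeholders, not the corpus or the language models used in the study):

import math
from collections import Counter

training = [list("kata"), list("taka"), list("kaat")]  # toy phone sequences
unigrams = Counter(p for seq in training for p in seq)
bigrams = Counter(b for seq in training for b in zip(seq, seq[1:]))
vocab = len(unigrams)

# Add-one smoothed bigram surprisal in bits: -log2 P(phone | previous phone).
def surprisal(prev, phone):
    p = (bigrams[(prev, phone)] + 1) / (unigrams[prev] + vocab)
    return -math.log2(p)

word = list("kata")
for prev, phone in zip(word, word[1:]):
    print(prev, "->", phone, round(surprisal(prev, phone), 2))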

@inproceedings{malisz15,
title = {Dimensions of segmental variability: relationships between information density and prosodic structure},
author = {Zofia Malisz and Erika Schulz and Yoon Mi Oh and Bistra Andreeva and Bernd M{\"o}bius},
url = {https://www.frontiersin.org/articles/10.3389/fcomm.2018.00025/full},
year = {2015},
date = {2015},
booktitle = {Workshop "Modeling variability in speech"},
address = {Stuttgart},
abstract = {

Contextual predictability variation affects phonological and phonetic structure. Reduction and expansion of acoustic-phonetic features is also characteristic of prosodic variability. In this study, we assess the impact of surprisal and prosodic structure on phonetic encoding, both independently of each other and in interaction. We model segmental duration, vowel space size and spectral characteristics of vowels and consonants as a function of surprisal as well as of syllable prominence, phrase boundary, and speech rate. Correlates of phonetic encoding density are extracted from a subset of the BonnTempo corpus for six languages: American English, Czech, Finnish, French, German, and Polish. Surprisal is estimated from segmental n-gram language models trained on large text corpora. Our findings are generally compatible with a weak version of Aylett and Turk's Smooth Signal Redundancy hypothesis, suggesting that prosodic structure mediates between the requirements of efficient communication and the speech signal. However, this mediation is not perfect, as we found evidence for additional, direct effects of changes in surprisal on the phonetic structure of utterances. These effects appear to be stable across different speech rates.
},
pubstate = {published},
type = {inproceedings}
}

Project:   C1

Khamis, Ashraf; Degaetano-Ortlieb, Stefania; Kermes, Hannah; Knappen, Jörg; Ordan, Noam; Teich, Elke

A resource for the diachronic study of scientific English: Introducing the Royal Society Corpus Inproceedings

Corpus Linguistics 2015, Lancaster, 2015.
There is a wealth of corpus resources for the study of contemporary scientific English, ranging from written vs. spoken mode to expert vs. learner productions as well as different genres, registers and domains (e.g. MICASE (Simpson et al. 2002), BAWE (Nesi 2011) and SciTex (Degaetano-Ortlieb et al. 2013)). The multi-genre corpora of English (notably BNC and COCA) include fair amounts of scientific text too.

@inproceedings{Khamis-etal2015,
title = {A resource for the diachronic study of scientific English: Introducing the Royal Society Corpus},
author = {Ashraf Khamis and Stefania Degaetano-Ortlieb and Hannah Kermes and J{\"o}rg Knappen and Noam Ordan and Elke Teich},
url = {https://www.researchgate.net/publication/331648570_A_resource_for_the_diachronic_study_of_scientific_English_Introducing_the_Royal_Society_Corpus},
year = {2015},
date = {2015-07-01},
booktitle = {Corpus Linguistics 2015},
address = {Lancaster},
abstract = {

There is a wealth of corpus resources for the study of contemporary scientific English, ranging from written vs. spoken mode to expert vs. learner productions as well as different genres, registers and domains (e.g. MICASE (Simpson et al. 2002), BAWE (Nesi 2011) and SciTex (Degaetano-Ortlieb et al. 2013)). The multi-genre corpora of English (notably BNC and COCA) include fair amounts of scientific text too.
},
pubstate = {published},
type = {inproceedings}
}

Project:   B1

Degaetano-Ortlieb, Stefania; Kermes, Hannah; Khamis, Ashraf; Knappen, Jörg; Teich, Elke

Information Density in Scientific Writing: A Diachronic Perspective Inproceedings

"Challenging Boundaries" - 42nd International Systemic Functional Congress (ISFCW2015), RWTH Aachen University, 2015.
We report on a project investigating the development of scientific writing in English from the mid-17th century to present. While scientific discourse is a much researched topic, including its historical development (see e.g. Banks (2008) in the context of Systemic Functional Grammar), it has so far not been modeled from the perspective of information density. Our starting assumption is that as science develops to be an established socio-cultural domain, it becomes more specialized and conventionalized. Thus, denser linguistic encodings are required for communication to be functional, potentially increasing the information density of scientific texts (cf. Halliday and Martin, 1993:54-68). More specifically, we pursue the following hypotheses: (1) As a reflex of specialization, scientific texts will exhibit a greater encoding density over time, i.e. denser linguistic forms will be increasingly used. (2) As a reflex of conventionalization, scientific texts will exhibit greater linguistic uniformity over time, i.e. the linguistic forms used will be less varied. We further assume that the effects of specialization and conventionalization in the linguistic signal are measurable independently in terms of information density (see below). We have built a diachronic corpus of scientific texts from the Transactions and Proceedings of the Royal Society of London. We have chosen these materials due to the prominent role of the Royal Society in forming English scientific discourse (cf. Atkinson, 1998). At the time of writing, the corpus comprises 23 million tokens for the period of 1665-1870 and has been normalized, tokenized and part-of-speech tagged. For analysis, we combine methods from register theory (Halliday and Hasan, 1985) and computational language modeling (Manning et al., 2009: 237-240). The former provides us with features that are potentially register-forming (cf. also Ure, 1971; 1982); the latter provides us with models with which we can measure information density. For analysis, we pursue two complementary methodological approaches: (a) Pattern-based extraction and quantification of linguistic constructions that are potentially involved in manipulating information density. Here, basically all linguistic levels are relevant (cf. Harris, 1991), from lexis and grammar to cohesion and generic structure. We have started with the level of lexico-grammar, inspecting for instance morphological compression (derivational processes such as conversion, compounding etc.) and syntactic reduction (e.g. reduced vs full relative clauses). (b) Measuring information density using information-theoretic models (cf. Shannon, 1949). In current practice, information density is measured as the probability of an item conditioned by context. For our purposes, we need to compare such probability distributions to assess the relative information density of texts along a time line. In the talk, we introduce our corpus (metadata, preprocessing, linguistic annotation) and present selected analyses of relative information density and associated linguistic variation in the given time period (1665-1870).
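
For illustration, one standard way to make "the probability of an item conditioned by context" concrete is surprisal; a text's information density can then be summarized as mean surprisal (a common formalization given here for illustration, not a formula quoted from the abstract):

\[
  S(w_i) = -\log_2 P(w_i \mid w_{i-n+1}, \dots, w_{i-1}), \qquad
  \mathrm{ID}(T) = \frac{1}{|T|} \sum_{i=1}^{|T|} S(w_i)
\]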

@inproceedings{Degaetano-etal2015b,
title = {Information Density in Scientific Writing: A Diachronic Perspective},
author = {Stefania Degaetano-Ortlieb and Hannah Kermes and Ashraf Khamis and J{\"o}rg Knappen and Elke Teich},
url = {https://www.researchgate.net/publication/331648534_Information_Density_in_Scientific_Writing_A_Diachronic_Perspective},
year = {2015},
date = {2015-07-01},
booktitle = {"Challenging Boundaries" - 42nd International Systemic Functional Congress (ISFCW2015)},
address = {RWTH Aachen University},
abstract = {

We report on a project investigating the development of scientific writing in English from the mid-17th century to present. While scientific discourse is a much researched topic, including its historical development (see e.g. Banks (2008) in the context of Systemic Functional Grammar), it has so far not been modeled from the perspective of information density. Our starting assumption is that as science develops to be an established socio-cultural domain, it becomes more specialized and conventionalized. Thus, denser linguistic encodings are required for communication to be functional, potentially increasing the information density of scientific texts (cf. Halliday and Martin, 1993:54-68). More specifically, we pursue the following hypotheses: (1) As a reflex of specialization, scientific texts will exhibit a greater encoding density over time, i.e. denser linguistic forms will be increasingly used. (2) As a reflex of conventionalization, scientific texts will exhibit greater linguistic uniformity over time, i.e. the linguistic forms used will be less varied. We further assume that the effects of specialization and conventionalization in the linguistic signal are measurable independently in terms of information density (see below). We have built a diachronic corpus of scientific texts from the Transactions and Proceedings of the Royal Society of London. We have chosen these materials due to the prominent role of the Royal Society in forming English scientific discourse (cf. Atkinson, 1998). At the time of writing, the corpus comprises 23 million tokens for the period of 1665-1870 and has been normalized, tokenized and part-of-speech tagged. For analysis, we combine methods from register theory (Halliday and Hasan, 1985) and computational language modeling (Manning et al., 2009: 237-240). The former provides us with features that are potentially register-forming (cf. also Ure, 1971; 1982); the latter provides us with models with which we can measure information density. For analysis, we pursue two complementary methodological approaches: (a) Pattern-based extraction and quantification of linguistic constructions that are potentially involved in manipulating information density. Here, basically all linguistic levels are relevant (cf. Harris, 1991), from lexis and grammar to cohesion and generic structure. We have started with the level of lexico-grammar, inspecting for instance morphological compression (derivational processes such as conversion, compounding etc.) and syntactic reduction (e.g. reduced vs full relative clauses). (b) Measuring information density using information-theoretic models (cf. Shannon, 1949). In current practice, information density is measured as the probability of an item conditioned by context. For our purposes, we need to compare such probability distributions to assess the relative information density of texts along a time line. In the talk, we introduce our corpus (metadata, preprocessing, linguistic annotation) and present selected analyses of relative information density and associated linguistic variation in the given time period (1665-1870).
},
pubstate = {published},
type = {inproceedings}
}

Project:   B1

Greenberg, Clayton; Demberg, Vera; Sayeed, Asad

Verb Polysemy and Frequency Effects in Thematic Fit Modeling Inproceedings

Proceedings of the 6th Workshop on Cognitive Modeling and Computational Linguistics, Association for Computational Linguistics, pp. 48-57, Denver, Colorado, 2015.

While several data sets for evaluating thematic fit of verb-role-filler triples exist, they do not control for verb polysemy. Thus, it is unclear how verb polysemy affects human ratings of thematic fit and how best to model that. We present a new dataset of human ratings on high vs. low-polysemy verbs matched for verb frequency, together with high vs. low-frequency and well-fitting vs. poorly-fitting patient role-fillers. Our analyses show that low-polysemy verbs produce stronger thematic fit judgements than verbs with higher polysemy. Role-filler frequency, on the other hand, had little effect on ratings. We show that these results can best be modeled in a vector space using a clustering technique to create multiple prototype vectors representing different “senses” of the verb.
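
For illustration, a minimal sketch of the multiple-prototype idea (the toy vectors, and k-means as the clustering step, are assumptions made for this example, not the authors' released implementation): the vectors of a verb's attested patients are clustered into k "sense" prototypes, and a candidate filler is scored by its best cosine similarity to any prototype.

import numpy as np
from sklearn.cluster import KMeans

# Cosine similarity between two vectors.
def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Thematic fit as the best similarity to any of k cluster prototypes.
def fit_to_prototypes(candidate, filler_vectors, k=2):
    centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(filler_vectors).cluster_centers_
    return max(cosine(candidate, c) for c in centers)

# Toy vectors: two "senses" of a polysemous verb attract different patient types.
fillers = np.array([[1.0, 0.1, 0.0], [0.9, 0.2, 0.1],   # sense 1 patients
                    [0.0, 0.1, 1.0], [0.1, 0.0, 0.9]])  # sense 2 patients
print(fit_to_prototypes(np.array([0.95, 0.15, 0.05]), fillers))  # fits sense 1 well
print(fit_to_prototypes(np.array([0.5, 0.5, 0.5]), fillers))     # fits neither well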

@inproceedings{greenberg-demberg-sayeed:2015:CMCL,
title = {Verb Polysemy and Frequency Effects in Thematic Fit Modeling},
author = {Clayton Greenberg and Vera Demberg and Asad Sayeed},
url = {http://www.aclweb.org/anthology/W15-1106},
year = {2015},
date = {2015-06-01},
booktitle = {Proceedings of the 6th Workshop on Cognitive Modeling and Computational Linguistics},
pages = {48-57},
publisher = {Association for Computational Linguistics},
address = {Denver, Colorado},
abstract = {While several data sets for evaluating thematic fit of verb-role-filler triples exist, they do not control for verb polysemy. Thus, it is unclear how verb polysemy affects human ratings of thematic fit and how best to model that. We present a new dataset of human ratings on high vs. low-polysemy verbs matched for verb frequency, together with high vs. low-frequency and well-fitting vs. poorly-fitting patient role-fillers. Our analyses show that low-polysemy verbs produce stronger thematic fit judgements than verbs with higher polysemy. Role-filler frequency, on the other hand, had little effect on ratings. We show that these results can best be modeled in a vector space using a clustering technique to create multiple prototype vectors representing different “senses” of the verb.},
pubstate = {published},
type = {inproceedings}
}

Projects:   B2 B4

Crocker, Matthew W.; Demberg, Vera; Teich, Elke

Information Density and Linguistic Encoding (IDeaL) Journal Article

KI - Künstliche Intelligenz, 30, pp. 77-81, 2015.

We introduce IDEAL (Information Density and Linguistic Encoding), a collaborative research center that investigates the hypothesis that language use may be driven by the optimal use of the communication channel. From the point of view of linguistics, our approach promises to shed light on selected aspects of language variation that are hitherto not sufficiently explained. Applications of our research can be envisaged in various areas of natural language processing and AI, including machine translation, text generation, speech synthesis and multimodal interfaces.

@article{crocker:demberg:teich,
title = {Information Density and Linguistic Encoding (IDeaL)},
author = {Matthew W. Crocker and Vera Demberg and Elke Teich},
url = {http://link.springer.com/article/10.1007/s13218-015-0391-y/fulltext.html},
doi = {https://doi.org/10.1007/s13218-015-0391-y},
year = {2015},
date = {2015},
journal = {KI - K{\"u}nstliche Intelligenz},
pages = {77-81},
volume = {30},
number = {1},
abstract = {

We introduce IDEAL (Information Density and Linguistic Encoding), a collaborative research center that investigates the hypothesis that language use may be driven by the optimal use of the communication channel. From the point of view of linguistics, our approach promises to shed light on selected aspects of language variation that are hitherto not sufficiently explained. Applications of our research can be envisaged in various areas of natural language processing and AI, including machine translation, text generation, speech synthesis and multimodal interfaces.
},
pubstate = {published},
type = {article}
}

Projects:   A1 A3 B1

Kampmann, Alexander; Thater, Stefan; Pinkal, Manfred

A Case-Study of Automatic Participant Labeling Inproceedings

Proceedings of the International Conference of the German Society for Computational Linguistics and Language Technology (GSCL 2015), 2015.

Knowledge about stereotypical activities like visiting a restaurant or checking in at the airport is an important component for modeling text understanding. We report on a case study of automatically relating texts to scripts representing such stereotypical knowledge. We focus on the subtask of mapping noun phrases in a text to participants in the script. We analyse the effect of various similarity measures and show that substantial positive results can be achieved on this complex task, indicating that the general problem is solvable in principle.

@inproceedings{kampmann2015case,
title = {A Case-Study of Automatic Participant Labeling},
author = {Alexander Kampmann and Stefan Thater and Manfred Pinkal},
url = {https://www.bibsonomy.org/bibtex/256c2839962cccb21f7a2d41b3a83267?postOwner=sfb1102&intraHash=132779a64f2563005c65ee9cc14beb5f},
year = {2015},
date = {2015},
booktitle = {Proceedings of the International Conference of the German Society for Computational Linguistics and Language Technology (GSCL 2015)},
abstract = {Knowledge about stereotypical activities like visiting a restaurant or checking in at the airport is an important component for modeling text understanding. We report on a case study of automatically relating texts to scripts representing such stereotypical knowledge. We focus on the subtask of mapping noun phrases in a text to participants in the script. We analyse the effect of various similarity measures and show that substantial positive results can be achieved on this complex task, indicating that the general problem is solvable in principle.},
pubstate = {published},
type = {inproceedings}
}

Project:   A2

Rohrbach, Marcus; Rohrbach, Anna; Regneri, Michaela; Amin, Sikandar; Andriluka, Mykhaylo; Pinkal, Manfred; Schiele, Bernt

Recognizing Fine-Grained and Composite Activities using Hand-Centric Features and Script Data Journal Article

International Journal of Computer Vision, pp. 1-28, 2015.

Activity recognition has shown impressive progress in recent years. However, the challenges of detecting fine-grained activities and understanding how they are combined into composite activities have been largely overlooked. In this work we approach both tasks and present a dataset which provides detailed annotations to address them. The first challenge is to detect fine-grained activities, which are defined by low inter-class variability and are typically characterized by fine-grained body motions. We explore how human pose and hands can help to approach this challenge by comparing two pose-based and two hand-centric features with state-of-the-art holistic features. To attack the second challenge, recognizing composite activities, we leverage the fact that these activities are compositional and that the essential components of the activities can be obtained from textual descriptions or scripts. We show the benefits of our hand-centric approach for fine-grained activity classification and detection. For composite activity recognition we find that decomposition into attributes allows sharing information across composites and is essential to attack this hard task. Using script data we can recognize novel composites without having training data for them.

@article{rohrbach2015recognizing,
title = {Recognizing Fine-Grained and Composite Activities using Hand-Centric Features and Script Data},
author = {Marcus Rohrbach and Anna Rohrbach and Michaela Regneri and Sikandar Amin and Mykhaylo Andriluka and Manfred Pinkal and Bernt Schiele},
url = {https://link.springer.com/article/10.1007/s11263-015-0851-8},
year = {2015},
date = {2015},
journal = {International Journal of Computer Vision},
pages = {1-28},
abstract = {

Activity recognition has shown impressive progress in recent years. However, the challenges of detecting fine-grained activities and understanding how they are combined into composite activities have been largely overlooked. In this work we approach both tasks and present a dataset which provides detailed annotations to address them. The first challenge is to detect fine-grained activities, which are defined by low inter-class variability and are typically characterized by fine-grained body motions. We explore how human pose and hands can help to approach this challenge by comparing two pose-based and two hand-centric features with state-of-the-art holistic features. To attack the second challenge, recognizing composite activities, we leverage the fact that these activities are compositional and that the essential components of the activities can be obtained from textual descriptions or scripts. We show the benefits of our hand-centric approach for fine-grained activity classification and detection. For composite activity recognition we find that decomposition into attributes allows sharing information across composites and is essential to attack this hard task. Using script data we can recognize novel composites without having training data for them.
},
pubstate = {published},
type = {article}
}

Project:   A2

Klakow, Dietrich; Avgustinova, Tania; Stenger, Irina; Fischer, Andrea; Jágrová, Klára

The INCOMSLAV Project Inproceedings

Seminar in formal linguistics at ÚFAL, Charles University, Prague, 2014.

The human language processing mechanism shows a remarkable robustness with different kinds of imperfect linguistic signal. The INCOMSLAV project aims at gaining insights about human retrieval of information in the mode of intercomprehension, i.e. from texts in genetically related languages not acquired through language learning. Furthermore it adds to this synchronic approach a diachronic perspective which provides the vital common denominator in establishing the extent of linguistic proximity. The languages to be analysed are chosen from the group of Slavic languages (CZ, PL, RU, BG). Whereas the possibility of intercomprehension between related languages is a generally accepted fact and the ways it functions have been studied for certain language groups, such analyses have not yet been undertaken from a systematic point of view focusing on information en- and decoding at different linguistic levels. The research programme will bring together results from the analysis of parallel corpora and from a variety of experiments with native speakers of Slavic languages and will compare them with insights of comparative historical linguistics on the relationship between Slavic languages. The results should add a cross-linguistic perspective to the question of how language users master high degrees of surprisal (due to partial incomprehensibility) and extract information from “noisy” code.

@inproceedings{dietrich2014incomslav,
title = {The INCOMSLAV Project},
author = {Dietrich Klakow and Tania Avgustinova and Irina Stenger and Andrea Fischer and Kl{\'a}ra J{\'a}grov{\'a}},
url = {https://ufal.mff.cuni.cz/events/incomslav-project},
year = {2014},
date = {2014},
booktitle = {Seminar in formal linguistics at ÚFAL},
publisher = {Charles University},
address = {Prague},
abstract = {The human language processing mechanism shows a remarkable robustness with different kinds of imperfect linguistic signal. The INCOMSLAV project aims at gaining insights about human retrieval of information in the mode of intercomprehension, i.e. from texts in genetically related languages not acquired through language learning. Furthermore it adds to this synchronic approach a diachronic perspective which provides the vital common denominator in establishing the extent of linguistic proximity. The languages to be analysed are chosen from the group of Slavic languages (CZ, PL, RU, BG). Whereas the possibility of intercomprehension between related languages is a generally accepted fact and the ways it functions have been studied for certain language groups, such analyses have not yet been undertaken from a systematic point of view focusing on information en- and decoding at different linguistic levels. The research programme will bring together results from the analysis of parallel corpora and from a variety of experiments with native speakers of Slavic languages and will compare them with insights of comparative historical linguistics on the relationship between Slavic languages. The results should add a cross-linguistic perspective to the question of how language users master high degrees of surprisal (due to partial incomprehensibility) and extract information from “noisy” code.},
pubstate = {published},
type = {inproceedings}
}

Project:   C4
