Publications

Fischer, Andrea; Vreeken, Jilles; Klakow, Dietrich

Beyond Pairwise Similarity: Quantifying and Characterizing Linguistic Similarity between Groups of Languages by MDL Journal Article

Computación y Sistemas, 21, pp. 829-839, 2017.
We present a minimum description length based algorithm for finding the regular correspondences between related languages and show how it can be used to quantify the similarity between not only pairs, but whole groups of languages directly from cognate sets. We employ a two-part code, which allows us to use the data and model complexity of the discovered correspondences as information-theoretic quantifications of the degree of regularity of cognate realizations in these languages. Unlike previous work, our approach is not limited to pairs of languages, does not limit the size of discovered correspondences, does not make assumptions about the shape or distribution of correspondences, and requires no expert knowledge or fine-tuning of parameters. We here test our approach on the Slavic languages. In a pairwise analysis of 13 Slavic languages, we show that our algorithm replicates their linguistic classification exactly. In a four-language experiment, we demonstrate how our algorithm efficiently quantifies similarity between all subsets of the analyzed four languages and find that it is excellently suited to quantifying the orthographic regularity of closely-related languages.
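
To give a feel for the two-part code: the total description length is the number of bits needed to write down the correspondence rules themselves (the model) plus the bits needed to encode the cognate data using those rules. Below is a minimal Python sketch of this bookkeeping with invented rules and usage counts; it illustrates the MDL principle only, not the discovery algorithm of the paper.

import math

def code_length(counts):
    """Shannon code length in bits for a sequence with the given symbol counts."""
    total = sum(counts.values())
    return sum(-c * math.log2(c / total) for c in counts.values())

# Hypothetical correspondence rules between two related languages, with the
# number of times each rule is used over an invented cognate set.
rule_usage = {("h", "g"): 40, ("i", "y"): 25, ("e", "ie"): 15, ("r", "rz"): 10}

# L(M): cost of writing down the rules themselves (crude 8 bits per character).
model_bits = sum(8 * (len(s) + len(t)) for s, t in rule_usage)

# L(D|M): cost of encoding the cognate pairs as a sequence of rule applications.
data_bits = code_length(rule_usage)

print(f"L(M) = {model_bits} bits, L(D|M) = {data_bits:.1f} bits, "
      f"total = {model_bits + data_bits:.1f} bits")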

@article{Fischer2017,
title = {Beyond Pairwise Similarity: Quantifying and Characterizing Linguistic Similarity between Groups of Languages by MDL},
author = {Andrea Fischer and Jilles Vreeken and Dietrich Klakow},
url = {http://www.cys.cic.ipn.mx/ojs/index.php/CyS/article/view/2865},
year = {2017},
date = {2017},
journal = {Computación y Sistemas},
pages = {829-839},
volume = {21},
number = {4},
abstract = {

We present a minimum description length based algorithm for finding the regular correspondences between related languages and show how it can be used to quantify the similarity between not only pairs, but whole groups of languages directly from cognate sets. We employ a two-part code, which allows us to use the data and model complexity of the discovered correspondences as information-theoretic quantifications of the degree of regularity of cognate realizations in these languages. Unlike previous work, our approach is not limited to pairs of languages, does not limit the size of discovered correspondences, does not make assumptions about the shape or distribution of correspondences, and requires no expert knowledge or fine-tuning of parameters. We here test our approach on the Slavic languages. In a pairwise analysis of 13 Slavic languages, we show that our algorithm replicates their linguistic classification exactly. In a four-language experiment, we demonstrate how our algorithm efficiently quantifies similarity between all subsets of the analyzed four languages and find that it is excellently suited to quantifying the orthographic regularity of closely-related languages.
},
pubstate = {published},
type = {article}
}


Project:   C4

Jágrová, Klára; Stenger, Irina; Marti, Roland; Avgustinova, Tania

Lexical and orthographic distances between Bulgarian, Czech, Polish, and Russian: A comparative analysis of the most frequent nouns Inproceedings

Joseph Emonds & Markéta Janebová (eds.), Language Use and Linguistic Structure. Proceedings of the Olomouc Linguistics Colloquium 2016, pp. 401–416, Olomouc: Palacký University, 2017.

@inproceedings{Klára2017,
title = {Lexical and orthographic distances between Bulgarian, Czech, Polish, and Russian: A comparative analysis of the most frequent nouns},
author = {Kl{\'a}ra J{\'a}grov{\'a} and Irina Stenger and Roland Marti and Tania Avgustinova},
year = {2017},
date = {2017},
editor = {Joseph Emonds and Mark{\'e}ta Janebov{\'a}},
booktitle = {Language Use and Linguistic Structure. Proceedings of the Olomouc Linguistics Colloquium 2016},
pages = {401–416},
publisher = {Palack{\'y} University},
address = {Olomouc},
pubstate = {published},
type = {inproceedings}
}


Project:   C4

Stenger, Irina; Jágrová, Klára; Fischer, Andrea; Avgustinova, Tania; Klakow, Dietrich; Marti, Roland

Modeling the Impact of Orthographic Coding on Czech-Polish and Bulgarian-Russian Reading Intercomprehension Journal Article

Nordic Journal of Linguistics, 40, pp. 175-199, 2017.

Focusing on orthography as a primary linguistic interface in every reading activity, the central research question we address here is how orthographic intelligibility can be measured and predicted between closely related languages. This paper presents methods and findings of modeling orthographic intelligibility in a reading intercomprehension scenario from the information-theoretic perspective. The focus of the study is on two Slavic language pairs: Czech–Polish (West Slavic, using the Latin script) and Bulgarian–Russian (South Slavic and East Slavic, respectively, using the Cyrillic script). In this article, we present computational methods for measuring orthographic distance and orthographic asymmetry by means of the Levenshtein algorithm, conditional entropy, and the adaptation surprisal method, which are expected to predict the influence of orthography on mutual intelligibility in reading.
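
Of the three measures named here, conditional entropy is the easiest to sketch: it quantifies how uncertain a reader remains about the target-language character given the source-language character. A minimal sketch over invented aligned character pairs (not the paper's data or code):

import math
from collections import Counter

# Invented aligned character pairs (source, target) from cognate alignments.
pairs = [("c", "č"), ("c", "c"), ("c", "č"), ("h", "g"), ("h", "g"),
         ("h", "ch"), ("w", "v"), ("w", "v"), ("w", "v"), ("w", "w")]

joint = Counter(pairs)
source = Counter(s for s, _ in pairs)
n = len(pairs)

# H(T|S) = -sum over (s,t) of p(s,t) * log2 p(t|s): the average uncertainty a
# reader faces when mapping a source character onto the target language.
h_cond = -sum((c / n) * math.log2(c / source[s]) for (s, _), c in joint.items())
print(f"H(target|source) = {h_cond:.3f} bits")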

@article{Stenger2017b,
title = {Modeling the Impact of Orthographic Coding on Czech-Polish and Bulgarian-Russian Reading Intercomprehension},
author = {Irina Stenger and Kl{\'a}ra J{\'a}grov{\'a} and Andrea Fischer and Tania Avgustinova and Dietrich Klakow and Roland Marti},
url = {https://www.cambridge.org/core/journals/nordic-journal-of-linguistics/article/modeling-the-impact-of-orthographic-coding-on-czechpolish-and-bulgarianrussian-reading-intercomprehension/363BEB5C556DFBDAC7FEED0AE06B06AA},
year = {2017},
date = {2017},
journal = {Nordic Journal of Linguistics},
pages = {175-199},
volume = {40},
number = {2},
abstract = {

Focusing on orthography as a primary linguistic interface in every reading activity, the central research question we address here is how orthographic intelligibility can be measured and predicted between closely related languages. This paper presents methods and findings of modeling orthographic intelligibility in a reading intercomprehension scenario from the information-theoretic perspective. The focus of the study is on two Slavic language pairs: Czech–Polish (West Slavic, using the Latin script) and Bulgarian–Russian (South Slavic and East Slavic, respectively, using the Cyrillic script). In this article, we present computational methods for measuring orthographic distance and orthographic asymmetry by means of the Levenshtein algorithm, conditional entropy, and the adaptation surprisal method, which are expected to predict the influence of orthography on mutual intelligibility in reading.
},
pubstate = {published},
type = {article}
}


Project:   C4

Stenger, Irina; Avgustinova, Tania; Marti, Roland

Levenshtein distance and word adaptation surprisal as methods of measuring mutual intelligibility in reading comprehension of Slavic languages Inproceedings

Computational Linguistics and Intellectual Technologies: International Conference "Dialogue 2017", 1, pp. 304-317, 2017.

In this article we validate two measuring methods: Levenshtein distance and word adaptation surprisal as potential predictors of success in reading intercomprehension. We investigate to what extent orthographic distances between Russian and other East Slavic (Ukrainian, Belarusian) and South Slavic (Bulgarian, Macedonian, Serbian) languages found by means of the Levenshtein algorithm and word adaptation surprisal correlate with comprehension of unknown Slavic languages on the basis of data obtained from Russian native speakers in online free translation task experiments. We try to find an answer to the following question: Can measuring methods such as Levenshtein distance and word adaptation surprisal be considered as a good approximation of orthographic intelligibility of unknown Slavic languages using the Cyrillic script?
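
Both measures are easy to state in code. The Levenshtein distance is the standard dynamic-programming edit distance; word adaptation surprisal sums -log2 of the probability of each aligned character correspondence. The alignment and probabilities below are invented for illustration and are not taken from the experiments:

import math

def levenshtein(a, b):
    """Edit distance with unit costs for insertion, deletion and substitution."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def word_adaptation_surprisal(aligned, char_probs):
    """Sum of -log2 p(target char | source char) over one aligned word pair."""
    return sum(-math.log2(char_probs[pair]) for pair in aligned)

# Ukrainian "хліб" vs. Russian "хлеб" ('bread'); probabilities are invented.
probs = {("х", "х"): 0.95, ("л", "л"): 0.95, ("і", "е"): 0.3, ("б", "б"): 0.95}
aligned = [("х", "х"), ("л", "л"), ("і", "е"), ("б", "б")]
print(levenshtein("хліб", "хлеб"))                # 1 edit
print(word_adaptation_surprisal(aligned, probs))  # ~1.96 bits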

@inproceedings{Stenger2017,
title = {Levenshtein distance and word adaptation surprisal as methods of measuring mutual intelligibility in reading comprehension of Slavic languages},
author = {Irina Stenger and Tania Avgustinova and Roland Marti},
url = {https://www.semanticscholar.org/paper/Levenshtein-Distance-anD-WorD-aDaptation-surprisaL-Distance/6103d388cb0398b89dec8ca36ec0be025bb6dea2},
year = {2017},
date = {2017},
booktitle = {Computational Linguistics and Intellectual Technologies: International Conference "Dialogue 2017"},
volume = {1},
pages = {304-317},
abstract = {In this article we validate two measuring methods: Levenshtein distance and word adaptation surprisal as potential predictors of success in reading intercomprehension. We investigate to what extent orthographic distances between Russian and other East Slavic (Ukrainian, Belarusian) and South Slavic (Bulgarian, Macedonian, Serbian) languages found by means of the Levenshtein algorithm and word adaptation surprisal correlate with comprehension of unknown Slavic languages on the basis of data obtained from Russian native speakers in online free translation task experiments. We try to find an answer to the following question: Can measuring methods such as Levenshtein distance and word adaptation surprisal be considered as a good approximation of orthographic intelligibility of unknown Slavic languages using the Cyrillic script?},
pubstate = {published},
type = {inproceedings}
}


Project:   C4

Jágrová, Klára; Stenger, Irina; Avgustinova, Tania; Marti, Roland

POLSKI TO JĘZYK NIESKOMPLIKOWANY? Theoretische und praktische Interkomprehension der 100 häufigsten polnischen Substantive Journal Article

Polnisch in Deutschland. Zeitschrift der Bundesvereinigung der Polnischlehrkräfte, 4/2016, pp. 5-19, 2017.

@article{Jágrová2017,
title = {POLSKI TO J{\k{E}}ZYK NIESKOMPLIKOWANY? Theoretische und praktische Interkomprehension der 100 h{\"a}ufigsten polnischen Substantive},
author = {Kl{\'a}ra J{\'a}grov{\'a} and Irina Stenger and Tania Avgustinova and Roland Marti},
year = {2017},
date = {2017},
journal = {Polnisch in Deutschland. Zeitschrift der Bundesvereinigung der Polnischlehrkr{\"a}fte},
pages = {5-19},
volume = {4/2016},
pubstate = {published},
type = {article}
}


Project:   C4

Tourtouri, Elli; Delogu, Francesca; Crocker, Matthew W.

The interplay of specificity and referential entropy reduction in situated communication Inproceedings

10th Annual Embodied and Situated Language Processing (ESLP) Conference, Higher School of Economics, Moscow, Russia, 2017.

In situated communication, reference can be established with expressions conveying either precise (Minimally-Specified, MS) or redundant (Over-Specified, OS) information. For example, while in Figure 1, “Find the blue ball” identifies exactly one object in all panels, only in the top displays is the adjective required. There is no consensus, however, concerning whether OS hinders processing (e.g., Engelhardt et al., 2011) or not (e.g., Tourtouri et al., 2015). Additionally, as incoming words incrementally restrict the referential domain, they contribute to the reduction of uncertainty regarding the target (i.e., referential entropy). Depending on the distribution of objects, the same utterance results in different entropy reduction profiles: “blue” reduces entropy by 1.58 bits in the right panels, and by .58 bits in the left ones, while “ball” reduces entropy by 1 and 2 bits, respectively. Thus, the adjective modulates the distribution of entropy reduction, resulting in uniform (UR) or non-uniform (NR) reduction profiles. This study seeks to establish whether referential processing is facilitated: a) by the use of redundant pre-nominal modification (OS), b) by the uniform reduction of entropy (cf. Jaeger, 2010), and c) when these two factors interact. Results from inspection probabilities and the Index of Cognitive Activity — a pupillometric measure of cognitive workload (Demberg & Sayeed, 2016) — indicate that processing was facilitated for both OS and UR, while fixation probabilities show a greater advantage for OS-UR. In conclusion, efficient processing is determined by both informativity of the reference and the rate of entropy reduction.
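
The bit values quoted above follow from treating the referential domain as a uniform distribution over the remaining candidate objects. Assuming a display of six equiprobable objects (an assumption consistent with the reported numbers; the actual displays are in the cited figure), a few lines reproduce both reduction profiles:

import math

def reduction_profile(set_sizes):
    """Entropy reduction per word as each word narrows the candidate set,
    with H = log2(n) for n equiprobable referents."""
    return [math.log2(a) - math.log2(b) for a, b in zip(set_sizes, set_sizes[1:])]

# Six objects; "blue" then "ball" narrow the set of candidate referents.
print(reduction_profile([6, 2, 1]))  # uniform reduction: ~[1.58, 1.0]
print(reduction_profile([6, 4, 1]))  # non-uniform:       ~[0.58, 2.0]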

@inproceedings{Tourtourietal2017d,
title = {The interplay of specificity and referential entropy reduction in situated communication},
author = {Elli Tourtouri and Francesca Delogu and Matthew W. Crocker},
url = {https://www.researchgate.net/publication/322556329_The_interplay_of_specificity_and_referential_entropy_reduction_in_situated_communication},
year = {2017},
date = {2017},
booktitle = {10th Annual Embodied and Situated Language Processing (ESLP) Conference},
publisher = {Higher School of Economics},
address = {Moscow, Russia},
abstract = {In situated communication, reference can be established with expressions conveying either precise (Minimally-Specified, MS) or redundant (Over-Specified, OS) information. For example, while in Figure 1, “Find the blue ball” identifies exactly one object in all panels, only in the top displays is the adjective required. There is no consensus, however, concerning whether OS hinders processing (e.g., Engelhardt et al., 2011) or not (e.g., Tourtouri et al., 2015). Additionally, as incoming words incrementally restrict the referential domain, they contribute to the reduction of uncertainty regarding the target (i.e., referential entropy). Depending on the distribution of objects, the same utterance results in different entropy reduction profiles: “blue” reduces entropy by 1.58 bits in the right panels, and by .58 bits in the left ones, while “ball” reduces entropy by 1 and 2 bits, respectively. Thus, the adjective modulates the distribution of entropy reduction, resulting in uniform (UR) or non-uniform (NR) reduction profiles. This study seeks to establish whether referential processing is facilitated: a) by the use of redundant pre-nominal modification (OS), b) by the uniform reduction of entropy (cf. Jaeger, 2010), and c) when these two factors interact. Results from inspection probabilities and the Index of Cognitive Activity — a pupillometric measure of cognitive workload (Demberg & Sayeed, 2016) — indicate that processing was facilitated for both OS and UR, while fixation probabilities show a greater advantage for OS-UR. In conclusion, efficient processing is determined by both informativity of the reference and the rate of entropy reduction.},
pubstate = {published},
type = {inproceedings}
}


Projects:   A1 C3

Tourtouri, Elli; Delogu, Francesca; Crocker, Matthew W.

Overspecification and uniform reduction of visual entropy facilitate referential processing Inproceedings

XPrag2017, Cologne, Germany, 2017.

Over-specifications (OS) are expressions that provide more information than minimally required for the identification of a referent, thereby violating Grice’s 2nd Quantity Maxim [1]. In Figure 1, for example, the expression “Find the blue ball” identifies exactly one object in all panels, but only in the top displays is the adjective required to disambiguate the target. In recent years, psycholinguistic research has tried to test the empirical validity of Grice’s Maxim, resulting in conflicting findings. That is, there is evidence both that OS hinders [2,3] and that it facilitates [4,5] referential processing. The current study investigates the influence of OS on visually-situated processing, when the context allows both a minimally-specified (MS) and an OS interpretation of pre-nominal adjectives (cf. Fig.1). Additionally, as the utterance unfolds over time, incoming words incrementally restrict the search space. In this sense, information on “blue” and “ball” is determined not only by their probability to occur in this context, but also by the amount of uncertainty about the target they reduce — in information theoretic terms [6]. A greater reduction of the referential set size on the adjective (A&C) results in a more uniform reduction profile (Uniform Reduction, UR), as the adjective reduces entropy by 1.58 bits and the noun by 1 bit. On the other hand, a moderate reduction of the set size on the adjective (B&D) results in a less uniform reduction profile (Nonuniform Reduction, NR): the adjective reduces entropy by .58 bits and the noun by 2 bits. This study examines whether, above and beyond any effects of specificity, the rate at which incoming words reduce visual entropy also affects referential processing.

@inproceedings{Tourtourietal2017b,
title = {Overspecification and uniform reduction of visual entropy facilitate referential processing},
author = {Elli Tourtouri and Francesca Delogu and Matthew W. Crocker},
url = {https://www.researchgate.net/publication/322571202_Over-specification_Uniform_Reduction_of_Visual_Entropy_Facilitate_Referential_Processing},
year = {2017},
date = {2017},
booktitle = {XPrag2017},
address = {Cologne, Germany},
abstract = {Over-specifications (OS) are expressions that provide more information than minimally required for the identification of a referent, thereby violating Grice’s 2nd Quantity Maxim [1]. In Figure 1, for example, the expression “Find the blue ball” identifies exactly one object in all panels, but only in the top displays is the adjective required to disambiguate the target. In recent years, psycholinguistic research has tried to test the empirical validity of Grice’s Maxim, resulting in conflicting findings. That is, there is evidence both that OS hinders [2,3] and that it facilitates [4,5] referential processing. The current study investigates the influence of OS on visually-situated processing, when the context allows both a minimally-specified (MS) and an OS interpretation of pre-nominal adjectives (cf. Fig.1). Additionally, as the utterance unfolds over time, incoming words incrementally restrict the search space. In this sense, information on “blue” and “ball” is determined not only by their probability to occur in this context, but also by the amount of uncertainty about the target they reduce — in information theoretic terms [6]. A greater reduction of the referential set size on the adjective (A&C) results in a more uniform reduction profile (Uniform Reduction, UR), as the adjective reduces entropy by 1.58 bits and the noun by 1 bit. On the other hand, a moderate reduction of the set size on the adjective (B&D) results in a less uniform reduction profile (Nonuniform Reduction, NR): the adjective reduces entropy by .58 bits and the noun by 2 bits. This study examines whether, above and beyond any effects of specificity, the rate at which incoming words reduce visual entropy also affects referential processing.},
pubstate = {published},
type = {inproceedings}
}


Projects:   A1 C3

Jachmann, Torsten; Drenhaus, Heiner; Staudte, Maria; Crocker, Matthew W.

The Influence of Speaker's Gaze on Sentence Comprehension: An ERP Investigation Inproceedings

Proceedings of the 39th Annual Conference of the Cognitive Science Society, pp. 2261-2266, 2017.

Behavioral studies demonstrate the influence of speaker gaze in visually-situated spoken language comprehension. We present an ERP experiment examining the influence of speaker's gaze congruency on listeners' comprehension of referential expressions related to a shared visual scene. We demonstrate that listeners exploit speakers' gaze toward objects in order to form sentence continuation expectations: Compared to a congruent gaze condition, we observe an increased N400 when (a) the lack of gaze (neutral) does not allow for upcoming noun prediction, and (b) the noun violates gaze-driven expectations (incongruent). The latter also results in a late (sustained) positivity, reflecting the need to update the assumed situation model. We take the combination of the N400 and late positivity as evidence that speaker gaze influences lexical retrieval and integration processes, respectively (Brouwer et al., in press). Moreover, speaker gaze is interpreted as reflecting referential intentions (Staudte & Crocker, 2011).

@inproceedings{Jachmann2017,
title = {The Influence of Speaker's Gaze on Sentence Comprehension: An ERP Investigation},
author = {Torsten Jachmann and Heiner Drenhaus and Maria Staudte and Matthew W. Crocker},
url = {https://www.researchgate.net/publication/325969989_The_Influence_of_Speaker%27s_Gaze_on_Sentence_Comprehension_An_ERP_Investigation},
year = {2017},
date = {2017},
booktitle = {Proceedings of the 39th Annual Conference of the Cognitive Science Society},
pages = {2261-2266},
abstract = {Behavioral studies demonstrate the influence of speaker gaze in visually-situated spoken language comprehension. We present an ERP experiment examining the influence of speaker's gaze congruency on listeners' comprehension of referential expressions related to a shared visual scene. We demonstrate that listeners exploit speakers' gaze toward objects in order to form sentence continuation expectations: Compared to a congruent gaze condition, we observe an increased N400 when (a) the lack of gaze (neutral) does not allow for upcoming noun prediction, and (b) the noun violates gaze-driven expectations (incongruent). The latter also results in a late (sustained) positivity, reflecting the need to update the assumed situation model. We take the combination of the N400 and late positivity as evidence that speaker gaze influences lexical retrieval and integration processes, respectively (Brouwer et al., in press). Moreover, speaker gaze is interpreted as reflecting referential intentions (Staudte & Crocker, 2011).},
pubstate = {published},
type = {inproceedings}
}


Project:   C3

Tourtouri, Elli; Delogu, Francesca; Crocker, Matthew W.

Specificity and entropy reduction in situated referential processing Inproceedings

39th Annual Conference of the Cognitive Science Society, Austin, Texas, USA, 2017.

In situated communication, reference to an entity in the shared visual context can be established using either an expression that conveys precise (minimally specified) or redundant (over-specified) information. There is, however, a long-lasting debate in psycholinguistics concerning whether the latter hinders referential processing. We present evidence from an eye-tracking experiment recording fixations as well as the Index of Cognitive Activity – a novel measure of cognitive workload – supporting the view that over-specifications facilitate processing. We further present original evidence that, above and beyond the effect of specificity, referring expressions that uniformly reduce referential entropy also benefit processing.

@inproceedings{Tourtouri2017,
title = {Specificity and entropy reduction in situated referential processing},
author = {Elli Tourtouri and Francesca Delogu and Matthew W. Crocker},
url = {https://www.mpi.nl/publications/item3309545/specificity-and-entropy-reduction-situated-referential-processing},
year = {2017},
date = {2017},
booktitle = {39th Annual Conference of the Cognitive Science Society},
address = {Austin, Texas, USA},
abstract = {In situated communication, reference to an entity in the shared visual context can be established using either an expression that conveys precise (minimally specified) or redundant (over-specified) information. There is, however, a long-lasting debate in psycholinguistics concerning whether the latter hinders referential processing. We present evidence from an eye-tracking experiment recording fixations as well as the Index of Cognitive Activity – a novel measure of cognitive workload – supporting the view that over-specifications facilitate processing. We further present original evidence that, above and beyond the effect of specificity, referring expressions that uniformly reduce referential entropy also benefit processing.},
pubstate = {published},
type = {inproceedings}
}


Project:   C3

Sikos, Les; Greenberg, Clayton; Drenhaus, Heiner; Crocker, Matthew W.

Information density of encodings: The role of syntactic variation in comprehension Inproceedings

Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017), pp. 3168-3173, Austin, Texas, USA, 2017.

The Uniform Information Density (UID) hypothesis links production strategies with comprehension processes, predicting that speakers will utilize flexibility in encoding in order to increase uniformity in the rate of information transmission, as measured by surprisal (Jaeger, 2010). Evidence in support of UID comes primarily from studies focusing on word-level effects, e.g. demonstrating that surprisal predicts the omission/inclusion of optional words. Here we investigate whether comprehenders are sensitive to the information density of alternative encodings that are more syntactically complex. We manipulated the syntactic encoding of complex noun phrases in German via meaning-preserving pre-nominal and post-nominal modification in contexts that were either predictive or non-predictive. We then used the G-maze reading task to measure online comprehension during self-paced reading. The results are consistent with the UID hypothesis. Length-adjusted reading times were facilitated for pre-nominally modified head nouns, and this effect was larger in non-predictive contexts.
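
Surprisal, the quantity behind these predictions, is the negative log-probability a language model assigns to a word in its context. A minimal sketch with invented probabilities:

import math

def surprisal(p):
    """Surprisal in bits of a word with in-context probability p."""
    return -math.log2(p)

# A head noun that is predictable in context carries little information; the
# same noun in a non-predictive context carries far more.
print(surprisal(0.25))  # predictive context: 2.0 bits
print(surprisal(0.01))  # non-predictive context: ~6.64 bits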

@inproceedings{Sikos2017,
title = {Information density of encodings: The role of syntactic variation in comprehension},
author = {Les Sikos and Clayton Greenberg and Heiner Drenhaus and Matthew W. Crocker},
url = {https://www.semanticscholar.org/paper/Information-density-of-encodings%3A-The-role-of-in-Sikos-Greenberg/06a47324b53bc53e0e4762fd1547091d8b2392f1},
year = {2017},
date = {2017},
booktitle = {Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017)},
pages = {3168-3173},
address = {Austin, Texas, USA},
abstract = {The Uniform Information Density (UID) hypothesis links production strategies with comprehension processes, predicting that speakers will utilize flexibility in encoding in order to increase uniformity in the rate of information transmission, as measured by surprisal (Jaeger, 2010). Evidence in support of UID comes primarily from studies focusing on word-level effects, e.g. demonstrating that surprisal predicts the omission/inclusion of optional words. Here we investigate whether comprehenders are sensitive to the information density of alternative encodings that are more syntactically complex. We manipulated the syntactic encoding of complex noun phrases in German via meaning-preserving pre-nominal and post-nominal modification in contexts that were either predictive or non-predictive. We then used the G-maze reading task to measure online comprehension during self-paced reading. The results are consistent with the UID hypothesis. Length-adjusted reading times were facilitated for pre-nominally modified head nouns, and this effect was larger in non-predictive contexts.},
pubstate = {published},
type = {inproceedings}
}


Project:   C3

Calvillo, Jesús

Fast and Easy: Approximating uniform information density in language production Inproceedings

39th Annual Conference of the Cognitive Science Society, Austin, Texas, USA, 2017.

A model of sentence production is presented, which implements a strategy that produces sentences with more uniform surprisal profiles, as compared to other strategies, and in accordance with the Uniform Information Density Hypothesis (Jaeger, 2006; Levy & Jaeger, 2007). The model operates at the algorithmic level, combining information concerning word probabilities and sentence lengths, and represents a first attempt to model UID as resulting from underlying factors during language production. The sentences produced by this model indeed showed the expected tendency, having more uniform surprisal profiles and lower average word surprisal in comparison to other production strategies.
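
The selection idea can be caricatured in a few lines: among candidate encodings of the same message, prefer the one whose per-word surprisal profile is most uniform. The probabilities below are invented, and the paper's model produces sentences incrementally from semantic representations rather than selecting among finished candidates:

import math

def uid_score(word_probs):
    """Variance of the per-word surprisal profile; lower = more uniform."""
    s = [-math.log2(p) for p in word_probs]
    mean = sum(s) / len(s)
    return sum((x - mean) ** 2 for x in s) / len(s)

# Two hypothetical encodings of the same message as per-word probabilities.
flat = [0.25, 0.20, 0.25]   # evenly spread information
peaky = [0.90, 0.01, 0.90]  # one highly surprising word

print(min([flat, peaky], key=uid_score) is flat)  # True: prefer the flat profile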

@inproceedings{Calvillo2017,
title = {Fast and Easy: Approximating uniform information density in language production},
author = {Jes{\'u}s Calvillo},
url = {https://cogsci.mindmodeling.org/2017/papers/0333/paper0333.pdf},
year = {2017},
date = {2017},
booktitle = {39th Annual Conference of the Cognitive Science Society},
address = {Austin, Texas, USA},
abstract = {A model of sentence production is presented, which implements a strategy that produces sentences with more uniform surprisal profiles, as compared to other strategies, and in accordance with the Uniform Information Density Hypothesis (Jaeger, 2006; Levy & Jaeger, 2007). The model operates at the algorithmic level, combining information concerning word probabilities and sentence lengths, and represents a first attempt to model UID as resulting from underlying factors during language production. The sentences produced by this model indeed showed the expected tendency, having more uniform surprisal profiles and lower average word surprisal in comparison to other production strategies.},
pubstate = {published},
type = {inproceedings}
}


Project:   C3

Oualil, Youssef

Sequential estimation techniques and application to multiple speaker tracking and language modeling PhD Thesis

Saarland University, Saarbrücken, Germany, 2017.

For many real-world applications, the considered data is given as a time sequence that becomes available in an orderly fashion, where the order incorporates important information about the entities of interest. The work presented in this thesis deals with two such cases by introducing new sequential estimation solutions. More precisely, we introduce a: I. Sequential Bayesian estimation framework to solve the multiple speaker localization, detection and tracking problem. This framework is a complete pipeline that includes 1) new observation estimators, which extract a fixed number of potential locations per time frame; 2) new unsupervised Bayesian detectors, which classify these estimates into noise/speaker classes and 3) new Bayesian filters, which use the speaker class estimates to track multiple speakers.

This framework was developed to tackle the low overlap detection rate of multiple speakers and to reduce the number of constraints generally imposed in standard solutions. II. Sequential neural estimation framework for language modeling, which overcomes some of the shortcomings of standard approaches through merging of different models in a hybrid architecture. That is, we introduce two solutions that tightly merge particular models and then show how a generalization can be achieved through a new mixture model. In order to speed up the training of large vocabulary language models, we introduce a new extension of the noise contrastive estimation approach to batch training.
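
The noise contrastive estimation objective mentioned in part II turns LM training into classifying the observed word against k sampled noise words, avoiding the full softmax. A self-contained sketch of the per-example loss (all scores invented; the thesis's batch-training extension is not shown):

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def nce_loss(log_p_target, log_q_target, noise_log_p, noise_log_q, k):
    """NCE loss for one observed word and k noise samples: classify the
    observed word as data and each sampled word as noise."""
    loss = -math.log(sigmoid(log_p_target - math.log(k) - log_q_target))
    for lp, lq in zip(noise_log_p, noise_log_q):
        loss -= math.log(1.0 - sigmoid(lp - math.log(k) - lq))
    return loss

# Model log-probabilities and noise log-probabilities (invented numbers).
print(nce_loss(-2.0, -5.0, [-8.0, -7.5], [-3.0, -3.2], k=2))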

@phdthesis{Oualil2017a,
title = {Sequential estimation techniques and application to multiple speaker tracking and language modeling},
author = {Youssef Oualil},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:291-scidok-ds-272280},
doi = {https://doi.org/10.22028/D291-27228},
year = {2017},
date = {2017},
school = {Saarland University},
address = {Saarbr{\"u}cken, Germany},
abstract = {For many real-world applications, the considered data is given as a time sequence that becomes available in an orderly fashion, where the order incorporates important information about the entities of interest. The work presented in this thesis deals with two such cases by introducing new sequential estimation solutions. More precisely, we introduce a: I. Sequential Bayesian estimation framework to solve the multiple speaker localization, detection and tracking problem. This framework is a complete pipeline that includes 1) new observation estimators, which extract a fixed number of potential locations per time frame; 2) new unsupervised Bayesian detectors, which classify these estimates into noise/speaker classes and 3) new Bayesian filters, which use the speaker class estimates to track multiple speakers. This framework was developed to tackle the low overlap detection rate of multiple speakers and to reduce the number of constraints generally imposed in standard solutions. II. Sequential neural estimation framework for language modeling, which overcomes some of the shortcomings of standard approaches through merging of different models in a hybrid architecture. That is, we introduce two solutions that tightly merge particular models and then show how a generalization can be achieved through a new mixture model. In order to speed up the training of large vocabulary language models, we introduce a new extension of the noise contrastive estimation approach to batch training.},
pubstate = {published},
type = {phdthesis}
}


Project:   B4

Oualil, Youssef; Klakow, Dietrich

A neural network approach for mixing language models Inproceedings

ICASSP 2017, 2017.

The performance of Neural Network (NN)-based language models is steadily improving due to the emergence of new architectures, which are able to learn different natural language characteristics. This paper presents a novel framework, which shows that a significant improvement can be achieved by combining different existing heterogeneous models in a single architecture. This is done through 1) a feature layer, which separately learns different NN-based models and 2) a mixture layer, which merges the resulting model features. In doing so, this architecture benefits from the learning capabilities of each model with no noticeable increase in the number of model parameters or the training time. Extensive experiments conducted on the Penn Treebank (PTB) and the Large Text Compression Benchmark (LTCB) corpus showed a significant reduction of the perplexity when compared to state-of-the-art feedforward as well as recurrent neural network architectures.
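
The paper merges models at the level of learned features; as a simplified stand-in for that idea, the sketch below mixes the output distributions of two component LMs with softmax gate weights (all numbers invented, and in the actual architecture the gates and features are trained jointly):

import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def mix(model_probs, gate_logits):
    """Convex combination of several LMs' word distributions."""
    weights = softmax(gate_logits)
    vocab = model_probs[0].keys()
    return {w: sum(wt * p[w] for wt, p in zip(weights, model_probs)) for w in vocab}

# Two toy component models over a two-word vocabulary.
p_ffnn = {"cat": 0.6, "dog": 0.4}
p_rnn = {"cat": 0.3, "dog": 0.7}
print(mix([p_ffnn, p_rnn], gate_logits=[0.2, 1.0]))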

@inproceedings{Oualil2017c,
title = {A neural network approach for mixing language models},
author = {Youssef Oualil and Dietrich Klakow},
url = {https://arxiv.org/abs/1708.06989},
year = {2017},
date = {2017},
booktitle = {ICASSP 2017},
abstract = {The performance of Neural Network (NN)-based language models is steadily improving due to the emergence of new architectures, which are able to learn different natural language characteristics. This paper presents a novel framework, which shows that a significant improvement can be achieved by combining different existing heterogeneous models in a single architecture. This is done through 1) a feature layer, which separately learns different NN-based models and 2) a mixture layer, which merges the resulting model features. In doing so, this architecture benefits from the learning capabilities of each model with no noticeable increase in the number of model parameters or the training time. Extensive experiments conducted on the Penn Treebank (PTB) and the Large Text Compression Benchmark (LTCB) corpus showed a significant reduction of the perplexity when compared to state-of-the-art feedforward as well as recurrent neural network architectures.},
pubstate = {published},
type = {inproceedings}
}


Project:   B4

Singh, Mittul; Oualil, Youssef; Klakow, Dietrich

Approximated and domain-adapted LSTM language models for first-pass decoding in speech recognition Inproceedings

18th Annual Conference of the International Speech Communication Association (INTERSPEECH), Stockholm, Sweden, 2017.

Traditionally, short-range Language Models (LMs) like the conventional n-gram models have been used for language model adaptation. Recent work has improved performance for such tasks using adapted long-span models like Recurrent Neural Network LMs (RNNLMs). With the first pass performed using a large background n-gram LM, the adapted RNNLMs are mostly used to rescore lattices or N-best lists, as a second step in the decoding process. Ideally, these adapted RNNLMs should be applied for first-pass decoding. Thus, we introduce two ways of applying adapted long-short-term-memory (LSTM) based RNNLMs for first-pass decoding. Using available techniques to convert LSTMs to approximated versions for first-pass decoding, we compare approximated LSTMs adapted in a Fast Marginal Adaptation framework (FMA) and an approximated version of architecture-based adaptation of LSTM. On a conversational speech recognition task, these differently approximated and adapted LSTMs combined with a trigram LM outperform other adapted and unadapted LMs. Here, the architecture-adapted LSTM combination obtains a 35.9% word error rate (WER) and is outperformed by the FMA-based LSTM combination, which obtains the overall lowest WER of 34.4%.
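
Fast Marginal Adaptation, the better-performing framework here, rescales a background LM by the ratio of in-domain to background unigram probabilities, so that P_adapted(w | h) is proportional to (P_domain(w) / P_background(w))^beta * P_background(w | h). A toy sketch with invented distributions:

def fma(p_background, bg_unigram, domain_unigram, beta=0.5):
    """Rescale a background LM toward a target domain and renormalize."""
    scaled = {w: (domain_unigram[w] / bg_unigram[w]) ** beta * p
              for w, p in p_background.items()}
    z = sum(scaled.values())
    return {w: p / z for w, p in scaled.items()}

# Background next-word distribution for some history, plus unigram statistics.
p_bg = {"meeting": 0.5, "gig": 0.5}
print(fma(p_bg, bg_unigram={"meeting": 0.4, "gig": 0.6},
          domain_unigram={"meeting": 0.1, "gig": 0.9}))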

@inproceedings{Singh2017,
title = {Approximated and domain-adapted LSTM language models for first-pass decoding in speech recognition},
author = {Mittul Singh and Youssef Oualil and Dietrich Klakow},
url = {https://www.researchgate.net/publication/319185101_Approximated_and_Domain-Adapted_LSTM_Language_Models_for_First-Pass_Decoding_in_Speech_Recognition},
year = {2017},
date = {2017},
booktitle = {18th Annual Conference of the International Speech Communication Association (INTERSPEECH)},
address = {Stockholm, Sweden},
abstract = {Traditionally, short-range Language Models (LMs) like the conventional n-gram models have been used for language model adaptation. Recent work has improved performance for such tasks using adapted long-span models like Recurrent Neural Network LMs (RNNLMs). With the first pass performed using a large background n-gram LM, the adapted RNNLMs are mostly used to rescore lattices or N-best lists, as a second step in the decoding process. Ideally, these adapted RNNLMs should be applied for first-pass decoding. Thus, we introduce two ways of applying adapted long-short-term-memory (LSTM) based RNNLMs for first-pass decoding. Using available techniques to convert LSTMs to approximated versions for first-pass decoding, we compare approximated LSTMs adapted in a Fast Marginal Adaptation framework (FMA) and an approximated version of architecture-based adaptation of LSTM. On a conversational speech recognition task, these differently approximated and adapted LSTMs combined with a trigram LM outperform other adapted and unadapted LMs. Here, the architecture-adapted LSTM combination obtains a 35.9% word error rate (WER) and is outperformed by the FMA-based LSTM combination, which obtains the overall lowest WER of 34.4%.},
pubstate = {published},
type = {inproceedings}
}


Project:   B4

Trost, Thomas; Klakow, Dietrich

Parameter Free Hierarchical Graph-Based Clustering for Analyzing Continuous Word Embeddings Inproceedings

Proceedings of TextGraphs-11: Graph-based Methods for Natural Language Processing (Workshop at ACL 2017), Association for Computational Linguistics, pp. 30-38, Vancouver, Canada, 2017.

Word embeddings are high-dimensional vector representations of words and are thus difficult to interpret. In order to deal with this, we introduce an unsupervised parameter free method for creating a hierarchical graphical clustering of the full ensemble of word vectors and show that this structure is a geometrically meaningful representation of the original relations between the words. This newly obtained representation can be used for better understanding and thus improving the embedding algorithm and exhibits semantic meaning, so it can also be utilized in a variety of language processing tasks like categorization or measuring similarity.
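
As a loose illustration of the graph-based idea (the paper's own parameter-free construction is more involved), a single-linkage hierarchy over word vectors can be read off the edge list sorted by cosine distance, the same order in which Kruskal's algorithm grows a minimum spanning tree:

import math
from itertools import combinations

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

# Toy embeddings; real input would be the trained word vectors.
vecs = {"cat": [1.0, 0.1], "dog": [0.9, 0.2], "paris": [0.1, 1.0]}

# Merging clusters along edges in ascending order of distance yields the
# single-linkage dendrogram ("cat" and "dog" join first here).
for d, a, b in sorted((cosine_distance(vecs[a], vecs[b]), a, b)
                      for a, b in combinations(vecs, 2)):
    print(f"{a} -- {b}: {d:.3f}")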

@inproceedings{TroKla2017,
title = {Parameter Free Hierarchical Graph-Based Clustering for Analyzing Continuous Word Embeddings},
author = {Thomas Trost and Dietrich Klakow},
url = {https://aclanthology.org/W17-2404},
doi = {https://doi.org/10.18653/v1/W17-2404},
year = {2017},
date = {2017},
booktitle = {Proceedings of TextGraphs-11: Graph-based Methods for Natural Language Processing (Workshop at ACL 2017)},
pages = {30-38},
publisher = {Association for Computational Linguistics},
address = {Vancouver, Canada},
abstract = {Word embeddings are high-dimensional vector representations of words and are thus difficult to interpret. In order to deal with this, we introduce an unsupervised parameter free method for creating a hierarchical graphical clustering of the full ensemble of word vectors and show that this structure is a geometrically meaningful representation of the original relations between the words. This newly obtained representation can be used for better understanding and thus improving the embedding algorithm and exhibits semantic meaning, so it can also be utilized in a variety of language processing tasks like categorization or measuring similarity.},
pubstate = {published},
type = {inproceedings}
}


Project:   B4

Lemke, Tyll Robin; Horch, Eva; Reich, Ingo

Optimal encoding! - Information Theory constrains article omission in newspaper headlines Inproceedings

Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, Association for Computational Linguistics, pp. 131-135, Valencia, Spain, 2017.

In this paper we pursue the hypothesis that the distribution of article omission specifically is constrained by principles of Information Theory (Shannon 1948). In particular, Information Theory predicts a stronger preference for article omission before nouns which are relatively predictable in the context of the preceding words. We investigated article omission in German newspaper headlines with a corpus and an acceptability rating study. Both support our hypothesis: Articles are inserted more often before unpredictable nouns, and subjects perceive article omission before predictable nouns as more well-formed than before unpredictable ones. This suggests that information-theoretic principles constrain the distribution of article omission in headlines.
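
The predictability measure at work here can be sketched as noun surprisal estimated from corpus counts; in a toy version (all counts and omission annotations invented), one can then ask whether omission lines up with low-surprisal nouns:

import math
from collections import Counter

# Invented (preceding word, noun, article omitted?) observations from headlines.
data = [("fasst", "Täter", True), ("fasst", "Täter", True),
        ("fasst", "Dieb", False), ("plant", "Ausstieg", True),
        ("plant", "Ausstieg", True), ("plant", "Reform", False)]

bigrams = Counter((c, n) for c, n, _ in data)
contexts = Counter(c for c, _, _ in data)

def noun_surprisal(context, noun):
    """-log2 p(noun | context), estimated from raw bigram counts."""
    return -math.log2(bigrams[(context, noun)] / contexts[context])

# Expected pattern under the hypothesis: omission before predictable nouns.
for c, n, omitted in data:
    print(f"{c} {n}: {noun_surprisal(c, n):.2f} bits, omitted={omitted}")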

@inproceedings{LemkeHorchReich:17,
title = {Optimal encoding! - Information Theory constrains article omission in newspaper headlines},
author = {Tyll Robin Lemke and Eva Horch and Ingo Reich},
url = {https://www.aclweb.org/anthology/E17-2021},
year = {2017},
date = {2017},
booktitle = {Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
pages = {131-135},
publisher = {Association for Computational Linguistics},
address = {Valencia, Spain},
abstract = {In this paper we pursue the hypothesis that the distribution of article omission specifically is constrained by principles of Information Theory (Shannon 1948). In particular, Information Theory predicts a stronger preference for article omission before nouns which are relatively predictable in the context of the preceding words. We investigated article omission in German newspaper headlines with a corpus and an acceptability rating study. Both support our hypothesis: Articles are inserted more often before unpredictable nouns, and subjects perceive article omission before predictable nouns as more well-formed than before unpredictable ones. This suggests that information-theoretic principles constrain the distribution of article omission in headlines.},
pubstate = {published},
type = {inproceedings}
}


Project:   B3

Lemke, Tyll Robin

Sentential or not? - An experimental investigation on the syntax of fragments Inproceedings

Proceedings of Linguistic Evidence 2016, Tübingen, 2017.

This paper presents four experiments on the syntactic structure of fragments, i.e. nonsentential utterances with propositional meaning and illocutionary force (Morgan, 1973). The experiments evaluate the predictions of two competing theories of fragments: Merchant's (2004) movement and deletion account and Barton & Progovac's (2005) nonsentential account. Experiment 1 provides evidence for case connectivity effects; this suggests that there is indeed unarticulated linguistic structure in fragments (contrary to what Barton & Progovac 2005 argue). Experiments 2-4 address a central prediction of the movement and deletion account: only those constituents which may appear in the left periphery are possible fragments. Merchant et al. (2013) present two studies on preposition stranding and complement clause topicalization in favor of this. My experiments 2-4 replicate and extend these studies in German and English. Taken together, the acceptability pattern predicted by Merchant (2004) holds only for the preposition stranding data (exp. 2), but not for complement clauses (exp. 3) or German multiple prefield constituents (exp. 4).

@inproceedings{Lemke-toappear,
title = {Sentential or not? - An experimental investigation on the syntax of fragments},
author = {Tyll Robin Lemke},
url = {https://publikationen.uni-tuebingen.de/xmlui/handle/10900/77657},
doi = {https://doi.org/10.15496/publikation-19058},
year = {2017},
date = {2017},
booktitle = {Proceedings of Linguistic Evidence 2016},
address = {T{\"u}bingen},
abstract = {

This paper presents four experiments on the syntactic structure of fragments, i.e. nonsentential utterances with propositional meaning and illocutionary force (Morgan, 1973). The experiments evaluate the predictions of two competing theories of fragments: Merchant's (2004) movement and deletion account and Barton & Progovac's (2005) nonsentential account. Experiment 1 provides evidence for case connectivity effects; this suggests that there is indeed unarticulated linguistic structure in fragments (contrary to what Barton & Progovac 2005 argue). Experiments 2-4 address a central prediction of the movement and deletion account: only those constituents which may appear in the left periphery are possible fragments. Merchant et al. (2013) present two studies on preposition stranding and complement clause topicalization in favor of this. My experiments 2-4 replicate and extend these studies in German and English. Taken together, the acceptability pattern predicted by Merchant (2004) holds only for the preposition stranding data (exp. 2), but not for complement clauses (exp. 3) or German multiple prefield constituents (exp. 4).
},
pubstate = {published},
type = {inproceedings}
}


Project:   B3

Reich, Ingo

On the omission of articles and copulae in German newspaper headlines Journal Article

Linguistic Variation, 17, pp. 186-204, 2017.

This paper argues, based on a corpus-linguistic study, that both omitted articles and copulae in German headlines are to be treated as null elements NA and NC. Both items need to be licensed by a specific (parsing) strategy known as discourse orientation (Huang, 1984), which is also applicable in the special register of headlines. It is shown that distinguishing between discourse and sentence orientation and correlating these two strategies with λ-binding and existential quantification, respectively, naturally accounts for an asymmetry in article omission observed in Stowell (1991).

@article{Reich-inpress,
title = {On the omission of articles and copulae in German newspaper headlines},
author = {Ingo Reich},
url = {https://benjamins.com/catalog/lv.14017.rei},
doi = {https://doi.org/10.1075/lv.14017.rei},
year = {2017},
date = {2017},
journal = {Linguistic Variation},
pages = {186-204},
volume = {17},
number = {2},
abstract = {

This paper argues, based on a corpus-linguistic study, that both omitted articles and copulae in German headlines are to be treated as null elements NA and NC. Both items need to be licensed by a specific (parsing) strategy known as discourse orientation (Huang, 1984), which is also applicable in the special register of headlines. It is shown that distinguishing between discourse and sentence orientation and correlating these two strategies with λ-binding and existential quantification, respectively, naturally accounts for an asymmetry in article omission observed in Stowell (1991).

},
pubstate = {published},
type = {article}
}


Project:   B3

Hoek, Jet; Scholman, Merel

Evaluating discourse annotation: Some recent insights and new approaches Inproceedings

Proceedings of the 13th Joint ISO-ACL Workshop on Interoperable Semantic Annotation (ISA-13), 2017.

Annotated data is an important resource for the linguistics community, which is why researchers need to be sure that such data are reliable. However, arriving at sufficiently reliable annotations appears to be an issue within the field of discourse, possibly due to the fact that coherence is a mental phenomenon rather than a textual one. In this paper, we discuss recent insights and developments regarding annotation and reliability evaluation that are relevant to the field of discourse. We focus on characteristics of coherence that impact reliability scores and look at how different measures are affected by this. We discuss benefits and disadvantages of these measures, and propose that discourse annotation results be accompanied by a detailed report of the annotation process and data, as well as a careful consideration of the reliability measure that is applied.
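
Cohen's kappa is one of the chance-corrected agreement measures whose behavior on discourse data is at issue here. A compact reference implementation over invented annotations:

from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    expected = sum(ca[l] * cb[l] for l in ca.keys() | cb.keys()) / (n * n)
    return (observed - expected) / (1 - expected)

# Two coders labeling ten discourse relations (toy data).
a = ["cause", "cause", "contrast", "cause", "list",
     "cause", "contrast", "cause", "list", "cause"]
b = ["cause", "contrast", "contrast", "cause", "list",
     "cause", "cause", "cause", "list", "list"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.50 here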

@inproceedings{hoek2017evaluating,
title = {Evaluating discourse annotation: Some recent insights and new approaches},
author = {Jet Hoek and Merel Scholman},
url = {https://aclanthology.org/W17-7401},
year = {2017},
date = {2017},
booktitle = {Proceedings of the 13th Joint ISO-ACL Workshop on Interoperable Semantic Annotation (ISA-13)},
abstract = {Annotated data is an important resource for the linguistics community, which is why researchers need to be sure that such data are reliable. However, arriving at sufficiently reliable annotations appears to be an issue within the field of discourse, possibly due to the fact that coherence is a mental phenomenon rather than a textual one. In this paper, we discuss recent insights and developments regarding annotation and reliability evaluation that are relevant to the field of discourse. We focus on characteristics of coherence that impact reliability scores and look at how different measures are affected by this. We discuss benefits and disadvantages of these measures, and propose that discourse annotation results be accompanied by a detailed report of the annotation process and data, as well as a careful consideration of the reliability measure that is applied.},
pubstate = {published},
type = {inproceedings}
}


Project:   B2

Shi, Wei; Yung, Frances Pik Yu; Rubino, Raphael; Demberg, Vera

Using explicit discourse connectives in translation for implicit discourse relation classification Inproceedings

Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Asian Federation of Natural Language Processing, pp. 484-495, Taipei, Taiwan, 2017.

Implicit discourse relation recognition is an extremely challenging task due to the lack of indicative connectives. Various neural network architectures have been proposed for this task recently, but most of them suffer from the shortage of labeled data. In this paper, we address this problem by procuring additional training data from parallel corpora: When humans translate a text, they sometimes add connectives (a process known as explicitation). We automatically back-translate such an added connective into an English connective and use it to infer a label with high confidence. We show that a training set several times larger than the original training set can be generated this way. With the extra labeled instances, we show that even a simple bidirectional Long Short-Term Memory Network can outperform the current state-of-the-art.
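
The distant-labeling step can be sketched as a lookup with a confidence threshold: once the connective added in translation has been back-translated into English, it votes for a relation label, and ambiguous connectives are discarded. The sense inventory and counts below are invented stand-ins for statistics from a labeled corpus:

from collections import Counter

# Invented counts of which senses each explicit English connective signals.
sense_counts = {
    "because": Counter({"Contingency.Cause": 95, "Temporal": 5}),
    "meanwhile": Counter({"Temporal": 55, "Expansion": 45}),
}

def infer_label(connective, threshold=0.9):
    """Accept an instance only if the back-translated connective signals a
    single sense with high confidence; otherwise discard it."""
    counts = sense_counts[connective]
    sense, c = counts.most_common(1)[0]
    return sense if c / sum(counts.values()) >= threshold else None

print(infer_label("because"))    # 'Contingency.Cause' (confident)
print(infer_label("meanwhile"))  # None (too ambiguous, discarded)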

@inproceedings{Shi2017b,
title = {Using explicit discourse connectives in translation for implicit discourse relation classification},
author = {Wei Shi and Frances Pik Yu Yung and Raphael Rubino and Vera Demberg},
url = {https://aclanthology.org/I17-1049},
year = {2017},
date = {2017},
booktitle = {Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
pages = {484-495},
publisher = {Asian Federation of Natural Language Processing},
address = {Taipei, Taiwan},
abstract = {Implicit discourse relation recognition is an extremely challenging task due to the lack of indicative connectives. Various neural network architectures have been proposed for this task recently, but most of them suffer from the shortage of labeled data. In this paper, we address this problem by procuring additional training data from parallel corpora: When humans translate a text, they sometimes add connectives (a process known as explicitation). We automatically back-translate such an added connective into an English connective and use it to infer a label with high confidence. We show that a training set several times larger than the original training set can be generated this way. With the extra labeled instances, we show that even a simple bidirectional Long Short-Term Memory Network can outperform the current state-of-the-art.},
pubstate = {published},
type = {inproceedings}
}


Project:   B2
