Publications

Talamo, Luigi

Tweaking UD Annotations to Investigate the Placement of Determiners, Quantifiers and Numerals in the Noun Phrase Inproceedings

Vylomova, Ekaterina; Ponti, Edoardo; Cotterell, Ryan (Ed.): Proceedings of the 4th Workshop on Research in Computational Linguistic Typology and Multilingual NLP, Association for Computational Linguistics, pp. 36-41, Seattle, Washington, 2022.

We describe a methodology to extract with finer accuracy word order patterns from texts automatically annotated with Universal Dependency (UD) trained parsers. We use the methodology to quantify the word order entropy of determiners, quantifiers and numerals in ten Indo-European languages, using UD-parsed texts from a parallel corpus of prosaic texts. Our results suggest that the combinations of different UD annotation layers, such as UD Relations, Universal Parts of Speech and lemma, and the introduction of language-specific lists of closed-category lemmata have the two-fold effect of improving the quality of analysis and unveiling hidden areas of variability in word order patterns.
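The word order entropy quantified in this paper can be illustrated with a minimal sketch (the helper function and data below are hypothetical, not the paper's code): given the observed orders of a dependent relative to its head noun, the Shannon entropy of that distribution measures how variable the placement is.

```python
import math
from collections import Counter

def order_entropy(orders):
    """Shannon entropy (in bits) of a distribution of word order
    patterns, e.g. 'pre' vs. 'post' placement of a determiner
    relative to its head noun."""
    counts = Counter(orders)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A language with rigid pre-nominal determiners has entropy 0;
# free variation between the two orders approaches 1 bit.
rigid = order_entropy(["pre"] * 100)
variable = order_entropy(["pre"] * 50 + ["post"] * 50)
print(rigid, variable)  # → 0.0 1.0
```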

@inproceedings{Talamo_2022,
title = {Tweaking UD Annotations to Investigate the Placement of Determiners, Quantifiers and Numerals in the Noun Phrase},
author = {Luigi Talamo},
editor = {Ekaterina Vylomova and Edoardo Ponti and Ryan Cotterell},
url = {https://aclanthology.org/2022.sigtyp-1.5/},
doi = {10.18653/v1/2022.sigtyp-1.5},
year = {2022},
date = {2022},
booktitle = {Proceedings of the 4th Workshop on Research in Computational Linguistic Typology and Multilingual NLP},
pages = {36-41},
publisher = {Association for Computational Linguistics},
address = {Seattle, Washington},
abstract = {We describe a methodology to extract with finer accuracy word order patterns from texts automatically annotated with Universal Dependency (UD) trained parsers. We use the methodology to quantify the word order entropy of determiners, quantifiers and numerals in ten Indo-European languages, using UD-parsed texts from a parallel corpus of prosaic texts. Our results suggest that the combinations of different UD annotation layers, such as UD Relations, Universal Parts of Speech and lemma, and the introduction of language-specific lists of closed-category lemmata has the two-fold effect of improving the quality of analysis and unveiling hidden areas of variability in word order patterns.},
pubstate = {published},
type = {inproceedings}
}


Project:   C7

Alabi, Jesujoba; Adelani, David; Mosbach, Marius; Klakow, Dietrich

Adapting Pre-trained Language Models to African Languages via Multilingual Adaptive Fine-Tuning Inproceedings

Proceedings of the 29th International Conference on Computational Linguistics, International Committee on Computational Linguistics, pp. 4336-4349, Gyeongju, Republic of Korea, 2022.

Multilingual pre-trained language models (PLMs) have demonstrated impressive performance on several downstream tasks for both high-resourced and low-resourced languages. However, there is still a large performance drop for languages unseen during pre-training, especially African languages. One of the most effective approaches to adapt to a new language is language adaptive fine-tuning (LAFT): fine-tuning a multilingual PLM on monolingual texts of a language using the pre-training objective. However, adapting to each target language individually takes large disk space and limits the cross-lingual transfer abilities of the resulting models because they have been specialized for a single language. In this paper, we perform multilingual adaptive fine-tuning (MAFT) on the 17 most-resourced African languages and three other high-resource languages widely spoken on the African continent to encourage cross-lingual transfer learning. To further specialize the multilingual PLM, we removed vocabulary tokens from the embedding layer that correspond to non-African writing scripts before MAFT, thus reducing the model size by around 50%. Our evaluation on two multilingual PLMs (AfriBERTa and XLM-R) and three NLP tasks (NER, news topic classification, and sentiment classification) shows that our approach is competitive to applying LAFT on individual languages while requiring significantly less disk space. Additionally, we show that our adapted PLM also improves the zero-shot cross-lingual transfer abilities of parameter efficient fine-tuning methods.
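The vocabulary-trimming idea can be sketched in a few lines (a toy illustration under assumed data structures, not the authors' implementation): dropping embedding entries for tokens that never occur in the target-language corpora shrinks the embedding matrix, which accounts for much of a multilingual model's size.

```python
def trim_vocabulary(embeddings, corpus_tokens):
    """Keep only embedding entries for tokens observed in the target
    corpora; unused tokens (e.g. from non-African scripts) are dropped,
    shrinking the embedding layer."""
    used = set(corpus_tokens)
    return {tok: vec for tok, vec in embeddings.items() if tok in used}

emb = {"abc": [0.1], "xyz": [0.2], "漢": [0.3]}
small = trim_vocabulary(emb, ["abc", "xyz"])
print(len(small))  # → 2
```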

@inproceedings{alabi-etal-2022-adapting,
title = {Adapting Pre-trained Language Models to African Languages via Multilingual Adaptive Fine-Tuning},
author = {Jesujoba Alabi and David Adelani and Marius Mosbach and Dietrich Klakow},
url = {https://aclanthology.org/2022.coling-1.382},
year = {2022},
date = {2022},
booktitle = {Proceedings of the 29th International Conference on Computational Linguistics},
pages = {4336-4349},
publisher = {International Committee on Computational Linguistics},
address = {Gyeongju, Republic of Korea},
abstract = {Multilingual pre-trained language models (PLMs) have demonstrated impressive performance on several downstream tasks for both high-resourced and low-resourced languages. However, there is still a large performance drop for languages unseen during pre-training, especially African languages. One of the most effective approaches to adapt to a new language is language adaptive fine-tuning (LAFT) {---} fine-tuning a multilingual PLM on monolingual texts of a language using the pre-training objective. However, adapting to target language individually takes large disk space and limits the cross-lingual transfer abilities of the resulting models because they have been specialized for a single language. In this paper, we perform multilingual adaptive fine-tuning on 17 most-resourced African languages and three other high-resource languages widely spoken on the African continent to encourage cross-lingual transfer learning. To further specialize the multilingual PLM, we removed vocabulary tokens from the embedding layer that corresponds to non-African writing scripts before MAFT, thus reducing the model size by around 50{\%}. Our evaluation on two multilingual PLMs (AfriBERTa and XLM-R) and three NLP tasks (NER, news topic classification, and sentiment classification) shows that our approach is competitive to applying LAFT on individual languages while requiring significantly less disk space. Additionally, we show that our adapted PLM also improves the zero-shot cross-lingual transfer abilities of parameter efficient fine-tuning methods.},
pubstate = {published},
type = {inproceedings}
}


Project:   B4

Lapshinova-Koltunski, Ekaterina; Pollkläsener, Christina; Przybyl, Heike

Exploring Explicitation and Implicitation in Parallel Interpreting and Translation Corpora Journal Article

The Prague Bulletin of Mathematical Linguistics, 119, pp. 5-22, 2022, ISSN 0032-6585.

We present a study of discourse connectives in English-German and German-English translation and interpreting where we focus on the phenomena of explicitation and implicitation. Apart from distributional analysis of translation patterns in parallel data, we also look into surprisal, i.e. an information-theoretic measure of cognitive effort, which helps us to interpret the observed tendencies.
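Surprisal, the information-theoretic measure referred to in this and several other abstracts on this page, is the negative log probability of an event; a minimal sketch (with illustrative probabilities, not the paper's data):

```python
import math

def surprisal(probability):
    """Information-theoretic surprisal in bits: -log2(p).
    Low-probability (unexpected) events carry high surprisal,
    i.e. more information and more cognitive effort."""
    return -math.log2(probability)

# A connective that is highly predictable in context carries
# little information; an unexpected one carries more.
print(surprisal(0.5))    # → 1.0
print(surprisal(0.125))  # → 3.0
```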

@article{lapshinova-koltunski-pollklaesener-przybyl:2022,
title = {Exploring Explicitation and Implicitation in Parallel Interpreting and Translation Corpora},
author = {Ekaterina Lapshinova-Koltunski and Christina Pollkl{\"a}sener and Heike Przybyl},
url = {https://ufal.mff.cuni.cz/pbml/119/art-lapshinova-koltunski-pollklaesener-przybyl.pdf},
doi = {10.14712/00326585.020},
year = {2022},
date = {2022},
journal = {The Prague Bulletin of Mathematical Linguistics},
pages = {5-22},
volume = {119},
abstract = {We present a study of discourse connectives in English-German and German-English translation and interpreting where we focus on the phenomena of explicitation and implicitation. Apart from distributional analysis of translation patterns in parallel data, we also look into surprisal, i.e. an information-theoretic measure of cognitive effort, which helps us to interpret the observed tendencies.},
pubstate = {published},
type = {article}
}


Project:   B7

Kudera, Jacek; Stenger, Irina; Möbius, Bernd; Avgustinova, Tania; Klakow, Dietrich

Phonetic cues in auditory identification of Bulgarian, Czech, Polish, and Russian language of origin Journal Article

Language and Speech, 2022.

This work presents the results of an auditory language of origin identification experiment. Disyllabic and trisyllabic logatomes were recorded by speakers of Bulgarian, Czech, Polish, and Russian, and presented to L1 speakers of the abovementioned Slavic languages. The goals of the test were to verify the ability of lay listeners to recognize the linguistic origin of speakers, based on spoken samples with limited segmental and suprasegmental information, and to correlate the signal features with the subjects’ performance. It was found that position of word stress is not an important predictor in language recognition. However, inherent vowel characteristics such as duration and vowel space computed by means of Pillai scores correlate with subjects’ performance. Both the linguistic profile and the familiarity with closely related languages also appear to be relevant predictors of listeners’ performance. Finally, the information-theoretic notion of surprisal applied on regular cross-linguistic sound correspondences was correlated with recognition scores; though, the correlations did not reach the threshold of statistical significance. We conclude that auditory identification of linguistic origin by lay persons, native speakers of closely related languages, is possible even when exposed to limited segmental information, which can serve as a cue in the identification of linguistic origin.

@article{kudera_etal2022_cues,
title = {Phonetic cues in auditory identification of Bulgarian, Czech, Polish, and Russian language of origin},
author = {Jacek Kudera and Irina Stenger and Bernd M{\"o}bius and Tania Avgustinova and Dietrich Klakow},
url = {https://journals.sagepub.com/eprint/JJIKHP9RPEYZM2EQKFWZ/full},
doi = {10.1177/00238309221119098},
year = {2022},
date = {2022-09-01},
journal = {Language and Speech},
abstract = {This work presents the results of an auditory language of origin identification experiment. Disyllabic and trisyllabic logatomes were recorded by speakers of Bulgarian, Czech, Polish, and Russian, and presented to L1 speakers of the abovementioned Slavic languages. The goals of the test were to verify the ability of lay listeners to recognize the linguistic origin of speakers, based on spoken samples with limited segmental and suprasegmental information, and to correlate the signal features with the subjects’ performance. It was found that position of word stress is not an important predictor in language recognition. However, inherent vowel characteristics such as duration and vowel space computed by the means of Pillai scores correlate with subjects’ performance. Both the linguistic profile and the familiarity with closely related languages also appear to be relevant predictors of listeners’ performance. Finally, the information-theoretic notion of surprisal applied on regular cross-linguistic sound correspondences was correlated with recognition scores; though, the correlations did not reach the threshold of statistical significance. We conclude that auditory identification of linguistic origin by lay persons, native speakers of closely related languages, is possible even when exposed to limited segmental information, which can serve as a cue in the identification of linguistic origin.},
pubstate = {published},
type = {article}
}


Project:   C4

Zhang, Miaoran; Mosbach, Marius; Adelani, David; Hedderich, Michael; Klakow, Dietrich

MCSE: Multimodal Contrastive Learning of Sentence Embeddings Inproceedings

Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics, pp. 5959-5969, Seattle, United States, 2022.

Learning semantically meaningful sentence embeddings is an open problem in natural language processing. In this work, we propose a sentence embedding learning approach that exploits both visual and textual information via a multimodal contrastive objective. Through experiments on a variety of semantic textual similarity tasks, we demonstrate that our approach consistently improves the performance across various datasets and pre-trained encoders. In particular, combining a small amount of multimodal data with a large text-only corpus, we improve the state-of-the-art average Spearman's correlation by 1.7%. By analyzing the properties of the textual embedding space, we show that our model excels in aligning semantically similar sentences, providing an explanation for its improved performance.

@inproceedings{zhang-etal-2022-mcse,
title = {MCSE: Multimodal Contrastive Learning of Sentence Embeddings},
author = {Miaoran Zhang and Marius Mosbach and David Adelani and Michael Hedderich and Dietrich Klakow},
url = {https://aclanthology.org/2022.naacl-main.436},
doi = {10.18653/v1/2022.naacl-main.436},
year = {2022},
date = {2022},
booktitle = {Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
pages = {5959-5969},
publisher = {Association for Computational Linguistics},
address = {Seattle, United States},
abstract = {Learning semantically meaningful sentence embeddings is an open problem in natural language processing. In this work, we propose a sentence embedding learning approach that exploits both visual and textual information via a multimodal contrastive objective. Through experiments on a variety of semantic textual similarity tasks, we demonstrate that our approach consistently improves the performance across various datasets and pre-trained encoders. In particular, combining a small amount of multimodal data with a large text-only corpus, we improve the state-of-the-art average Spearman{'}s correlation by 1.7{\%}. By analyzing the properties of the textual embedding space, we show that our model excels in aligning semantically similar sentences, providing an explanation for its improved performance.},
pubstate = {published},
type = {inproceedings}
}


Project:   B4

Abdullah, Badr M.; Klakow, Dietrich

Analyzing the Representational Geometry of Acoustic Word Embeddings Inproceedings

Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, Association for Computational Linguistics, pp. 178-191, Abu Dhabi, United Arab Emirates (Hybrid), 2022.

Acoustic word embeddings (AWEs) are fixed-dimensionality vector representations in a vector space such that different acoustic exemplars of the same word are projected nearby in the embedding space. In addition to their use in speech technology applications such as spoken term discovery and keyword spotting, AWE models have been adopted as models of spoken-word processing in several cognitively motivated studies and they have been shown to exhibit human-like performance in some auditory processing tasks. Nevertheless, the representation geometry of AWEs remains an under-explored topic that has not been studied in the literature. In this paper, we take a closer analytical look at AWEs and study how the choice of the learning objective and the architecture shapes their representational profile. Our main findings highlight the prominent role of the learning objective on the representational geometry over the architecture.

@inproceedings{abdullah-klakow-2022-analyzing,
title = {Analyzing the Representational Geometry of Acoustic Word Embeddings},
author = {Badr M. Abdullah and Dietrich Klakow},
url = {https://aclanthology.org/2022.blackboxnlp-1.15},
doi = {10.18653/v1/2022.blackboxnlp-1.15},
year = {2022},
date = {2022},
booktitle = {Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP},
pages = {178-191},
publisher = {Association for Computational Linguistics},
address = {Abu Dhabi, United Arab Emirates (Hybrid)},
abstract = {Acoustic word embeddings (AWEs) are fixed-dimensionality vector representations in a vector space such that different acoustic exemplars of the same word are projected nearby in the embedding space. In addition to their use in speech technology applications such as spoken term discovery and keyword spotting, AWE models have been adopted as models of spoken-word processing in several cognitively motivated studies and they have shown to exhibit a human-like performance in some auditory processing tasks. Nevertheless, the representation geometry of AWEs remains an under-explored topic that has not been studied in the literature. In this paper, we take a closer analytical look at AWEs and study how the choice of the learning objective and the architecture shapes their representational profile. Our main findings highlight the prominent role of the learning objective on the representational geometry over the architecture.},
pubstate = {published},
type = {inproceedings}
}


Project:   C4

Ibrahim, Omnia; Yuen, Ivan; van Os, Marjolein; Andreeva, Bistra; Möbius, Bernd

The combined effects of contextual predictability and noise on the acoustic realisation of German syllables Journal Article

The Journal of the Acoustical Society of America, 152(2), 2022.

Speakers tend to speak clearly in noisy environments, while they tend to reserve effort by shortening word duration in predictable contexts. It is unclear how these two communicative demands are met. The current study investigates the acoustic realizations of syllables in predictable vs unpredictable contexts across different background noise levels. Thirty-eight German native speakers produced 60 CV syllables in two predictability contexts in three noise conditions (reference = quiet, 0 dB and −10 dB signal-to-noise ratio). Duration, intensity (average and range), F0 (median), and vowel formants of the target syllables were analysed. The presence of noise yielded significantly longer duration, higher average intensity, larger intensity range, and higher F0. Noise levels affected intensity (average and range) and F0. Low predictability syllables exhibited longer duration and larger intensity range. However, no interaction was found between noise and predictability. This suggests that noise-related modifications might be independent of predictability-related changes, with implications for including channel-based and message-based formulations in speech production.

@article{ibrahim_etal_jasa2022,
title = {The combined effects of contextual predictability and noise on the acoustic realisation of German syllables},
author = {Omnia Ibrahim and Ivan Yuen and Marjolein van Os and Bistra Andreeva and Bernd M{\"o}bius},
url = {https://asa.scitation.org/doi/10.1121/10.0013413},
doi = {10.1121/10.0013413},
year = {2022},
date = {2022-08-10},
journal = {The Journal of the Acoustical Society of America},
volume = {152},
number = {2},
abstract = {Speakers tend to speak clearly in noisy environments, while they tend to reserve effort by shortening word duration in predictable contexts. It is unclear how these two communicative demands are met. The current study investigates the acoustic realizations of syllables in predictable vs unpredictable contexts across different background noise levels. Thirty-eight German native speakers produced 60 CV syllables in two predictability contexts in three noise conditions (reference = quiet, 0 dB and −10 dB signal-to-noise ratio). Duration, intensity (average and range), F0 (median), and vowel formants of the target syllables were analysed. The presence of noise yielded significantly longer duration, higher average intensity, larger intensity range, and higher F0. Noise levels affected intensity (average and range) and F0. Low predictability syllables exhibited longer duration and larger intensity range. However, no interaction was found between noise and predictability. This suggests that noise-related modifications might be independent of predictability-related changes, with implications for including channel-based and message-based formulations in speech production.},
pubstate = {published},
type = {article}
}


Projects:   C1 A4

Bhandari, Pratik; Demberg, Vera; Kray, Jutta

Predictability effects in degraded speech comprehension are reduced as a function of attention Journal Article

Language and Cognition, Cambridge University Press, pp. 1-18, 2022.

The aim of this study was to examine the role of attention in understanding linguistic information even in a noisy environment. To assess the role of attention, we varied task instructions in two experiments in which participants were instructed to listen to short sentences and thereafter to type in the last word they heard or to type in the whole sentence. We were interested in how these task instructions influence the interplay between top-down prediction and bottom-up perceptual processes during language comprehension. Therefore, we created sentences that varied in the degree of predictability (low, medium, and high) as well as in the degree of speech degradation (four, six, and eight noise-vocoding channels). Results indicated better word recognition for highly predictable sentences for moderate, though not for high, levels of speech degradation, but only when attention was directed to the whole sentence. This underlines the important role of attention in language comprehension.

@article{bhandari_demberg_kray_2022,
title = {Predictability effects in degraded speech comprehension are reduced as a function of attention},
author = {Pratik Bhandari and Vera Demberg and Jutta Kray},
url = {https://www.cambridge.org/core/journals/language-and-cognition/article/abs/predictability-effects-in-degraded-speech-comprehension-are-reduced-as-a-function-of-attention/98F4E3A4A3FC0B7E00C8E1536D986853},
doi = {10.1017/langcog.2022.16},
year = {2022},
date = {2022-07-22},
journal = {Language and Cognition},
pages = {1-18},
publisher = {Cambridge University Press},
abstract = {The aim of this study was to examine the role of attention in understanding linguistic information even in a noisy environment. To assess the role of attention, we varied task instructions in two experiments in which participants were instructed to listen to short sentences and thereafter to type in the last word they heard or to type in the whole sentence. We were interested in how these task instructions influence the interplay between top-down prediction and bottom-up perceptual processes during language comprehension. Therefore, we created sentences that varied in the degree of predictability (low, medium, and high) as well as in the degree of speech degradation (four, six, and eight noise-vocoding channels). Results indicated better word recognition for highly predictable sentences for moderate, though not for high, levels of speech degradation, but only when attention was directed to the whole sentence. This underlines the important role of attention in language comprehension.},
pubstate = {published},
type = {article}
}


Project:   A4

Przybyl, Heike; Lapshinova-Koltunski, Ekaterina; Menzel, Katrin; Fischer, Stefan; Teich, Elke

EPIC UdS - Creation and applications of a simultaneous interpreting corpus Inproceedings

Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022), pp. 1193-1200, Marseille, France, 2022.

In this paper, we describe the creation and annotation of EPIC UdS, a multilingual corpus of simultaneous interpreting for English, German and Spanish. We give an overview of the comparable and parallel, aligned corpus variants and explore various applications of the corpus. What makes EPIC UdS relevant is that it is one of the rare interpreting corpora that includes transcripts suitable for research on more than one language pair and on interpreting with regard to German. It not only contains transcribed speeches, but also rich metadata and fine-grained linguistic annotations tailored for diverse applications across a broad range of linguistic subfields.

@inproceedings{Przybyl_interpreting_2022,
title = {EPIC UdS - Creation and applications of a simultaneous interpreting corpus},
author = {Heike Przybyl and Ekaterina Lapshinova-Koltunski and Katrin Menzel and Stefan Fischer and Elke Teich},
url = {https://aclanthology.org/2022.lrec-1.127/},
year = {2022},
date = {2022},
booktitle = {Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022)},
pages = {1193-1200},
address = {Marseille, France},
abstract = {In this paper, we describe the creation and annotation of EPIC UdS, a multilingual corpus of simultaneous interpreting for English, German and Spanish. We give an overview of the comparable and parallel, aligned corpus variants and explore various applications of the corpus. What makes EPIC UdS relevant is that it is one of the rare interpreting corpora that includes transcripts suitable for research on more than one language pair and on interpreting with regard to German. It not only contains transcribed speeches, but also rich metadata and fine-grained linguistic annotations tailored for diverse applications across a broad range of linguistic subfields.},
pubstate = {published},
type = {inproceedings}
}


Project:   B7

Stenger, Irina; Georgis, Philip; Avgustinova, Tania; Möbius, Bernd; Klakow, Dietrich

Modeling the Impact of Syntactic Distance and Surprisal on Cross-Slavic Text Comprehension Inproceedings

Proceedings of the Language Resources and Evaluation Conference, European Language Resources Association, pp. 7368-7376, Marseille, France, 2022.

We focus on the syntactic variation and measure syntactic distances between nine Slavic languages (Belarusian, Bulgarian, Croatian, Czech, Polish, Slovak, Slovene, Russian, and Ukrainian) using symmetric measures of insertion, deletion and movement of syntactic units in the parallel sentences of the fable "The North Wind and the Sun". Additionally, we investigate phonetic and orthographic asymmetries between selected languages by means of the information theoretical notion of surprisal. Syntactic distance and surprisal are, thus, considered as potential predictors of mutual intelligibility between related languages. In spoken and written cloze test experiments for Slavic native speakers, the presented predictors will be validated as to whether variations in syntax lead to a slower or impeded intercomprehension of Slavic texts.

@inproceedings{stenger-EtAl:2022:LREC,
title = {Modeling the Impact of Syntactic Distance and Surprisal on Cross-Slavic Text Comprehension},
author = {Irina Stenger and Philip Georgis and Tania Avgustinova and Bernd M{\"o}bius and Dietrich Klakow},
url = {https://aclanthology.org/2022.lrec-1.802},
year = {2022},
date = {2022-06-21},
booktitle = {Proceedings of the Language Resources and Evaluation Conference},
pages = {7368-7376},
publisher = {European Language Resources Association},
address = {Marseille, France},
abstract = {We focus on the syntactic variation and measure syntactic distances between nine Slavic languages (Belarusian, Bulgarian, Croatian, Czech, Polish, Slovak, Slovene, Russian, and Ukrainian) using symmetric measures of insertion, deletion and movement of syntactic units in the parallel sentences of the fable "The North Wind and the Sun". Additionally, we investigate phonetic and orthographic asymmetries between selected languages by means of the information theoretical notion of surprisal. Syntactic distance and surprisal are, thus, considered as potential predictors of mutual intelligibility between related languages. In spoken and written cloze test experiments for Slavic native speakers, the presented predictors will be validated as to whether variations in syntax lead to a slower or impeded intercomprehension of Slavic texts.},
pubstate = {published},
type = {inproceedings}
}


Project:   C4

Ortmann, Katrin

Fine-Grained Error Analysis and Fair Evaluation of Labeled Spans Inproceedings

Proceedings of the Language Resources and Evaluation Conference (LREC), European Language Resources Association, pp. 1400-1407, Marseille, France, 2022.

The traditional evaluation of labeled spans with precision, recall, and F1-score has undesirable effects due to double penalties. Annotations with incorrect label or boundaries count as two errors instead of one, despite being closer to the target annotation than false positives or false negatives. In this paper, new error types are introduced, which more accurately reflect true annotation quality and ensure that every annotation counts only once. An algorithm for error identification in flat and multi-level annotations is presented and complemented with a proposal on how to calculate meaningful precision, recall, and F1-scores based on the more fine-grained error types. The exemplary application to three different annotation tasks (NER, chunking, parsing) shows that the suggested procedure not only prevents double penalties but also allows for a more detailed error analysis, thereby providing more insight into the actual weaknesses of a system.
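The double penalty can be reproduced with a minimal exact-match span evaluator (a hypothetical sketch, not the paper's algorithm): a prediction with the right boundaries but the wrong label counts as both a false positive and a false negative under traditional scoring.

```python
def prf(gold, predicted):
    """Traditional exact-match evaluation of labeled spans: a span
    counts as correct only if label and boundaries both match."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# One gold span, one prediction with the right boundaries but the
# wrong label: exact matching counts a false positive AND a false
# negative -- the double penalty the paper addresses.
gold = [("PER", 0, 2)]
pred = [("ORG", 0, 2)]
print(prf(gold, pred))  # → (0.0, 0.0, 0.0)
```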

@inproceedings{ortmann2022,
title = {Fine-Grained Error Analysis and Fair Evaluation of Labeled Spans},
author = {Katrin Ortmann},
url = {https://aclanthology.org/2022.lrec-1.150},
year = {2022},
date = {2022-06-21},
booktitle = {Proceedings of the Language Resources and Evaluation Conference (LREC)},
pages = {1400-1407},
publisher = {European Language Resources Association},
address = {Marseille, France},
abstract = {The traditional evaluation of labeled spans with precision, recall, and F1-score has undesirable effects due to double penalties. Annotations with incorrect label or boundaries count as two errors instead of one, despite being closer to the target annotation than false positives or false negatives. In this paper, new error types are introduced, which more accurately reflect true annotation quality and ensure that every annotation counts only once. An algorithm for error identification in flat and multi-level annotations is presented and complemented with a proposal on how to calculate meaningful precision, recall, and F1-scores based on the more fine-grained error types. The exemplary application to three different annotation tasks (NER, chunking, parsing) shows that the suggested procedure not only prevents double penalties but also allows for a more detailed error analysis, thereby providing more insight into the actual weaknesses of a system.},
pubstate = {published},
type = {inproceedings}
}


Project:   C6

Menzel, Katrin; Krielke, Marie-Pauline; Degaetano-Ortlieb, Stefania

Synthetic and analytic adjective negation in English scientific journal articles: A diachronic perspective Journal Article

LEGE ARTIS: Language yesterday, today, tomorrow, VII(1), Trnava: University of SS Cyril and Methodius in Trnava, pp. 157-213, 2022, ISSN 2453-8035.

This paper addresses the development of synthetic and analytic adjective negation in a corpus of English scientific articles from the mid-17th century towards the end of the 20th century. Analytic patterns of adjective negation are found to become less frequent in the language of scientific articles, but more conventionalised in their textual contexts. Conversely, prefixed negated adjectives are identified as more frequent and more diverse with regard to their contexts.

@article{menzel_2022_diachronicperspective,
title = {Synthetic and analytic adjective negation in English scientific journal articles: A diachronic perspective},
author = {Katrin Menzel and Marie-Pauline Krielke and Stefania Degaetano-Ortlieb},
url = {https://www.researchgate.net/publication/361099180_Synthetic_and_analytic_adjective_negation_in_English_scientific_journal_articles_A_diachronic_perspective},
year = {2022},
date = {2022},
journal = {LEGE ARTIS: Language yesterday, today, tomorrow},
pages = {157-213},
publisher = {Trnava: University of SS Cyril and Methodius in Trnava},
volume = {VII},
number = {1},
abstract = {This paper addresses the development of synthetic and analytic adjective negation in a corpus of English scientific articles from the mid-17th century towards the end of the 20th century. Analytic patterns of adjective negation are found to become less frequent in the language of scientific articles, but more conventionalised in their textual contexts. Conversely, prefixed negated adjectives are identified as more frequent and more diverse with regard to their contexts.},
pubstate = {published},
type = {article}
}


Project:   B1

Scholman, Merel; Blything, Liam; Cain, Kate; Evers-Vermeul, Jacqueline

Discourse Rules: The Effects of Clause Order Principles on the Reading Process Journal Article

Language, Cognition and Neuroscience, 37(10), pp. 1277-1291, 2022, ISSN 2327-3798.

In an eye-tracking-while-reading study, we investigated adult monolinguals’ (N=80) processing of two-clause sentences embedded in short narratives. Three principles theorized to guide comprehension of complex sentences were contrasted: one operating at the clause level, namely clause structure (main clause – subordinate clause or vice versa), and two operating at the discourse-level, namely givenness (given-new vs. new-given) and event order (chronological vs. reverse order). The results indicate that clause structure mainly affects early stages of processing, whereas the two principles operating at the discourse level are more important during later stages and for reading times of the entire sentence. Event order was found to operate relatively independently of the other principles. Givenness was found to overrule clause structure, a phenomenon that can be related to the grounding function of preposed subordinate clauses. We propose a new principle to reflect this interaction effect: the grounding principle.

@article{Merel_Rules_2022,
title = {Discourse Rules: The Effects of Clause Order Principles on the Reading Process},
author = {Merel Scholman and Liam Blything and Kate Cain and Jacqueline Evers-Vermeul},
url = {https://www.tandfonline.com/doi/full/10.1080/23273798.2022.2077971},
doi = {https://doi.org/10.1080/23273798.2022.2077971},
year = {2022},
date = {2022},
journal = {Language, Cognition and Neuroscience},
pages = {1277-1291},
volume = {37},
number = {10},
abstract = {In an eye-tracking-while-reading study, we investigated adult monolinguals’ (N=80) processing of two-clause sentences embedded in short narratives. Three principles theorized to guide comprehension of complex sentences were contrasted: one operating at the clause level, namely clause structure (main clause - subordinate clause or vice versa), and two operating at the discourse-level, namely givenness (given-new vs. new-given) and event order (chronological vs. reverse order). The results indicate that clause structure mainly affects early stages of processing, whereas the two principles operating at the discourse level are more important during later stages and for reading times of the entire sentence. Event order was found to operate relatively independently of the other principles. Givenness was found to overrule clause structure, a phenomenon that can be related to the grounding function of preposed subordinate clauses. We propose a new principle to reflect this interaction effect: the grounding principle.},
pubstate = {published},
type = {article}
}


Project:   B2

Kravtchenko, Ekaterina; Demberg, Vera

Informationally redundant utterances elicit pragmatic inferences Journal Article

Cognition, 225, pp. 105159, 2022, ISSN 0010-0277.

Most theories of pragmatics and language processing predict that speakers avoid excessive informational redundancy. Informationally redundant utterances are, however, quite common in natural dialogue. From a comprehension standpoint, it remains unclear how comprehenders interpret these utterances, and whether they make attempts to reconcile the ‘dips’ in informational utility with expectations of ‘appropriate’ or ‘rational’ speaker informativity. We show that informationally redundant (overinformative) utterances can trigger pragmatic inferences that increase utterance utility in line with comprehender expectations. In a series of three studies, we look at utterances which refer to stereotyped event sequences describing common activities (scripts). When comprehenders encounter utterances describing events that can be easily inferred from prior context, they interpret them as signifying that the event conveys new, unstated information (i.e. an event otherwise assumed to be habitual, such as paying the cashier when shopping, is reinterpreted as non-habitual). We call these inferences atypicality inferences. Further, we show that the degree to which these atypicality inferences are triggered depends on the framing of the utterance. In the absence of an exclamation mark or a discourse marker indicating the speaker’s specific intent to communicate the given information, such inferences are far less likely to arise. Overall, the results demonstrate that excessive conceptual redundancy leads to comprehenders revising the conversational common ground, in an effort to accommodate unexpected dips in informational utility.

@article{Kravtchenko_redundant_2022,
title = {Informationally redundant utterances elicit pragmatic inferences},
author = {Ekaterina Kravtchenko and Vera Demberg},
url = {https://www.sciencedirect.com/science/article/pii/S0010027722001470},
doi = {https://doi.org/10.1016/j.cognition.2022.105159},
year = {2022},
date = {2022},
journal = {Cognition},
pages = {105159},
volume = {225},
abstract = {Most theories of pragmatics and language processing predict that speakers avoid excessive informational redundancy. Informationally redundant utterances are, however, quite common in natural dialogue. From a comprehension standpoint, it remains unclear how comprehenders interpret these utterances, and whether they make attempts to reconcile the 'dips' in informational utility with expectations of 'appropriate' or 'rational' speaker informativity. We show that informationally redundant (overinformative) utterances can trigger pragmatic inferences that increase utterance utility in line with comprehender expectations. In a series of three studies, we look at utterances which refer to stereotyped event sequences describing common activities (scripts). When comprehenders encounter utterances describing events that can be easily inferred from prior context, they interpret them as signifying that the event conveys new, unstated information (i.e. an event otherwise assumed to be habitual, such as paying the cashier when shopping, is reinterpreted as non-habitual). We call these inferences atypicality inferences. Further, we show that the degree to which these atypicality inferences are triggered depends on the framing of the utterance. In the absence of an exclamation mark or a discourse marker indicating the speaker's specific intent to communicate the given information, such inferences are far less likely to arise. Overall, the results demonstrate that excessive conceptual redundancy leads to comprehenders revising the conversational common ground, in an effort to accommodate unexpected dips in informational utility.},
keywords = {Accommodation; Context-dependent implicatures; Experimental pragmatics; Psycholinguistics; Redundancy},
pubstate = {published},
type = {article}
}


Project:   A3

Sommerfeld, Linda; Staudte, Maria; Kray, Jutta

Ratings of name agreement and semantic categorization of 247 colored clipart pictures by young German children Journal Article

Acta Psychologica, 226, pp. 103558, 2022, ISSN 0001-6918.

Developmental and longitudinal studies with children increasingly use pictorial stimuli in cognitive, psychological, and psycholinguistic research. To enhance validity and comparability within and across those studies, the use of normed pictures is recommended. Besides, creating picture sets and evaluating them in rating studies is very time consuming, in particular regarding samples of young children in which testing time is rather limited. As there is an increasing number of studies that investigate young German children’s semantic language processing with colored clipart stimuli, this work provides a first set of 247 colored cliparts with ratings of German native speaking children aged 4 to 6 years. We assessed two central rating aspects of pictures: Name agreement (Do pictures elicit the intended name of an object?) and semantic categorization (Are objects classified as members of the intended semantic category?). Our ratings indicate that children are proficient in naming and even better in semantic categorization of objects, whereas both seem to improve with increasing age in young childhood. Finally, this paper discusses some features of pictorial objects that might be important for children’s name agreement and semantic categorization and could be considered in future picture rating studies.


@article{Sommerfeld_of_2022,
title = {Ratings of name agreement and semantic categorization of 247 colored clipart pictures by young German children},
author = {Linda Sommerfeld and Maria Staudte and Jutta Kray},
url = {https://www.sciencedirect.com/science/article/pii/S0001691822000737},
doi = {https://doi.org/10.1016/j.actpsy.2022.103558},
year = {2022},
date = {2022},
journal = {Acta Psychologica},
pages = {103558},
volume = {226},
abstract = {Developmental and longitudinal studies with children increasingly use pictorial stimuli in cognitive, psychological, and psycholinguistic research. To enhance validity and comparability within and across those studies, the use of normed pictures is recommended. Besides, creating picture sets and evaluating them in rating studies is very time consuming, in particular regarding samples of young children in which testing time is rather limited. As there is an increasing number of studies that investigate young German children's semantic language processing with colored clipart stimuli, this work provides a first set of 247 colored cliparts with ratings of German native speaking children aged 4 to 6 years. We assessed two central rating aspects of pictures: Name agreement (Do pictures elicit the intended name of an object?) and semantic categorization (Are objects classified as members of the intended semantic category?). Our ratings indicate that children are proficient in naming and even better in semantic categorization of objects, whereas both seem to improve with increasing age in young childhood. Finally, this paper discusses some features of pictorial objects that might be important for children's name agreement and semantic categorization and could be considered in future picture rating studies.},
keywords = {Name agreement, Semantic categorization, Picture naming, Picture ratings, Children, Age differences},
pubstate = {published},
type = {article}
}


Project:   A5

Höltje, Gerrit; Mecklinger, Axel

Benefits and costs of predictive processing: How sentential constraint and word expectedness affect memory formation Journal Article

Brain Research, 1788, pp. 147942, 2022, ISSN 0006-8993.

This study investigated how the strength of schema support provided by strongly (SC) and weakly constraining (WC) sentences affects the encoding of expected and unexpected words, and how this is reflected in event-related potentials (ERPs). In a surprise recognition memory test, words studied on the previous day were presented together with new words and lures that were expected but not presented in the study phase. ERPs recorded in the study phase were compared for subsequently remembered and forgotten words. Better memory performance for expected over unexpected words was electrophysiologically supported by a parietal subsequent memory effect (SME) reflecting enhanced item-specific encoding of contextually expected words. SC sentences not only facilitated the semantic integration of sentence-ending words, as reflected in reduced N400 amplitudes, but also enabled the rapid successful encoding of these words into memory, which is evidenced by an SC > WC pattern in memory performance and correlations between pre- and post-stimulus SMEs for SC sentences. In contrast, words processed in WC sentence contexts necessitated sustained elaborative encoding processes as reflected in a late frontal slow wave SME. Expected but not presented words were associated with high rates of false positive memory decisions, indicating that these words remained in a state of high accessibility in memory even one day after the study phase. These mnemonic costs of predictive processing were more pronounced for expected words from SC sentences than from WC sentences and could reflect the lingering of strong semantic predictions which were associated with the pre-updating of sentence representations.

@article{Höltje_and_2022,
title = {Benefits and costs of predictive processing: How sentential constraint and word expectedness affect memory formation},
author = {Gerrit H{\"o}ltje and Axel Mecklinger},
url = {https://www.sciencedirect.com/science/article/abs/pii/S0006899322001664},
doi = {https://doi.org/10.1016/j.brainres.2022.147942},
year = {2022},
date = {2022},
journal = {Brain Research},
pages = {147942},
volume = {1788},
abstract = {This study investigated how the strength of schema support provided by strongly (SC) and weakly constraining (WC) sentences affects the encoding of expected and unexpected words, and how this is reflected in event-related potentials (ERPs). In a surprise recognition memory test, words studied on the previous day were presented together with new words and lures that were expected but not presented in the study phase. ERPs recorded in the study phase were compared for subsequently remembered and forgotten words. Better memory performance for expected over unexpected words was electrophysiologically supported by a parietal subsequent memory effect (SME) reflecting enhanced item-specific encoding of contextually expected words. SC sentences not only facilitated the semantic integration of sentence-ending words, as reflected in reduced N400 amplitudes, but also enabled the rapid successful encoding of these words into memory, which is evidenced by an SC > WC pattern in memory performance and correlations between pre- and post-stimulus SMEs for SC sentences. In contrast, words processed in WC sentence contexts necessitated sustained elaborative encoding processes as reflected in a late frontal slow wave SME. Expected but not presented words were associated with high rates of false positive memory decisions, indicating that these words remained in a state of high accessibility in memory even one day after the study phase. These mnemonic costs of predictive processing were more pronounced for expected words from SC sentences than from WC sentences and could reflect the lingering of strong semantic predictions which were associated with the pre-updating of sentence representations.},
pubstate = {published},
type = {article}
}


Project:   A6

Zouhar, Vilém; Mosbach, Marius; Zhang, Miaoran; Klakow, Dietrich

Knowledge Base Index Compression via Dimensionality and Precision Reduction Inproceedings

Spa-NLP Workshop at ACL 2022, Dublin, Ireland, 22-27 May 2022.

Recently neural network based approaches to knowledge-intensive NLP tasks, such as question answering, started to rely heavily on the combination of neural retrievers and readers. Retrieval is typically performed over a large textual knowledge base (KB) which requires significant memory and compute resources, especially when scaled up. On HotpotQA we systematically investigate reducing the size of the KB index by means of dimensionality (sparse random projections, PCA, autoencoders) and numerical precision reduction. Our results show that PCA is an easy solution that requires very little data and is only slightly worse than autoencoders, which are less stable. All methods are sensitive to pre- and post-processing and data should always be centered and normalized both before and after dimension reduction. Finally, we show that it is possible to combine PCA with using 1 bit per dimension. Overall we achieve (1) 100× compression with 75%, and (2) 24× compression with 92% original retrieval performance.

@inproceedings{Zouhar_2022_Base,
title = {Knowledge Base Index Compression via Dimensionality and Precision Reduction},
author = {Vil{\'e}m Zouhar and Marius Mosbach and Miaoran Zhang and Dietrich Klakow},
url = {https://arxiv.org/abs/2204.02906},
year = {2022},
date = {2022},
publisher = {Spa-NLP workshop at ACL 2022},
address = {Dublin, Ireland},
abstract = {Recently neural network based approaches to knowledge-intensive NLP tasks, such as question answering, started to rely heavily on the combination of neural retrievers and readers. Retrieval is typically performed over a large textual knowledge base (KB) which requires significant memory and compute resources, especially when scaled up. On HotpotQA we systematically investigate reducing the size of the KB index by means of dimensionality (sparse random projections, PCA, autoencoders) and numerical precision reduction. Our results show that PCA is an easy solution that requires very little data and is only slightly worse than autoencoders, which are less stable. All methods are sensitive to pre- and post-processing and data should always be centered and normalized both before and after dimension reduction. Finally, we show that it is possible to combine PCA with using 1 bit per dimension. Overall we achieve (1) 100× compression with 75%, and (2) 24× compression with 92% original retrieval performance.},
pubstate = {published},
type = {inproceedings}
}


Project:   B4
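
The pipeline described in the abstract above (center and normalize, reduce dimensionality with PCA, then keep 1 bit per dimension) can be sketched in a few lines. This is an illustration with assumed function and parameter names, not the authors' released code:

```python
import numpy as np

def compress_index(embeddings, dim=32):
    """Compress a retrieval index: center/normalize, PCA, 1-bit quantize.

    Following the recipe in the abstract, the data is centered and
    L2-normalized both before and after the dimensionality reduction,
    then each reduced dimension is stored as its sign only.
    """
    def center_norm(x):
        x = x - x.mean(axis=0)
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    x = center_norm(embeddings)
    # PCA via SVD: project onto the top `dim` principal components
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    components = vt[:dim]
    reduced = center_norm(x @ components.T)
    # 1-bit precision: keep only the sign of each dimension
    bits = reduced > 0
    return bits, components
```

With signs only, similarity search over the compressed index reduces to Hamming distance between bit vectors, which is where the large memory savings come from.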

Dutta Chowdhury, Koel; Jalota, Rricha; van Genabith, Josef; España-Bonet, Cristina

Towards Debiasing Translation Artifacts Inproceedings

Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics, pp. 3983-3991, Seattle, United States, July 2022.

Cross-lingual natural language processing relies on translation, either by humans or machines, at different levels, from translating training data to translating test sets. However, compared to original texts in the same language, translations possess distinct qualities referred to as translationese. Previous research has shown that these translation artifacts influence the performance of a variety of cross-lingual tasks. In this work, we propose a novel approach to reducing translationese by extending an established bias-removal technique. We use the Iterative Null-space Projection (INLP) algorithm, and show by measuring classification accuracy before and after debiasing, that translationese is reduced at both sentence and word level. We evaluate the utility of debiasing translationese on a natural language inference (NLI) task, and show that by reducing this bias, NLI accuracy improves. To the best of our knowledge, this is the first study to debias translationese as represented in latent embedding space.

@inproceedings{Chowdhury_2022_Debiasing,
title = {Towards Debiasing Translation Artifacts},
author = {Koel Dutta Chowdhury and Rricha Jalota and Josef van Genabith and Cristina Espa{\~n}a-Bonet},
url = {https://aclanthology.org/2022.naacl-main.292/},
doi = {https://doi.org/10.18653/v1/2022.naacl-main.292},
year = {2022},
date = {2022},
booktitle = {Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
pages = {3983-3991},
publisher = {Association for Computational Linguistics},
address = {Seattle, United States},
abstract = {Cross-lingual natural language processing relies on translation, either by humans or machines, at different levels, from translating training data to translating test sets. However, compared to original texts in the same language, translations possess distinct qualities referred to as translationese. Previous research has shown that these translation artifacts influence the performance of a variety of cross-lingual tasks. In this work, we propose a novel approach to reducing translationese by extending an established bias-removal technique. We use the Iterative Null-space Projection (INLP) algorithm, and show by measuring classification accuracy before and after debiasing, that translationese is reduced at both sentence and word level. We evaluate the utility of debiasing translationese on a natural language inference (NLI) task, and show that by reducing this bias, NLI accuracy improves. To the best of our knowledge, this is the first study to debias translationese as represented in latent embedding space.},
pubstate = {published},
type = {inproceedings}
}


Project:   B6
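
The bias-removal step the abstract refers to, Iterative Null-space Projection, can be sketched as follows. For brevity the trained linear classifier of the actual algorithm is replaced here by the class-mean difference direction, so this is an illustrative simplification, not the authors' implementation:

```python
import numpy as np

def inlp_debias(x, labels, n_iters=5):
    """Simplified sketch of Iterative Null-space Projection (INLP).

    Each iteration finds a direction that predicts the protected label
    (here: translationese vs. original, approximated by the class-mean
    difference instead of a trained linear classifier) and projects the
    data onto that direction's nullspace, removing the linearly encoded
    signal.
    """
    x = x.copy()
    for _ in range(n_iters):
        w = x[labels == 1].mean(axis=0) - x[labels == 0].mean(axis=0)
        norm = np.linalg.norm(w)
        if norm == 0:
            break
        w = w / norm
        # nullspace projection: remove each row's component along w
        x = x - np.outer(x @ w, w)
    return x
```

After debiasing, a linear probe trained to separate the two classes should fall toward chance accuracy, which is how the paper measures that translationese has been reduced.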

Mayn, Alexandra; Demberg, Vera

Pragmatics of Metaphor Revisited: Modeling the Role of Degree and Salience in Metaphor Understanding Inproceedings

Proceedings of the Annual Meeting of the Cognitive Science Society, 43(43), CogSci 2022, pp. 3178ff., 2022.

Experimental pragmatics tells us that a metaphor conveys salient features of a vehicle and that highly typical features tend to be salient. But can highly atypical features also be salient? When asking if John is loyal and hearing “John is a fox”, will the hearer conclude that John is disloyal because loyalty is saliently atypical for a fox? This prediction follows from our RSA-based model of metaphor understanding which relies on gradient salience. Our behavioral experiments corroborate the model’s predictions, providing evidence that high and low typicality are salient and result in high interpretation confidence and agreement, while average typicality is not salient and makes a metaphor confusing. Our model implements the idea that other features of a vehicle, along with possible alternative vehicles, influence metaphor interpretation. It produces a significantly better fit compared to an existing RSA model of metaphor understanding, supporting our predictions about the factors at play.

@inproceedings{Mayn_2022_of,
title = {Pragmatics of Metaphor Revisited: Modeling the Role of Degree and Salience in Metaphor Understanding},
author = {Alexandra Mayn and Vera Demberg},
url = {https://escholarship.org/uc/item/7kq207zs},
year = {2022},
date = {2022},
booktitle = {Proceedings of the Annual Meeting of the Cognitive Science Society, 43(43)},
pages = {3178ff.},
publisher = {CogSci2022},
abstract = {Experimental pragmatics tells us that a metaphor conveys salient features of a vehicle and that highly typical features tend to be salient. But can highly atypical features also be salient? When asking if John is loyal and hearing “John is a fox”, will the hearer conclude that John is disloyal because loyalty is saliently atypical for a fox? This prediction follows from our RSA-based model of metaphor understanding which relies on gradient salience. Our behavioral experiments corroborate the model’s predictions, providing evidence that high and low typicality are salient and result in high interpretation confidence and agreement, while average typicality is not salient and makes a metaphor confusing. Our model implements the idea that other features of a vehicle, along with possible alternative vehicles, influence metaphor interpretation. It produces a significantly better fit compared to an existing RSA model of metaphor understanding, supporting our predictions about the factors at play.},
pubstate = {published},
type = {inproceedings}
}


Project:   B2
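
The Rational Speech Act framework underlying the two entries above can be sketched in its vanilla form; the gradient-salience and noisy-channel extensions the papers propose are not reproduced here, so this is only the shared baseline recursion:

```python
import numpy as np

def rsa_listener(lexicon, prior, alpha=1.0):
    """Vanilla Rational Speech Act (RSA) recursion, a generic sketch.

    lexicon[u, m] is 1 if utterance u is literally true of meaning m.
    L0 interprets literally, S1 soft-maximizes L0 informativity, and L1
    reasons about S1.
    """
    # L0: literal listener, truth conditions times the meaning prior
    l0 = lexicon * prior
    l0 = l0 / l0.sum(axis=1, keepdims=True)
    # S1: pragmatic speaker; l0 ** alpha equals exp(alpha * log l0)
    s1 = l0 ** alpha
    s1 = s1 / s1.sum(axis=0, keepdims=True)
    # L1: pragmatic listener inverts the speaker model
    l1 = s1 * prior
    return l1 / l1.sum(axis=1, keepdims=True)
```

For instance, with utterances {"some", "all"} over meanings {some-but-not-all, all} and a uniform prior, L1 strengthens "some" toward the some-but-not-all reading, the classic scalar implicature.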

Kravtchenko, Ekaterina; Demberg, Vera

Modeling atypicality inferences in pragmatic reasoning Journal Article

Proceedings of the Annual Meeting of the Cognitive Science Society, 44, CogSci 2022, pp. 1918-1924, Toronto, Canada, 2022.

Empirical studies have demonstrated that when comprehenders are faced with informationally redundant utterances, they may make pragmatic inferences (Kravtchenko & Demberg, 2015). Previous work has also shown that the strength of these inferences depends on prominence of the redundant utterance – if it is stressed prosodically, marked with an exclamation mark, or introduced with a discourse marker such as “Oh yeah”, atypicality inferences are stronger (Kravtchenko & Demberg, 2015, 2022; Ryzhova & Demberg, 2020). The goal of the present paper is to demonstrate how both the atypicality inference and the effect of prominence can be modelled using the rational speech act (RSA) framework. We show that atypicality inferences can be captured by introducing joint reasoning about the habituality of events, following Degen, Tessler, and Goodman (2015); Goodman and Frank (2016). However, we find that joint reasoning models principally cannot account for the effect of differences in utterance prominence. This is because prominence markers do not contribute to the truth-conditional meaning. We then proceed to demonstrate that leveraging a noisy channel model, which has previously been used to model low-level acoustic perception (Bergen & Goodman, 2015), can successfully account for the empirically observed patterns of utterance prominence.

@article{Kravtchenko_2022_atypicality,
title = {Modeling atypicality inferences in pragmatic reasoning},
author = {Ekaterina Kravtchenko and Vera Demberg},
url = {https://escholarship.org/uc/item/7630p08b},
year = {2022},
date = {2022},
journal = {Proceedings of the Annual Meeting of the Cognitive Science Society},
pages = {1918-1924},
publisher = {CogSci 2022},
address = {Toronto, Canada},
volume = {44},
number = {44},
abstract = {Empirical studies have demonstrated that when comprehenders are faced with informationally redundant utterances, they may make pragmatic inferences (Kravtchenko & Demberg, 2015). Previous work has also shown that the strength of these inferences depends on prominence of the redundant utterance – if it is stressed prosodically, marked with an exclamation mark, or introduced with a discourse marker such as “Oh yeah”, atypicality inferences are stronger (Kravtchenko & Demberg, 2015, 2022; Ryzhova & Demberg, 2020). The goal of the present paper is to demonstrate how both the atypicality inference and the effect of prominence can be modelled using the rational speech act (RSA) framework. We show that atypicality inferences can be captured by introducing joint reasoning about the habituality of events, following Degen, Tessler, and Goodman (2015); Goodman and Frank (2016). However, we find that joint reasoning models principally cannot account for the effect of differences in utterance prominence. This is because prominence markers do not contribute to the truth-conditional meaning. We then proceed to demonstrate that leveraging a noisy channel model, which has previously been used to model low-level acoustic perception (Bergen & Goodman, 2015), can successfully account for the empirically observed patterns of utterance prominence.},
pubstate = {published},
type = {article}
}


Project:   A3
