Generalizing across Languages and Domains for Discourse Relation Classification
Bourgonje, Peter; Demberg, Vera. In: Kawahara, Tatsuya; Demberg, Vera; Ultes, Stefan; Inoue, Koji; Mehri, Shikib; Howcroft, David; Komatani, Kazunori (Ed.): Proceedings of the 25th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Association for Computational Linguistics, pp. 554–565, Kyoto, Japan, 2024.

Abstract: The availability of corpora annotated for discourse relations is limited and discourse relation classification performance varies greatly depending on both language and domain. This is a problem for downstream applications that are intended for a language (i.e., not English) or a domain (i.e., not financial news) with comparatively low coverage for discourse annotations. In this paper, we experiment with a state-of-the-art model for discourse relation classification, originally developed for English, extend it to a multi-lingual setting (testing on Italian, Portuguese and Turkish), and employ a simple, yet effective method to mark out-of-domain training instances. By doing so, we aim to contribute to better generalization and more robust discourse relation classification performance across both language and domain.
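The abstract does not spell out how out-of-domain training instances are marked. One common realization of such marking, shown below as a minimal sketch, is to prepend a reserved marker token to each out-of-domain instance before fine-tuning a multilingual encoder. The model name (xlm-roberta-base), the marker token [OOD], and the mark_instance helper are illustrative assumptions, not the paper's actual setup.

    # Minimal sketch: marking out-of-domain instances with a special token.
    # All names below are illustrative assumptions, not the paper's setup.
    from transformers import AutoTokenizer

    MODEL_NAME = "xlm-roberta-base"  # multilingual encoder (assumption)
    OOD_MARKER = "[OOD]"             # hypothetical marker token

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    # Register the marker so it is kept as a single token; in real
    # fine-tuning the model's embedding matrix must be resized to match.
    tokenizer.add_special_tokens({"additional_special_tokens": [OOD_MARKER]})

    def mark_instance(arg1: str, arg2: str, in_domain: bool) -> str:
        """Join the two arguments of a discourse relation, prefixing
        out-of-domain training instances with the marker token."""
        text = f"{arg1} {tokenizer.sep_token} {arg2}"
        return text if in_domain else f"{OOD_MARKER} {text}"

    # Example: a financial-news instance used to train a classifier
    # aimed at another domain would be marked as out-of-domain.
    print(mark_instance("Sales rose 5%.", "The stock fell anyway.", in_domain=False))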
@inproceedings{bourgonje-demberg-2024-generalizing,
title = {Generalizing across Languages and Domains for Discourse Relation Classification},
author = {Peter Bourgonje and Vera Demberg},
editor = {Tatsuya Kawahara and Vera Demberg and Stefan Ultes and Koji Inoue and Shikib Mehri and David Howcroft and Kazunori Komatani},
url = {https://aclanthology.org/2024.sigdial-1.47/},
doi = {10.18653/v1/2024.sigdial-1.47},
year = {2024},
date = {2024},
booktitle = {Proceedings of the 25th Annual Meeting of the Special Interest Group on Discourse and Dialogue},
pages = {554--565},
publisher = {Association for Computational Linguistics},
address = {Kyoto, Japan},
abstract = {The availability of corpora annotated for discourse relations is limited and discourse relation classification performance varies greatly depending on both language and domain. This is a problem for downstream applications that are intended for a language (i.e., not English) or a domain (i.e., not financial news) with comparatively low coverage for discourse annotations. In this paper, we experiment with a state-of-the-art model for discourse relation classification, originally developed for English, extend it to a multi-lingual setting (testing on Italian, Portuguese and Turkish), and employ a simple, yet effective method to mark out-of-domain training instances. By doing so, we aim to contribute to better generalization and more robust discourse relation classification performance across both language and domain.},
pubstate = {published},
type = {inproceedings}
}
Project: B2