Publications

Varghese, Nobel; Yung, Frances Pik Yu; Anuranjana, Kaveri; Demberg, Vera

Exploiting Knowledge about Discourse Relations for Implicit Discourse Relation Classification

Strube, Michael; Braud, Chloe; Hardmeier, Christian; Jessy Li, Junyi; Loaiciga, Sharid; Zeldes, Amir (Ed.): Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023), Association for Computational Linguistics, pp. 99-105, Toronto, Canada, 2023.

In discourse relation recognition, the classification labels are typically represented as one-hot vectors. However, the categories are in fact not all independent of one another; on the contrary, there are several frameworks that describe the labels' similarities (e.g. by sorting them into a hierarchy or describing them in terms of features (Sanders et al., 2021)). Recently, several methods for representing the similarities between labels have been proposed (Zhang et al., 2018; Wang et al., 2018; Xiong et al., 2021). We here explore and extend the Label Confusion Model (Guo et al., 2021) for learning a representation for discourse relation labels. We explore alternative ways of informing the model about the similarities between relations, by representing relations in terms of their names (and parent category), their typical markers, or in terms of CCR features that describe the relations. Experimental results show that exploiting label similarity improves classification results.
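The core idea can be illustrated with a minimal sketch: instead of training against one-hot targets, the classifier is trained against soft label distributions that reflect how similar the relation labels are to one another. The snippet below is not the authors' implementation; it is a hedged PyTorch sketch in the spirit of the Label Confusion Model, and all names, dimensions, and the mixing hyperparameter are illustrative assumptions.

```python
# Sketch (assumptions throughout): soft targets from label similarity,
# in the spirit of the Label Confusion Model (Guo et al., 2021).
import torch
import torch.nn.functional as F

num_labels, hidden = 14, 768  # assumed sizes, e.g. second-level senses

# Learnable label representations; these could instead be initialised from
# label names (and parent category), typical markers, or CCR feature vectors.
label_emb = torch.nn.Embedding(num_labels, hidden)

def soft_targets(instance_repr, gold, alpha=0.9):
    """Mix the one-hot gold label with a similarity-based distribution.

    instance_repr: (batch, hidden) encoder output for the argument pair
    gold:          (batch,) gold relation indices
    alpha:         assumed weight of the one-hot component
    """
    sim = instance_repr @ label_emb.weight.T        # (batch, num_labels)
    sim_dist = F.softmax(sim, dim=-1)               # similarity distribution
    one_hot = F.one_hot(gold, num_labels).float()
    return alpha * one_hot + (1 - alpha) * sim_dist # softened target

def soft_label_loss(logits, instance_repr, gold):
    # KL divergence between the predicted distribution and the soft target.
    target = soft_targets(instance_repr, gold)
    return F.kl_div(F.log_softmax(logits, dim=-1), target,
                    reduction="batchmean")
```

In this sketch, relations that the label representations place close together receive some probability mass even when they are not the gold label, so confusing two closely related senses is penalised less than confusing two unrelated ones.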
