@inproceedings{Shi2019b,
title = {Learning to Explicitate Connectives with Seq2Seq Network for Implicit Discourse Relation Classification},
author = {Wei Shi and Vera Demberg},
url = {https://aclanthology.org/W19-0416},
doi = {10.18653/v1/W19-0416},
year = {2019},
date = {2019},
booktitle = {Proceedings of the 13th International Conference on Computational Semantics},
pages = {188--199},
publisher = {Association for Computational Linguistics},
address = {Gothenburg, Sweden},
abstract = {Implicit discourse relation classification is one of the most difficult steps in discourse parsing. The difficulty stems from the fact that the coherence relation must be inferred from the content of the discourse relational arguments. An effective encoding of the relational arguments is therefore of crucial importance. We propose a new model for implicit discourse relation classification, which consists of a classifier and a sequence-to-sequence model that is trained to generate a representation of the discourse relational arguments by predicting the relational arguments together with a suitable implicit connective. Training is possible because such implicit connectives have been annotated as part of the PDTB corpus. Combined with a memory network, our model generates more refined representations for the task. On the now-standard 11-way classification, our method outperforms previous state-of-the-art systems on the PDTB benchmark in multiple settings, including cross-validation.},
pubstate = {published},
type = {inproceedings}
}