Visual Coherence Loss for Coherent and Visually Grounded Story Generation (Inproceedings)
Rogers, Anna; Boyd-Graber, Jordan; Okazaki, Naoaki (Ed.): Findings of the Association for Computational Linguistics: ACL 2023, Association for Computational Linguistics, pp. 9456–9470, Toronto, Canada, 2023.

Local coherence is essential for long-form text generation models. We identify two important aspects of local coherence within the visual storytelling task: (1) the model needs to represent re-occurrences of characters within the image sequence in order to mention them correctly in the story; (2) character representations should enable us to find instances of the same characters and distinguish different characters. In this paper, we propose a loss function inspired by a linguistic theory of coherence for self-supervised learning for image sequence representations. We further propose combining features from an object and a face detector to construct stronger character features. To evaluate input-output relevance that current reference-based metrics don't measure, we propose a character matching metric to check whether the models generate referring expressions correctly for characters in input image sequences. Experiments on a visual story generation dataset show that our proposed features and loss function are effective for generating more coherent and visually grounded stories.
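The character-feature construction described in the abstract can be illustrated with a minimal sketch: fusing per-character embeddings from an object detector and a face detector. Everything here is an assumption for illustration only (the detector choices, the feature dimensions, and fusion by simple concatenation); it is not the paper's actual implementation.

```python
# Hypothetical sketch: build character representations by concatenating
# object-detector features with face-detector features, one row per character.
# Dimensions (2048-d object, 512-d face) and the fusion strategy are assumed,
# not taken from the paper.
import numpy as np

def build_character_features(object_feats: np.ndarray,
                             face_feats: np.ndarray) -> np.ndarray:
    """Fuse per-character object features (e.g., from a region-based object
    detector) with face embeddings (e.g., from a face-recognition model)
    by concatenation along the feature axis."""
    assert object_feats.shape[0] == face_feats.shape[0], "one row per character"
    return np.concatenate([object_feats, face_feats], axis=-1)

# Example: 3 detected characters with 2048-d object and 512-d face features.
chars = build_character_features(np.random.rand(3, 2048), np.random.rand(3, 512))
print(chars.shape)  # (3, 2560)
```

Such fused representations could then feed both the story generator and a matching step that links re-occurring characters across the image sequence, in the spirit of the proposed coherence loss and character matching metric.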
@inproceedings{hong-etal-2023-visual,
title = {Visual Coherence Loss for Coherent and Visually Grounded Story Generation},
author = {Xudong Hong and Vera Demberg and Asad Sayeed and Qiankun Zheng and Bernt Schiele},
editor = {Anna Rogers and Jordan Boyd-Graber and Naoaki Okazaki},
url = {https://aclanthology.org/2023.findings-acl.603},
doi = {10.18653/v1/2023.findings-acl.603},
year = {2023},
date = {2023},
booktitle = {Findings of the Association for Computational Linguistics: ACL 2023},
pages = {9456--9470},
publisher = {Association for Computational Linguistics},
address = {Toronto, Canada},
abstract = {Local coherence is essential for long-form text generation models. We identify two important aspects of local coherence within the visual storytelling task: (1) the model needs to represent re-occurrences of characters within the image sequence in order to mention them correctly in the story; (2) character representations should enable us to find instances of the same characters and distinguish different characters. In this paper, we propose a loss function inspired by a linguistic theory of coherence for self-supervised learning for image sequence representations. We further propose combining features from an object and a face detector to construct stronger character features. To evaluate input-output relevance that current reference-based metrics don{'}t measure, we propose a character matching metric to check whether the models generate referring expressions correctly for characters in input image sequences. Experiments on a visual story generation dataset show that our proposed features and loss function are effective for generating more coherent and visually grounded stories.},
pubstate = {published},
type = {inproceedings}
}
Project: A3