Low-Resource Cross-Lingual Adaptive Training for Nigerian Pidgin (Inproceedings)
Proceedings of the 24th INTERSPEECH conference, 2023.

Developing effective spoken language processing systems for low-resource languages poses several challenges due to the lack of parallel data and limited resources for fine-tuning models. In this work, we aim to improve both text classification and translation of Nigerian Pidgin (Naija) by collecting a large-scale parallel English-Pidgin corpus, and we further propose a framework of cross-lingual adaptive training that includes both continual and task adaptive training, so as to adapt a base pre-trained model to low-resource languages. Our studies show that English pre-trained language models serve as a stronger prior than multilingual language models on English-Pidgin tasks, with improvements of up to 2.38 BLEU; we also demonstrate that augmenting orthographic data and using task adaptive training with back-translation can have a significant impact on model performance.
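As a rough illustration of the framework the abstract describes, the sketch below shows the task adaptive stage with Hugging Face Transformers: an English pre-trained seq2seq model is taken as the base prior and fine-tuned on the parallel English-Pidgin corpus, with back-translated pairs simply appended to the training file. The model name, file path, and hyperparameters are illustrative assumptions, not taken from the paper.

# Minimal sketch of task adaptive training for English->Pidgin translation.
# "t5-small", "en_pcm_parallel.jsonl", and all hyperparameters are
# hypothetical placeholders; the paper's actual setup may differ.
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "t5-small"  # an English pre-trained model used as the base prior
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def preprocess(batch):
    # Tokenize English sources and Pidgin targets; the collator pads later.
    features = tokenizer(batch["en"], truncation=True, max_length=128)
    labels = tokenizer(text_target=batch["pcm"], truncation=True, max_length=128)
    features["labels"] = labels["input_ids"]
    return features

# Parallel corpus with one {"en": ..., "pcm": ...} object per line;
# back-translated pairs can be concatenated to this file before training.
data = load_dataset("json", data_files={"train": "en_pcm_parallel.jsonl"})
train_set = data["train"].map(preprocess, batched=True, remove_columns=["en", "pcm"])

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="naija-mt", num_train_epochs=3),
    train_dataset=train_set,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()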
@inproceedings{lin-et-al-2023,
title = {Low-Resource Cross-Lingual Adaptive Training for Nigerian Pidgin},
author = {Pin-Jie Lin and Muhammed Saeed and Ernie Chang and Merel Scholman},
url = {https://arxiv.org/abs/2307.00382},
year = {2023},
date = {2023},
booktitle = {Proceedings of the 24th INTERSPEECH conference},
abstract = {Developing effective spoken language processing systems for low-resource languages poses several challenges due to the lack of parallel data and limited resources for fine-tuning models. In this work, we aim to improve both text classification and translation of Nigerian Pidgin (Naija) by collecting a large-scale parallel English-Pidgin corpus, and we further propose a framework of cross-lingual adaptive training that includes both continual and task adaptive training, so as to adapt a base pre-trained model to low-resource languages. Our studies show that English pre-trained language models serve as a stronger prior than multilingual language models on English-Pidgin tasks, with improvements of up to 2.38 BLEU; we also demonstrate that augmenting orthographic data and using task adaptive training with back-translation can have a significant impact on model performance.},
pubstate = {published},
type = {inproceedings}
}
Project: B2