Synthesis of Tongue Motion and Acoustics from Text using a Multimodal Articulatory Database (Journal Article)
Ingmar Steiner, Sébastien Le Maguer, Alexander Hewer
IEEE/ACM Transactions on Audio, Speech, and Language Processing, 25 (12), pp. 2351-2361, 2017.

We present an end-to-end text-to-speech (TTS) synthesis system that generates audio and synchronized tongue motion directly from text. This is achieved by adapting a 3D model of the tongue surface to an articulatory dataset and training a statistical parametric speech synthesis system directly on the tongue model parameters. We evaluate the model at every step by comparing the spatial coordinates of predicted articulatory movements against the reference data. The results indicate a global mean Euclidean distance of less than 2.8 mm, and our approach can be adapted to add an articulatory modality to conventional TTS applications without the need for extra data.
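To illustrate the evaluation metric quoted in the abstract, here is a minimal Python sketch of a global mean Euclidean distance computed between predicted and reference articulator coordinates. The array shapes, the millimetre units, and the mean_euclidean_distance helper are assumptions chosen for illustration, not the authors' implementation.

import numpy as np

def mean_euclidean_distance(predicted: np.ndarray, reference: np.ndarray) -> float:
    """Global mean Euclidean distance between two point trajectories.

    Both arrays are assumed to have shape (frames, points, 3): the 3D
    coordinates (e.g. in mm) of tracked tongue-surface points per frame.
    """
    # Per-frame, per-point Euclidean distance over the last (x, y, z) axis.
    distances = np.linalg.norm(predicted - reference, axis=-1)
    # "Global" mean: average over all frames and all tracked points.
    return float(distances.mean())

# Hypothetical usage with synthetic trajectories (100 frames, 5 points).
rng = np.random.default_rng(0)
reference = rng.normal(scale=10.0, size=(100, 5, 3))
predicted = reference + rng.normal(scale=1.5, size=reference.shape)
print(f"global mean Euclidean distance: {mean_euclidean_distance(predicted, reference):.2f} mm")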
@article{Steiner2017TASLP,
title = {Synthesis of Tongue Motion and Acoustics from Text using a Multimodal Articulatory Database},
author = {Ingmar Steiner and S{\'e}bastien Le Maguer and Alexander Hewer},
url = {https://arxiv.org/abs/1612.09352},
year = {2017},
journal = {IEEE/ACM Transactions on Audio, Speech, and Language Processing},
pages = {2351--2361},
volume = {25},
number = {12},
abstract = {We present an end-to-end text-to-speech (TTS) synthesis system that generates audio and synchronized tongue motion directly from text. This is achieved by adapting a 3D model of the tongue surface to an articulatory dataset and training a statistical parametric speech synthesis system directly on the tongue model parameters. We evaluate the model at every step by comparing the spatial coordinates of predicted articulatory movements against the reference data. The results indicate a global mean Euclidean distance of less than 2.8 mm, and our approach can be adapted to add an articulatory modality to conventional TTS applications without the need for extra data.},
pubstate = {published},
}
Project: C5