Visatronic: A Multimodal Decoder-Only Model for Speech Synthesis

Apple

Abstract

In this paper, we propose a new task - generating speech from videos of people and their transcripts (VTTS) - to motivate new techniques for multimodal speech generation. This task generalizes the task of generating speech from cropped lip videos and is more complicated than generating generic audio clips (e.g., a dog barking) from videos and text. Multilingual versions of the task could lead to new techniques for cross-lingual dubbing. We also present a decoder-only multimodal model for this task, which we call Visatronic. This model embeds vision, text, and speech directly into the common subspace of a transformer and uses an autoregressive loss to learn a generative model of discretized mel-spectrograms conditioned on speaker videos and transcripts of their speech. By embedding all modalities into a common subspace, Visatronic achieves improved results over models that use only text or video as input. Further, it offers a much simpler approach to multimodal speech generation than prevailing approaches, which rely on lip detectors and complicated architectures to fuse modalities, while producing better results. Since the model is flexible enough to accommodate different orderings of the inputs as a sequence, we carefully explore several strategies to better understand how best to propagate information to the generative steps. To facilitate further research on VTTS, we will release (i) our code, (ii) clean transcriptions for the large-scale VoxCeleb2 dataset, and (iii) a standardized evaluation protocol for VTTS incorporating both objective and subjective metrics.
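To make the "common subspace" idea concrete, the following is a minimal sketch of how the three modalities could be projected into one embedding space, concatenated in a chosen order (video-then-text or text-then-video), and trained with an autoregressive loss on discretized mel tokens. Dimensions, vocabulary sizes, and the simple linear/embedding projections are illustrative assumptions, not the released Visatronic implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

D = 512                              # shared (common subspace) dimension, assumed
TEXT_VOCAB, MEL_VOCAB = 256, 1024    # assumed character and discretized-mel vocab sizes

video_proj = nn.Linear(768, D)          # project per-frame video features into the subspace
text_emb = nn.Embedding(TEXT_VOCAB, D)  # embed transcript tokens
mel_emb = nn.Embedding(MEL_VOCAB, D)    # embed discretized mel-spectrogram tokens

decoder = nn.TransformerEncoder(        # causal mask below makes this a decoder-only transformer
    nn.TransformerEncoderLayer(d_model=D, nhead=8, batch_first=True), num_layers=6)
to_mel_logits = nn.Linear(D, MEL_VOCAB)

# Toy inputs: 20 video frames (768-d features), 15 text tokens, 50 mel tokens.
video = torch.randn(1, 20, 768)
text = torch.randint(0, TEXT_VOCAB, (1, 15))
mel = torch.randint(0, MEL_VOCAB, (1, 50))

v, t, m = video_proj(video), text_emb(text), mel_emb(mel)

# Different conditioning orders: video-then-text ("VT") or text-then-video ("TV"),
# always followed by the speech tokens to be generated.
order = "VT"
prefix = torch.cat([v, t], dim=1) if order == "VT" else torch.cat([t, v], dim=1)
seq = torch.cat([prefix, m], dim=1)

causal = nn.Transformer.generate_square_subsequent_mask(seq.size(1))
h = decoder(seq, mask=causal)

# Autoregressive loss: each mel token is predicted from the hidden state
# at the previous position (prefix end predicts the first mel token).
logits = to_mel_logits(h[:, prefix.size(1) - 1:-1, :])
loss = F.cross_entropy(logits.reshape(-1, MEL_VOCAB), mel.reshape(-1))
print(loss.item())
```

Swapping `order` between "VT" and "TV" only changes how the conditioning prefix is laid out, which is the kind of input-ordering choice the paper compares.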

Success Cases

Sample                       VTS TTS   VTTS VT-ordered   VTTS TV-ordered   VTTS TV-streaming
id04232_Ky7pkJ4URUs_00164    0.67      0.26              0.28              0.61
id04276_yZOt_4-ckww_00480    0.26      0.24              0.16              0.15
id07354_w6xR5-Xue9o_00463    0.17      0.50              0.28              0.13
id04094_vJbJ_JD8dIg_00448    0.20      0.18              0.20              0.40
id05124_uPPqcWVd-dg_00455    0.18      0.36              0.12              0.12
id06692_wSyTmuuX5zk_00498    0.10      0.15              0.14              0.50

Table 1: Samples are from the VoxCeleb2 dataset and are generated by Visatronic under different conditioning. AlignMetric is provided for each sample, measured in seconds; lower values indicate better synchronization between audio and video.

Failure Cases

Sample                       VTS TTS   VTTS VT-ordered   VTTS TV-ordered   VTTS TV-streaming
id02086_zG7Qbte1KIg_00490    0.36      0.23              0.22              0.35
id05124_ksBpd5sIcA4_00380    0.71      0.33              0.29              0.33
id07620_ynUjo99Gzbk_00481    0.60      0.40              0.28              0.41
id02086_a4Y4afR7XWo_00349    2.11      0.43              2.11              1.45
id04094_rqaQ-0QVXnE_00416    0.28      0.39              0.24              0.34
id08374_zAQvDHZR--g_00476    0.18      0.40              0.30              0.60

Table 2: Samples are from the VoxCeleb2 dataset and are generated by Visatronic under different conditioning. AlignMetric is provided for each sample, measured in seconds; lower values indicate better synchronization between audio and video.
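The page does not spell out how AlignMetric is computed, so the following is only a hypothetical sketch of a word-level synchronization score in that spirit: given per-word start times (in seconds) from a forced aligner for the reference and generated audio of the same transcript, average the absolute timing differences. The function name and input format are assumptions.

```python
from statistics import mean

def align_metric(ref_starts: list[float], gen_starts: list[float]) -> float:
    """Mean absolute difference (seconds) between per-word start times."""
    assert len(ref_starts) == len(gen_starts), "same transcript => same word count"
    return mean(abs(r - g) for r, g in zip(ref_starts, gen_starts))

# Toy example: generated speech lags the reference by roughly 0.2 s on average.
ref = [0.00, 0.35, 0.80, 1.20]
gen = [0.18, 0.55, 1.05, 1.38]
print(f"{align_metric(ref, gen):.2f} s")   # -> 0.20 s
```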

BibTeX

@article{gupta2024visatronic,
    title={Visatronic: A Multimodal Decoder-Only Model for Speech Synthesis},
    author={Gupta, Akshita and Likhomanenko, Tatiana and Yang, Karren and Bai, He and Aldeneh, Zakaria and Jaitly, Navdeep},
    journal={arXiv preprint arXiv:2411.17690},
    year={2024}
}