2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Paper Detail

Paper ID: SPE-34.4
Paper Title: DISENTANGLED SPEAKER AND LANGUAGE REPRESENTATIONS USING MUTUAL INFORMATION MINIMIZATION AND DOMAIN ADAPTATION FOR CROSS-LINGUAL TTS
Authors: Detai Xin, University of Tokyo, Japan; Tatsuya Komatsu, LINE Corporation, Japan; Shinnosuke Takamichi, Hiroshi Saruwatari, University of Tokyo, Japan
Session: SPE-34: Speech Synthesis 6: Data Augmentation & Adaptation
Location: Gather.Town
Session Time: Thursday, 10 June, 13:00 - 13:45
Presentation Time: Thursday, 10 June, 13:00 - 13:45
Presentation: Poster
Topic: Speech Processing: [SPE-SYNT] Speech Synthesis and Generation
Abstract: We propose a method for obtaining disentangled speaker and language representations via mutual information minimization and domain adaptation for cross-lingual text-to-speech (TTS) synthesis. The proposed method extracts speaker and language embeddings from acoustic features using a speaker encoder and a language encoder. It then applies domain adaptation to the two embeddings to obtain a language-invariant speaker embedding and a speaker-invariant language embedding. To further disentangle the representations, the method also minimizes the mutual information between the two embeddings, removing entangled information within each embedding. Disentangled speaker and language representations are critical for cross-lingual TTS synthesis, since entangled representations make it difficult to maintain speaker identity when the language representation is changed and consequently degrade performance. We evaluate the proposed method on English and Japanese multi-speaker datasets with a total of 207 speakers. Experimental results demonstrate that the proposed method significantly improves the naturalness and speaker similarity of both intra-lingual and cross-lingual TTS synthesis. Furthermore, we show that the proposed method maintains speaker identity well across languages.
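The two ingredients named in the abstract, domain adaptation on each embedding and mutual information minimization between them, can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the gradient reversal layer (DANN-style domain-adversarial training) and the cross-covariance penalty standing in for a mutual information estimator are assumptions, as are all module names, dimensions, and the mean-pooled encoder architecture.

    # Hedged sketch, not the paper's code. Shows (1) adversarial classifiers
    # behind gradient reversal so the speaker embedding hides language (and
    # the language embedding hides speaker), and (2) a crude dependence
    # penalty between the two embeddings as a stand-in for MI minimization.
    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity on the forward pass; negates gradients on the backward pass."""
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_out):
            return -ctx.lam * grad_out, None

    def grad_reverse(x, lam=1.0):
        return GradReverse.apply(x, lam)

    class DisentangledEncoders(nn.Module):
        def __init__(self, feat_dim=80, emb_dim=128, n_speakers=207, n_langs=2):
            super().__init__()
            # Mean-pooled frame-level encoders over acoustic features.
            self.speaker_enc = nn.Sequential(
                nn.Linear(feat_dim, emb_dim), nn.ReLU(), nn.Linear(emb_dim, emb_dim))
            self.language_enc = nn.Sequential(
                nn.Linear(feat_dim, emb_dim), nn.ReLU(), nn.Linear(emb_dim, emb_dim))
            # Adversaries fed through gradient reversal: the language classifier
            # tries to read language from the speaker embedding; reversed
            # gradients push the speaker encoder to remove language cues.
            self.lang_adv = nn.Linear(emb_dim, n_langs)
            self.spk_adv = nn.Linear(emb_dim, n_speakers)

        def forward(self, feats, lam=1.0):
            # feats: (batch, time, feat_dim) acoustic features, e.g. mel spectrogram
            spk = self.speaker_enc(feats).mean(dim=1)    # (batch, emb_dim)
            lang = self.language_enc(feats).mean(dim=1)  # (batch, emb_dim)
            lang_logits_from_spk = self.lang_adv(grad_reverse(spk, lam))
            spk_logits_from_lang = self.spk_adv(grad_reverse(lang, lam))
            return spk, lang, lang_logits_from_spk, spk_logits_from_lang

    def dependence_penalty(spk, lang):
        """Squared cross-covariance between the two embeddings; an assumed
        proxy for the paper's mutual information minimization term."""
        spk = spk - spk.mean(dim=0, keepdim=True)
        lang = lang - lang.mean(dim=0, keepdim=True)
        cov = spk.t() @ lang / spk.size(0)
        return (cov ** 2).mean()

In training, the two adversarial cross-entropy losses and the dependence penalty would be added to the usual TTS reconstruction loss; the reversal coefficient lam controls how aggressively each embedding is scrubbed of the other factor.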