2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Technical Program

Paper Detail

Paper ID: CHLG-3.1
Paper Title: Investigating on Incorporating Pretrained and Learnable Speaker Representations for Multi-Speaker Multi-Style Text-to-Speech
Authors: Chung-Ming Chien, Jheng-Hao Lin, Chien-yu Huang, Po-chun Hsu, Hung-yi Lee, National Taiwan University, Taiwan
Session: CHLG-3: Multi-Speaker Multi-Style Voice Cloning Challenge (M2VoC)
Location: Zoom
Session Time: Monday, 07 June, 15:30 - 17:45
Presentation Time: Monday, 07 June, 15:30 - 17:45
Presentation: Poster
Topic: Grand Challenge: Multi-Speaker Multi-Style Voice Cloning Challenge (M2VoC)
Abstract: The few-shot multi-speaker multi-style voice cloning task is to synthesize utterances whose voice and speaking style are similar to those of a reference speaker, given only a few reference samples. In this work, we investigate different speaker representations and propose to integrate pretrained and learnable speaker representations. Among the different types of embeddings, the embedding pretrained by voice conversion achieves the best performance. The FastSpeech 2 model combined with both pretrained and learnable speaker representations shows strong generalization to few-shot speakers and achieved 2nd place in the one-shot track of the ICASSP 2021 M2VoC challenge.
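The abstract's central idea can be illustrated with a minimal sketch: a frozen embedding from a pretrained encoder (e.g., a voice-conversion model) is concatenated with a learnable per-speaker embedding and projected to the model width before conditioning a FastSpeech 2-style synthesizer. All dimensions, table sizes, and function names below are hypothetical placeholders, not the authors' actual implementation; random vectors stand in for real trained embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper).
D_PRE, D_LEARN, D_MODEL = 256, 64, 256
N_SPEAKERS = 10

# Stand-in for embeddings produced by a pretrained (frozen) speaker encoder,
# e.g. one trained for voice conversion.
pretrained_table = rng.standard_normal((N_SPEAKERS, D_PRE))

# Learnable per-speaker lookup table, trained jointly with the TTS model.
learnable_table = rng.standard_normal((N_SPEAKERS, D_LEARN)) * 0.01

# Projection mapping the concatenated representation to the model width.
W = rng.standard_normal((D_PRE + D_LEARN, D_MODEL)) / np.sqrt(D_PRE + D_LEARN)

def speaker_condition(speaker_id: int) -> np.ndarray:
    """Concatenate pretrained and learnable embeddings, then project.

    The resulting vector would be added to the encoder outputs of a
    FastSpeech 2-style model as the speaker-conditioning signal.
    """
    combined = np.concatenate([pretrained_table[speaker_id],
                               learnable_table[speaker_id]])
    return combined @ W

cond = speaker_condition(3)
print(cond.shape)  # (256,)
```

In a real system the pretrained table would be replaced by encoder outputs computed from the few reference samples, which is what lets the model generalize to speakers unseen during training.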