2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information


Paper Detail

Paper ID: SPE-33.2
Paper Title: BI-LEVEL STYLE AND PROSODY DECOUPLING MODELING FOR PERSONALIZED END-TO-END SPEECH SYNTHESIS
Authors: Ruibo Fu, Jianhua Tao, Zhengqi Wen, Jiangyan Yi, Tao Wang, Chunyu Qiang, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, China
Session: SPE-33: Speech Synthesis 5: Prosody & Style
Location: Gather.Town
Session Time: Thursday, 10 June, 13:00 - 13:45
Presentation Time: Thursday, 10 June, 13:00 - 13:45
Presentation: Poster
Topic: Speech Processing: [SPE-SYNT] Speech Synthesis and Generation
IEEE Xplore: Open Preview available in IEEE Xplore
Abstract: End-to-end frameworks can generate high-quality, high-similarity speech in the personalized speech synthesis task. However, generalization to out-of-domain texts remains challenging: limited target data leads to unacceptable errors and to poor prosody and similarity in the synthetic speech. In this paper, we present a bi-level function decoupling framework that enables separate modeling and control to address these problems. First, on the style representation modeling level, in contrast to conventional methods that use a single embedding to model all text-dependent discrepancies, the speaker embedding and the prosody embedding are modeled separately, based on the reference audio and the phonetic posteriorgram (PPG), via a multi-head attention mechanism. Second, on the model structure level, the decoder is factored into an average-net and an adaptation-net, so that duration/prosody control and speaker timbre imitation are designed in relatively separate areas. Experimental results on a Mandarin dataset show that the proposed methods improve robustness, naturalness, and similarity.
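The first level of the abstract's decoupling (pooling a reference-audio feature sequence into a speaker embedding and a PPG sequence into a prosody embedding with multi-head attention) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the feature dimension, number of heads, and learned query vectors are all placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(seq, queries):
    # seq: (T, d) frame-level features; queries: (h, d) learned head queries.
    # Each head attends over the T frames and produces one d-dim summary;
    # the h summaries are concatenated into a single (h*d,) embedding.
    scores = queries @ seq.T / np.sqrt(seq.shape[1])   # (h, T)
    weights = softmax(scores, axis=-1)                 # attention per head
    return (weights @ seq).reshape(-1)                 # (h*d,)

rng = np.random.default_rng(0)
d, h = 8, 4                                 # assumed feature dim / head count
ref_audio = rng.normal(size=(50, d))        # stand-in reference-audio frames
ppg = rng.normal(size=(50, d))              # stand-in PPG frames
spk_queries = rng.normal(size=(h, d))       # would be learned in training
pro_queries = rng.normal(size=(h, d))       # would be learned in training

# Separate pooling paths yield decoupled style representations.
speaker_emb = attention_pool(ref_audio, spk_queries)  # timbre-oriented
prosody_emb = attention_pool(ppg, pro_queries)        # prosody-oriented
```

Because the two embeddings are produced by separate attention heads over different inputs, each can be conditioned on or replaced independently, which is the property the bi-level framework exploits at the decoder's average-net/adaptation-net split.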