2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information


Paper Detail

Paper ID: SPE-49.5
Paper Title: HIGH-INTELLIGIBILITY SPEECH SYNTHESIS FOR DYSARTHRIC SPEAKERS WITH LPCNET-BASED TTS AND CYCLEVAE-BASED VC
Authors: Keisuke Matsubara, Kobe University, Japan; Takuma Okamoto, National Institute of Information and Communications Technology, Japan; Ryoichi Takashima, Tetsuya Takiguchi, Kobe University, Japan; Tomoki Toda, Nagoya University, Japan; Yoshinori Shiga, Hisashi Kawai, National Institute of Information and Communications Technology, Japan
Session: SPE-49: Speech Synthesis 7: General Topics
Location: Gather.Town
Session Time: Friday, 11 June, 11:30 - 12:15
Presentation Time: Friday, 11 June, 11:30 - 12:15
Presentation: Poster
Topic: Speech Processing: [SPE-SYNT] Speech Synthesis and Generation
IEEE Xplore: Open Preview available in IEEE Xplore
Abstract: This paper presents a high-intelligibility speech synthesis method for persons with dysarthria caused by athetoid cerebral palsy. The muscular control of such speakers is unstable because of their athetoid symptoms, and their pronunciation is unclear, which makes it difficult for them to communicate. In this paper, we present a method for generating highly intelligible speech that preserves the individuality of dysarthric speakers by combining Transformer-TTS, CycleVAE-VC, and an LPCNet vocoder. Rather than repairing prosody from the dysarthric speech, this method transfers the dysarthric speaker’s individuality to the speech of a healthy person generated by TTS synthesis. This task is both important and challenging. From the results of our evaluation experiments, we confirmed that the proposed method can partially transfer the individuality of the target dysarthric speaker while maintaining the intelligibility of the source speech.
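The abstract describes a three-stage pipeline: Transformer-TTS first synthesizes intelligible speech in a healthy reference speaker's voice, CycleVAE-based VC then transfers the dysarthric target speaker's individuality onto those features, and an LPCNet vocoder finally generates the waveform. The stage ordering can be sketched as follows; all function names and data shapes here are illustrative placeholders assumed for the sketch, not the authors' implementation:

```python
# Minimal sketch of the pipeline ordering described in the abstract.
# Each stage is a placeholder stub; a real system would use trained
# Transformer-TTS, CycleVAE-VC, and LPCNet models.

def transformer_tts(text):
    # Placeholder: predict acoustic features from text in the voice of
    # a healthy reference speaker (the intelligible "source" speech).
    return {"speaker": "healthy_reference", "features": f"feat({text})"}

def cyclevae_vc(features, target_speaker):
    # Placeholder: swap speaker identity toward the dysarthric target
    # while keeping the linguistic content (and thus intelligibility).
    converted = dict(features)
    converted["speaker"] = target_speaker
    return converted

def lpcnet_vocoder(features):
    # Placeholder: synthesize a waveform from the converted features.
    return f"waveform[{features['speaker']}:{features['features']}]"

def synthesize(text, target_speaker="dysarthric_target"):
    feats = transformer_tts(text)               # 1) intelligible TTS speech
    feats = cyclevae_vc(feats, target_speaker)  # 2) transfer individuality
    return lpcnet_vocoder(feats)                # 3) vocode to waveform

print(synthesize("hello"))
```

The key design point the sketch captures is the direction of the transfer: rather than repairing the dysarthric speech, the intelligible TTS output is the starting point and only the speaker individuality is converted.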