2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information



Paper Detail

Paper ID: AUD-26.3
Paper Title: Phoneme-Based Distribution Regularization for Speech Enhancement
Authors: Yajing Liu, USTC, China; Xiulian Peng, Microsoft Research Asia, China; Zhiwei Xiong, USTC, China; Yan Lu, Microsoft Research Asia, China
Session: AUD-26: Signal Enhancement and Restoration 3: Signal Enhancement
Location: Gather.Town
Session Time: Thursday, 10 June, 16:30 - 17:15
Presentation Time: Thursday, 10 June, 16:30 - 17:15
Presentation: Poster
Topic: Audio and Acoustic Signal Processing: [AUD-SEN] Signal Enhancement and Restoration
IEEE Xplore Open Preview: Available in IEEE Xplore
Abstract: Existing speech enhancement methods mainly focus on the signal-level similarity between the enhanced speech and the target; they pay little attention to understanding the whole utterance and its context. As a result, the recognizability and coherence of the enhanced speech are impaired. To address this problem, we propose a phoneme-based distribution regularization (PbDr) for speech enhancement, which incorporates context information into the speech enhancement network in a conditional manner to achieve better perceptual quality and recognizability. As different phonemes lead to different feature distributions in frequency, we propose to learn a parameter pair, i.e. a scale and a bias, from a phoneme classification vector to modulate the speech enhancement network. The modulation parameter pair includes not only a frame-wise condition but also a frequency-wise condition, which effectively maps features to phoneme-related distributions. In this way, we explicitly regularize the speech enhancement features with recognition vectors, and the semantic information effectively helps to improve the recognizability and coherence of the enhanced speech. Experiments on public datasets demonstrate the effectiveness of PbDr in achieving both better perceptual quality and better recognizability.
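
The conditioning described in the abstract, a scale and bias pair predicted from a phoneme classification vector and applied along both time frames and frequency bins, resembles feature-wise modulation. Below is a minimal, hypothetical PyTorch sketch of such a layer; the module and parameter names (PhonemeModulation, num_phonemes, channels, freq_bins) are illustrative assumptions and not the authors' implementation.

```python
# Hypothetical sketch of phoneme-conditioned feature modulation (not the authors' code).
# A per-frame phoneme classification vector is mapped to a (scale, bias) pair
# that conditions the enhancement features frame-wise and frequency-wise.
import torch
import torch.nn as nn


class PhonemeModulation(nn.Module):
    """Scale/bias conditioning predicted from phoneme posteriors (assumed layout)."""

    def __init__(self, num_phonemes: int, channels: int, freq_bins: int):
        super().__init__()
        # One (scale, bias) pair per channel and frequency bin, predicted per frame.
        self.to_scale = nn.Linear(num_phonemes, channels * freq_bins)
        self.to_bias = nn.Linear(num_phonemes, channels * freq_bins)

    def forward(self, feats: torch.Tensor, phoneme_probs: torch.Tensor) -> torch.Tensor:
        # feats:         (batch, channels, freq_bins, frames) enhancement features
        # phoneme_probs: (batch, frames, num_phonemes) phoneme classification vectors
        b, c, f, t = feats.shape
        scale = self.to_scale(phoneme_probs).view(b, t, c, f).permute(0, 2, 3, 1)
        bias = self.to_bias(phoneme_probs).view(b, t, c, f).permute(0, 2, 3, 1)
        # Modulate features toward phoneme-related distributions.
        return feats * (1.0 + scale) + bias


# Usage example with random tensors.
if __name__ == "__main__":
    mod = PhonemeModulation(num_phonemes=40, channels=16, freq_bins=64)
    feats = torch.randn(2, 16, 64, 100)                      # enhancement features
    probs = torch.softmax(torch.randn(2, 100, 40), dim=-1)   # phoneme posteriors
    out = mod(feats, probs)
    print(out.shape)  # torch.Size([2, 16, 64, 100])
```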