2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information



Paper Detail

Paper ID: MLSP-40.3
Paper Title: CONTRASTIVE SEMI-SUPERVISED LEARNING FOR ASR
Authors: Alex Xiao, Christian Fuegen, Abdelrahman Mohamed, Facebook, United States
Session: MLSP-40: Contrastive Learning
Location: Gather.Town
Session Time: Friday, 11 June, 11:30 - 12:15
Presentation Time: Friday, 11 June, 11:30 - 12:15
Presentation: Poster
Topic: Machine Learning for Signal Processing: [MLR-SSUP] Self-supervised and semi-supervised learning
Abstract: Pseudo-labeling is the most widely adopted method for pre-training automatic speech recognition (ASR) models. However, its performance suffers as the quality of the supervised teacher model degrades. Inspired by the successes of contrastive representation learning for both computer vision and speech applications, and more recently for supervised learning of visual objects [1], we propose Contrastive Semi-supervised Learning (CSL). CSL eschews directly predicting teacher-generated pseudo-labels in favor of utilizing them to select positive and negative examples. In the challenging task of transcribing public social media videos, CSL reduces the WER by 8% compared to standard Cross-Entropy pseudo-labeling (CE-PL) when 10 hours of supervised data are used to annotate 75,000 hours of videos. The WER reduction jumps to 19% under the ultra-low-resource condition of using 1 hour of labels for teacher supervision. In out-of-domain conditions, CSL generalizes much better, showing up to 17% WER reduction compared to the strongest CE-PL pre-trained model.
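The abstract's core idea, using teacher pseudo-labels to choose positive and negative pairs rather than as direct prediction targets, resembles a supervised-contrastive loss. A minimal sketch of that loss is below; this is an illustration of the general technique, not the paper's exact frame-level formulation, and the function name, batch construction, and `temperature` default are assumptions.

```python
import numpy as np

def pseudo_label_contrastive_loss(embeddings, pseudo_labels, temperature=0.1):
    """Supervised-contrastive-style loss: examples sharing a teacher
    pseudo-label are positives; all other examples are negatives.
    Hypothetical simplification of the approach the abstract describes."""
    # L2-normalize so dot products become cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature  # pairwise similarity logits
    n = len(pseudo_labels)
    loss, count = 0.0, 0
    for i in range(n):
        # Positives: other examples with the same pseudo-label as example i
        pos = [j for j in range(n) if j != i and pseudo_labels[j] == pseudo_labels[i]]
        if not pos:
            continue
        # Denominator: log-sum-exp over all other examples (numerically stable)
        others = [j for j in range(n) if j != i]
        logits = sim[i, others]
        log_denom = np.log(np.sum(np.exp(logits - logits.max()))) + logits.max()
        for j in pos:
            loss += log_denom - sim[i, j]  # -log p(positive j | anchor i)
            count += 1
    return loss / max(count, 1)
```

With a degraded teacher, mislabeling an example only swaps some positives and negatives rather than injecting a hard wrong target, which is one intuition for why this objective can be more robust than cross-entropy pseudo-labeling.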