2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Paper Detail

Paper ID: HLT-16.2
Paper Title: MAKING PUNCTUATION RESTORATION ROBUST AND FAST WITH MULTI-TASK LEARNING AND KNOWLEDGE DISTILLATION
Authors: Michael Hentschel, Emiru Tsunoo, Takao Okuda (Sony Corporation, Japan)
Session: HLT-16: Applications in Natural Language
Location: Gather.Town
Session Time: Thursday, 10 June, 16:30 - 17:15
Presentation Time: Thursday, 10 June, 16:30 - 17:15
Presentation: Poster
Topic: Human Language Technology: [HLT-MLMD] Machine Learning Methods for Language
Abstract: In punctuation restoration, we try to recover the missing punctuation from automatic speech recognition output to improve understandability. Currently, large pre-trained transformers such as BERT set the benchmark on this task, but these models have two main drawbacks. First, the pre-training data does not match the output data from speech recognition, which contains errors. Second, the large number of model parameters increases inference time. To address the former, we use a multi-task learning framework with ELECTRA, a recently proposed improvement on BERT, which has a generator-discriminator structure. The generator allows us to inject errors into the training data and, as our experiments show, this improves robustness against speech recognition errors during inference. To address the latter, we investigate knowledge distillation and parameter pruning of ELECTRA. In our experiments on the IWSLT 2012 benchmark data, a model with less than 11% of the size of BERT achieved better performance with an 82% faster inference time.
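The paper itself is not reproduced here, but the knowledge distillation mentioned in the abstract follows a standard pattern: a small student model is trained to match both the gold punctuation labels and the softened output distribution of a large teacher. The following PyTorch sketch illustrates that generic objective only; the function name, the temperature, and the alpha weighting are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # student_logits, teacher_logits: (batch, seq_len, num_punct_classes)
    # labels: (batch, seq_len) gold punctuation class per token
    # temperature and alpha are illustrative hyperparameters, not values from the paper
    num_classes = student_logits.size(-1)

    # Soft targets: match the student's softened distribution to the teacher's
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    kd_loss = F.kl_div(student_log_probs, teacher_probs,
                       reduction="batchmean") * (temperature ** 2)

    # Hard targets: standard cross-entropy against the gold punctuation labels
    ce_loss = F.cross_entropy(student_logits.view(-1, num_classes),
                              labels.view(-1))

    return alpha * kd_loss + (1.0 - alpha) * ce_loss

# Example usage with random tensors (shapes only; a real setup would use
# teacher and student outputs over speech recognition transcripts)
if __name__ == "__main__":
    B, T, C = 2, 8, 4  # batch, sequence length, punctuation classes
    student = torch.randn(B, T, C)
    teacher = torch.randn(B, T, C)
    gold = torch.randint(0, C, (B, T))
    print(distillation_loss(student, teacher, gold))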