2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information


Paper Detail

Paper ID: SPE-22.2
Paper Title: MIXTURE OF INFORMED EXPERTS FOR MULTILINGUAL SPEECH RECOGNITION
Authors: Neeraj Gaur, Brian Farris, Parisa Haghani, Isabel Leal, Pedro J. Moreno, Manasa Prasad, Bhuvana Ramabhadran, Yun Zhu, Google Inc., United States
Session: SPE-22: Speech Recognition 8: Multilingual Speech Recognition
Location: Gather.Town
Session Time: Wednesday, 09 June, 15:30 - 16:15
Presentation Time: Wednesday, 09 June, 15:30 - 16:15
Presentation: Poster
Topic: Speech Processing: [SPE-MULT] Multilingual Recognition and Identification
IEEE Xplore: Open Preview available in IEEE Xplore
Abstract: When trained on related or low-resource languages, multilingual speech recognition models often outperform their monolingual counterparts. However, these models can suffer from loss in performance for high-resource or unrelated languages. We investigate the use of a mixture-of-experts approach to assign per-language parameters in the model to increase network capacity in a structured fashion. We introduce a novel variant of this approach, 'informed experts', which attempts to tackle inter-task conflicts by eliminating gradients from other tasks in these task-specific parameters. We conduct experiments on a real-world task with English, French and four dialects of Arabic to show the effectiveness of our approach. Our model matches or outperforms the monolingual models for almost all languages, with gains of as much as 31% relative. Our model also outperforms the baseline multilingual model for all languages, with gains as large as 9%.
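The abstract's core idea, as described, is that each language gets its own expert parameters, and gradients from other languages are eliminated in those task-specific parameters while shared parameters are updated by every batch. A minimal sketch of that gradient-routing scheme is below; the parameter shapes, learning rate, and update function are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Hypothetical sketch of "informed experts": shared parameters
# are updated by every batch, while each language-specific
# expert receives gradients only from batches of its own
# language (gradients from other tasks are dropped).

LANGS = ["en", "fr", "ar"]

shared = np.zeros(4)                       # shared parameters
experts = {l: np.zeros(4) for l in LANGS}  # per-language expert parameters

def train_step(lang, grad_shared, grad_expert, lr=0.1):
    """One SGD step: shared params always move; only the
    matching language's expert is updated."""
    global shared
    shared = shared - lr * grad_shared
    # "Informed" routing: other languages contribute no gradient
    # to this expert's parameters.
    experts[lang] = experts[lang] - lr * grad_expert

g = np.ones(4)
train_step("en", g, g)       # English batch
train_step("fr", g, 2 * g)   # French batch
# After these two steps, the Arabic expert is untouched,
# while the shared parameters reflect both batches.
```

This sketch only illustrates the gradient-masking idea; in the paper the experts are layers inside a multilingual speech recognition network, not standalone vectors.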