2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Paper Detail

Paper ID: HLT-1.5
Paper Title: Improving Entity Recall in Automatic Speech Recognition with Neural Embeddings
Authors: Christopher Li, Pat Rondon, Diamantino Caseiro, Leonid Velikovich, Xavier Velez, Petar Aleksic (Google, United States)
Session: HLT-1: Language Modeling 1: Fusion and Training for End-to-End ASR
Location: Gather.Town
Session Time: Tuesday, 08 June, 13:00 - 13:45
Presentation Time: Tuesday, 08 June, 13:00 - 13:45
Presentation: Poster
Topic: Speech Processing: [SPE-GASR] General Topics in Speech Recognition
Abstract: Automatic speech recognition (ASR) systems often have difficulty recognizing long-tail entities such as contact names and local restaurant names, which usually do not occur, or occur infrequently, in the system’s training data. In this work, we present a method which uses learned text embeddings and nearest neighbor retrieval within a large database of entity embeddings to correct misrecognitions. Our text embeddings are produced by a neural network trained so that the embeddings of acoustically confusable phrases have low cosine distances. Given the embedding of the text of a potential entity misrecognition and a precomputed database containing entities and their corresponding embeddings, we use fast, scalable nearest neighbor retrieval algorithms to find candidate corrections within the database. The inserted candidates are then scored using a function of the original text’s cost in the lattice and the distance between the embedding of the original text and the embedding of the candidate correction. Using this lattice augmentation technique, we demonstrate a 46% reduction in word error rate (WER) and 46% reduction in oracle word error rate (OWER) on evaluation sets with popular film queries.
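The abstract outlines a pipeline with three main steps: embed the text of a suspected misrecognition, retrieve nearby entity embeddings from a precomputed database, and score each retrieved candidate using the original hypothesis cost in the lattice together with the embedding distance. Below is a minimal Python sketch of the retrieval and scoring steps under stated assumptions: the embedding model is stubbed out with random vectors, brute-force cosine search stands in for the fast, scalable nearest-neighbor retrieval the paper describes, and the linear scoring function with weights alpha and beta is a placeholder, not the authors' implementation.

import numpy as np

def cosine_distance(query: np.ndarray, database: np.ndarray) -> np.ndarray:
    """Cosine distance between one query vector and each row of `database`."""
    q = query / np.linalg.norm(query)
    d = database / np.linalg.norm(database, axis=1, keepdims=True)
    return 1.0 - d @ q

def retrieve_candidates(query_emb, entity_embs, entity_names, k=5):
    """Brute-force nearest-neighbor search over the entity database.
    A production system would use a scalable approximate-NN index instead."""
    dists = cosine_distance(query_emb, entity_embs)
    top = np.argsort(dists)[:k]
    return [(entity_names[i], float(dists[i])) for i in top]

def score_candidate(original_lattice_cost, embedding_distance, alpha=1.0, beta=1.0):
    """Combine the original text's lattice cost with the embedding distance.
    The linear form and the weights alpha/beta are illustrative assumptions."""
    return alpha * original_lattice_cost + beta * embedding_distance

# Example usage with stand-in embeddings (a trained confusability model
# would supply these in practice).
rng = np.random.default_rng(0)
entity_names = ["The Matrix", "The Mask", "The Martian"]
entity_embs = rng.normal(size=(len(entity_names), 64))
query_emb = rng.normal(size=64)  # embedding of a potential misrecognition

for name, dist in retrieve_candidates(query_emb, entity_embs, entity_names, k=2):
    print(name, score_candidate(original_lattice_cost=4.2, embedding_distance=dist))

In the paper, candidates retrieved this way are inserted into the recognition lattice and rescored alongside the original hypotheses; the sketch stops at producing a per-candidate score.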