2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information


Paper Detail

Paper ID: HLT-18.5
Paper Title: Enhancing Deep Paraphrase Identification via Leveraging Word Alignment Information
Authors: Boxin Li, Tingwen Liu, Institute of Information Engineering, Chinese Academy of Sciences, China; Bin Wang, Xiaomi AI Lab, China; Lihong Wang, National Computer Network Emergency Response Technical Team Coordination Center of China, China
Session: HLT-18: Language Understanding 6: Summarization and Comprehension
Location: Gather.Town
Session Time: Friday, 11 June, 13:00 - 13:45
Presentation Time: Friday, 11 June, 13:00 - 13:45
Presentation: Poster
Topic: Human Language Technology: [HLT-MLMD] Machine Learning Methods for Language
IEEE Xplore: Open Preview available
Abstract: Recent deep learning based methods have achieved impressive performance on paraphrase identification (PI), a fundamental NLP task that judges whether two sentences are semantically equivalent. However, their success relies heavily on massive labeled samples, which are time-consuming and expensive to obtain. To alleviate this problem, this study explores the effect of word alignment information (WAI), extracted by existing monolingual alignment tools, on deep PI baseline models. Apart from directly encoding WAI into fixed-size embeddings, we propose a novel auxiliary task so that the baselines can be pre-trained on a large amount of unlabeled in-domain data. Moreover, the proposed auxiliary task can also be trained jointly with the baselines, eliminating the overhead of preprocessing WAI at test time. Experimental results verify that our methods significantly outperform the deep PI baseline model.
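The abstract's core idea of feeding word alignment information (WAI) into a PI model can be illustrated with a minimal sketch. The function name, the binary-indicator encoding, and the example alignments below are illustrative assumptions, not the paper's actual method; the paper's encoding into fixed-size embeddings is not specified in the abstract.

```python
# Hypothetical sketch: convert word-alignment pairs produced by an
# external monolingual aligner into per-token binary indicator
# features. A PI model could embed these indicators and add them to
# its token embeddings. This is NOT the paper's exact encoding.

def alignment_features(len_a, len_b, alignments):
    """For a sentence pair with len_a and len_b tokens, mark each
    token 1 if the aligner linked it to some token in the other
    sentence, else 0. `alignments` is a list of (i, j) index pairs."""
    flags_a = [0] * len_a
    flags_b = [0] * len_b
    for i, j in alignments:  # i indexes sentence A, j indexes sentence B
        flags_a[i] = 1
        flags_b[j] = 1
    return flags_a, flags_b

# Example: "the cat sleeps" vs. "a cat naps"; suppose the aligner
# links cat-cat and sleeps-naps (token indices 1-1 and 2-2).
feats_a, feats_b = alignment_features(3, 3, [(1, 1), (2, 2)])
print(feats_a, feats_b)  # [0, 1, 1] [0, 1, 1]
```

In a real model these indicators would typically index an embedding table whose vectors are summed with the word embeddings, which is one plausible reading of "encoding WAI into fixed-size embeddings".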