2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Paper Detail

Paper ID: HLT-11.2
Paper Title: BOOSTING LOW-RESOURCE INTENT DETECTION WITH IN-SCOPE PROTOTYPICAL NETWORKS
Authors: Hongzhan Lin, Yuanmeng Yan, Guang Chen (Beijing University of Posts and Telecommunications, China)
Session: HLT-11: Language Understanding 3: Speech Understanding - General Topics
Location: Gather.Town
Session Time: Thursday, 10 June, 13:00 - 13:45
Presentation Time: Thursday, 10 June, 13:00 - 13:45
Presentation: Poster
Topic: Human Language Technology: [HLT-UNDE] Spoken Language Understanding and Computational Semantics
Abstract: Identifying user intentions helps improve the response quality of task-oriented dialogue systems. Using only limited labeled in-domain (ID) examples for zero-shot unknown intent detection and few-shot ID classification is a particularly challenging task in spoken language understanding. Existing methods rely heavily on multi-domain datasets containing large-scale independent source domains for meta-training. In this paper, we propose universal In-scope Prototypical Networks for low-resource intent detection that generalize to dialogue meta-training datasets lacking widely varying domains, focusing on the scope of episodic intent classes to construct meta-tasks dynamically. We also introduce a loss with a margin principle to better distinguish samples. Experiments on two benchmark datasets show that our model consistently outperforms baselines on zero-shot unknown intent detection without degrading its competitive performance on few-shot ID classification.
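
The abstract describes episodic prototypical networks built over in-scope intent classes, combined with a margin-style loss for separating unknown intents. The sketch below illustrates that general idea only; it is not the authors' implementation, and the embedding dimension, margin value, function names, and random "embeddings" are all illustrative assumptions.

```python
# Minimal sketch of a prototypical-network episode with a margin-based
# distance term for out-of-scope queries. Illustrative only; margin,
# dimensions, and names are assumptions, not the paper's actual setup.
import torch
import torch.nn.functional as F


def prototypes(support_emb: torch.Tensor, support_labels: torch.Tensor,
               n_classes: int) -> torch.Tensor:
    """Mean support embedding per episodic (in-scope) intent class: [n_classes, dim]."""
    return torch.stack([support_emb[support_labels == c].mean(dim=0)
                        for c in range(n_classes)])


def episode_loss(query_emb, query_labels, protos, ood_emb=None, margin=2.0):
    """Cross-entropy over negative squared distances for in-scope queries,
    plus a hinge term pushing out-of-scope queries at least `margin`
    away from every prototype (an illustrative margin principle)."""
    dists = torch.cdist(query_emb, protos) ** 2            # [n_query, n_classes]
    ce = F.cross_entropy(-dists, query_labels)             # few-shot ID classification
    if ood_emb is None:
        return ce
    ood_dists = torch.cdist(ood_emb, protos) ** 2          # [n_ood, n_classes]
    hinge = F.relu(margin - ood_dists.min(dim=1).values).mean()
    return ce + hinge


# Toy usage: 3 intent classes, 5 support + 5 query examples per class,
# 4 out-of-scope queries, random 64-dim "sentence embeddings".
dim, n_classes, k = 64, 3, 5
support_emb = torch.randn(n_classes * k, dim)
support_labels = torch.arange(n_classes).repeat_interleave(k)
query_emb = torch.randn(n_classes * k, dim)
query_labels = torch.arange(n_classes).repeat_interleave(k)
ood_emb = torch.randn(4, dim)

protos = prototypes(support_emb, support_labels, n_classes)
print(episode_loss(query_emb, query_labels, protos, ood_emb).item())
```

In the full method the embeddings would come from an encoder trained episodically over in-scope intent classes; here random tensors merely stand in to show the shapes and how the distance-based classification and margin term combine.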