2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Paper Detail

Paper ID: HLT-14.5
Paper Title: Label-aware Text Representation for Multi-label Text Classification
Authors: Hao Guo, Xiangyang Li, Lei Zhang, Jia Liu, Wei Chen, Alibaba Group, China
Session: HLT-14: Language Representations
Location: Gather.Town
Session Time: Thursday, 10 June, 14:00 - 14:45
Presentation Time: Thursday, 10 June, 14:00 - 14:45
Presentation: Poster
Topic: Human Language Technology: [HLT-MLMD] Machine Learning Methods for Language
Abstract: Multi-label text classification (MLTC) is an important task in natural language processing (NLP) that appeals to researchers in both academia and industry. However, few studies have examined the relations among labels, and most existing methods neglect the semantic information between labels and words. In this paper, we propose a label-aware network that captures both label correlation and text representation. A heterogeneous graph is built from words and labels, and label representations are learned with metapath2vec, since nearby labels or words in the graph share similar relations and the graph structure further benefits label representation. Because each part of the text contributes differently to label inference, bidirectional attention flow is exploited to build a label-aware text representation in two directions: from text to label and from label to text. Experimental evaluations show that the proposed method outperforms various baselines on both offline benchmarks and real-world online systems.
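
The paper's exact architecture is not given on this page, but the attention scheme described in the abstract can be sketched. Below is a minimal NumPy illustration of BiDAF-style bidirectional attention between token states H and label embeddings L (which, per the abstract, could come from metapath2vec on the word-label graph). The function name, the fusion layout, and the pooling choices are assumptions for illustration, not the authors' implementation.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def bidirectional_label_attention(H, L):
    # H: (T, d) token representations of a text with T tokens
    # L: (C, d) label embeddings (e.g. learned on a word-label heterogeneous graph)
    # Returns a (T, 4d) label-aware text representation combining both
    # attention directions (text-to-label and label-to-text).
    S = H @ L.T                                 # (T, C) token-label similarity
    a_t2l = softmax(S, axis=1)                  # text-to-label: attend over labels per token
    U = a_t2l @ L                               # (T, d) label context for each token
    b = softmax(S.max(axis=1))                  # label-to-text: weight tokens by strongest label signal
    h_tilde = (b[:, None] * H).sum(axis=0)      # (d,) attended text summary
    H_tilde = np.tile(h_tilde, (H.shape[0], 1)) # broadcast summary back to every token
    return np.concatenate([H, U, H * U, H * H_tilde], axis=1)  # (T, 4d) fused representation

# Toy usage with random embeddings (T=12 tokens, C=5 labels, d=64)
rng = np.random.default_rng(0)
G = bidirectional_label_attention(rng.standard_normal((12, 64)),
                                  rng.standard_normal((5, 64)))
print(G.shape)  # (12, 256)

The concatenation [H, U, H*U, H*H_tilde] mirrors the standard BiDAF fusion; in a full model the resulting label-aware token states would feed a classifier with one sigmoid output per label for the multi-label decision.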