2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information


Paper Detail

Paper ID: IVMSP-9.3
Paper Title: KAN: KNOWLEDGE-AUGMENTED NETWORKS FOR FEW-SHOT LEARNING
Authors: Zeyang Zhu, Xin Lin, East China Normal University, China
Session: IVMSP-9: Zero and Few-Shot Learning
Location: Gather.Town
Session Time: Wednesday, 09 June, 13:00 - 13:45
Presentation Time: Wednesday, 09 June, 13:00 - 13:45
Presentation: Poster
Topic: Image, Video, and Multidimensional Signal Processing: [IVTEC] Image & Video Processing Techniques
IEEE Xplore Open Preview: available in IEEE Xplore
Abstract: The few-shot learning task aims to build a model that can quickly learn new concepts from only a few examples. Current approaches to learning new categories from a few images, or even a single image, rely solely on the visual modality. However, it is difficult to learn representative features of new categories from only a few images, because some categories are visually similar. Moreover, owing to variations in viewpoint and luminosity, and because individuals of the same species sometimes look markedly different from one another, such models cannot learn an exact representation of each class. Considering that semantic information can enhance understanding when visual information is limited, we propose Knowledge-Augmented Networks (KAN), which combine visual features with semantic information extracted from a knowledge graph to represent each class. We demonstrate the effectiveness of our method on standard few-shot learning tasks and further observe that, with the augmented semantic information from the knowledge graph, KAN learns more disentangled representations. Experiments show that our model outperforms state-of-the-art methods.
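The abstract's core idea, combining per-class visual features with semantic embeddings derived from a knowledge graph, can be illustrated with a minimal prototypical-style sketch. This is NOT the paper's actual architecture: the concatenation fusion, the dimensions, and the random placeholder embeddings below are all assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: visual feature size, semantic (knowledge-graph)
# embedding size, number of novel classes, and shots per class.
D_VIS, D_SEM, N_CLASSES, N_SHOT = 64, 32, 5, 1

def fuse(visual_proto, semantic_emb):
    """Fuse a visual class prototype with a semantic embedding by simple
    concatenation -- one plausible fusion scheme, not necessarily KAN's."""
    return np.concatenate([visual_proto, semantic_emb])

# Support set: N_SHOT visual feature vectors per class, averaged into
# per-class visual prototypes (as in prototypical networks).
support = rng.normal(size=(N_CLASSES, N_SHOT, D_VIS))
visual_protos = support.mean(axis=1)               # (N_CLASSES, D_VIS)

# Placeholder semantic embeddings, standing in for features extracted
# from a knowledge graph for each class name.
semantic = rng.normal(size=(N_CLASSES, D_SEM))

# Knowledge-augmented class representations in the fused space.
class_reps = np.stack([fuse(v, s) for v, s in zip(visual_protos, semantic)])

def classify(query_vis, query_sem):
    """Nearest-prototype classification in the fused space."""
    q = fuse(query_vis, query_sem)
    dists = np.linalg.norm(class_reps - q, axis=1)
    return int(np.argmin(dists))

# A query drawn near class 2's visual prototype, paired with class 2's
# semantic embedding, lands nearest to class 2's fused representation.
query = visual_protos[2] + 0.01 * rng.normal(size=D_VIS)
print(classify(query, semantic[2]))  # → 2
```

The sketch shows why augmenting limited visual evidence with class-level semantic information can help: when two classes have similar visual prototypes, their knowledge-graph embeddings can still separate them in the fused space.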