2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information


Paper Detail

Paper ID: IVMSP-9.2
Paper Title: REPRESENTATIVE LOCAL FEATURE MINING FOR FEW-SHOT LEARNING
Authors: Kun Yan, Peking University, China; Lingbo Liu, Sun Yat-Sen University, China; Jun Hou, SenseTime, China; Ping Wang, Peking University, China
Session: IVMSP-9: Zero and Few-Shot Learning
Location: Gather.Town
Session Time: Wednesday, 09 June, 13:00 - 13:45
Presentation Time: Wednesday, 09 June, 13:00 - 13:45
Presentation: Poster
Topic: Image, Video, and Multidimensional Signal Processing: [IVTEC] Image & Video Processing Techniques
Abstract: Few-shot learning aims to recognize images of unseen classes from only a few training examples. While deep learning has brought great progress, most metric-based approaches rely on similarity measures computed over global feature representations of images, which are sensitive to background factors because training data are scarce. Given this, we propose a novel method that selects representative local features to facilitate few-shot learning. Specifically, we propose a "task-specific guided" strategy to mine local features that are task-specific and discriminative. For each task, we first mine representative local features from the labeled images via a loss-guided mechanism. These local features then guide a classifier to mine representative local features from the unlabeled images. In this way, task-specific representative local features can be selected for better classification. We show empirically that our method effectively alleviates the negative effect introduced by background factors, and extensive experiments on two few-shot benchmarks demonstrate its effectiveness.
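
The two-stage mining the abstract describes can be made concrete with a short sketch. The following is a minimal, hypothetical illustration, not the authors' implementation: it assumes a backbone that yields C x H x W feature maps, treats each spatial location as a local descriptor, keeps the k lowest-loss locations of each labeled (support) image under a per-location cross-entropy against class prototypes (the "loss-guided mechanism"), and then lets those mined support descriptors guide the selection of local features on unlabeled (query) images. All names (local_descriptors, mine_support_locals, classify_query, k) and the cosine-style scoring are assumptions for illustration.

    import torch
    import torch.nn.functional as F

    def local_descriptors(feat_map):
        # (C, H, W) -> (H*W, C): one L2-normalized descriptor per spatial location.
        c, h, w = feat_map.shape
        return F.normalize(feat_map.view(c, h * w).t(), dim=1)

    def mine_support_locals(support_feats, support_labels, prototypes, k=5):
        # Loss-guided mining on labeled images: for each support image, keep the
        # k local descriptors whose cross-entropy against the class prototypes
        # is lowest, i.e. the locations most consistent with the true class.
        per_class = {}
        for feat, y in zip(support_feats, support_labels):
            d = local_descriptors(feat)                      # (HW, C)
            logits = d @ prototypes.t()                      # (HW, n_way)
            target = torch.full((d.size(0),), int(y), dtype=torch.long)
            loss = F.cross_entropy(logits, target, reduction="none")
            idx = loss.topk(k, largest=False).indices        # k most class-consistent
            per_class.setdefault(int(y), []).append(d[idx])
        return {y: torch.cat(ds) for y, ds in per_class.items()}

    def classify_query(query_feat, class_locals, k=5):
        # The mined support locals guide selection on the unlabeled image:
        # score each query location by its best match within every class, keep
        # the k most confident locations, and average their class similarities.
        d = local_descriptors(query_feat)                    # (HW, C)
        sims = torch.stack(
            [(d @ locs.t()).max(dim=1).values for locs in class_locals.values()],
            dim=1)                                           # (HW, n_way)
        idx = sims.max(dim=1).values.topk(k).indices
        return sims[idx].mean(dim=0)                         # score per class

A toy 5-way 1-shot episode shows how the pieces fit together; here the prototypes are random stand-ins for whatever class representations (e.g. averaged global features) the method would actually use:

    C, H, W, n_way = 64, 5, 5, 5
    support = [torch.randn(C, H, W) for _ in range(n_way)]
    labels = list(range(n_way))
    protos = F.normalize(torch.randn(n_way, C), dim=1)
    locals_by_class = mine_support_locals(support, labels, protos, k=5)
    scores = classify_query(torch.randn(C, H, W), locals_by_class, k=5)
    pred = list(locals_by_class)[scores.argmax().item()]

Because the selection is redone per episode against that episode's prototypes, the kept locations are task-specific, which is the intuition behind suppressing background regions that global pooling would otherwise mix in.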