2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information



Paper Detail

Paper ID: MLSP-1.1
Paper Title: META-LEARNING WITH ATTENTION FOR IMPROVED FEW-SHOT LEARNING
Authors: Zejiang Hou, Princeton University, United States; Anwar Walid, Nokia Bell Labs, United States; Sun-Yuan Kung, Princeton University, United States
Session: MLSP-1: Deep Learning Training Methods 1
Location: Gather.Town
Session Time: Tuesday, 08 June, 13:00 - 13:45
Presentation Time: Tuesday, 08 June, 13:00 - 13:45
Presentation: Poster
Topic: Machine Learning for Signal Processing: [MLR-DEEP] Deep learning techniques
Abstract: We consider few-shot learning (FSL), where a model learns from very few labeled examples such that it can generalize to unseen examples. Model-agnostic meta-learning (MAML) has been proposed to solve FSL. However, the low performance of MAML suggests its difficulty in tackling diverse tasks, due to the restriction of sharing a single model initialization for fast adaptation. In this paper, we propose meta-learning with attention mechanisms. Our method meta-learns attention modules to instantiate a task-specific model initialization for fast adaptation, which can obtain a high-quality solution to a new task using only a few gradient descent steps. To further improve generalization during inference, we propose to incorporate an entropy regularizer into the adaptation objective to penalize the Shannon entropy of the prediction probabilities. Extensive experiments under various FSL scenarios show that our method achieves state-of-the-art performance on the mini-ImageNet and tiered-ImageNet benchmarks.
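The entropy-regularized adaptation objective mentioned in the abstract can be sketched as follows. This is a minimal illustrative NumPy example, not the authors' implementation: the regularization weight `lam` and all function names are assumptions, and the loss shown is simply cross-entropy on the support set plus the average Shannon entropy of the model's prediction distributions.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def shannon_entropy(probs, eps=1e-12):
    # mean Shannon entropy H(p) = -sum_c p_c log p_c of the
    # per-example prediction distributions
    return float(-(probs * np.log(probs + eps)).sum(axis=-1).mean())

def adaptation_loss(logits, labels, lam=0.1):
    # cross-entropy on the labeled support examples, plus an entropy
    # penalty that discourages low-confidence predictions during
    # fast adaptation; `lam` is a hypothetical trade-off weight
    probs = softmax(logits)
    n = len(labels)
    ce = float(-np.log(probs[np.arange(n), labels] + 1e-12).mean())
    return ce + lam * shannon_entropy(probs)
```

In a MAML-style inner loop, this loss would replace the plain cross-entropy used for the few gradient-descent adaptation steps; minimizing the entropy term pushes the adapted model toward sharper (more confident) predictions.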