2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Paper Detail

Paper ID: IVMSP-26.1
Paper Title: ATTENTIONLITE: TOWARDS EFFICIENT SELF-ATTENTION MODELS FOR VISION
Authors: Souvik Kundu, University of Southern California, United States; Sairam Sundaresan, Intel Labs, United States
Session: IVMSP-26: Attention for Vision
Location: Gather.Town
Session Time: Thursday, 10 June, 16:30 - 17:15
Presentation Time: Thursday, 10 June, 16:30 - 17:15
Presentation: Poster
Topic: Image, Video, and Multidimensional Signal Processing: [IVTEC] Image & Video Processing Techniques
Abstract: We propose a novel framework for producing a class of parameter- and compute-efficient models, called AttentionLite, suitable for resource-constrained applications. Prior work has primarily focused on optimizing models via either knowledge distillation or pruning. In addition to fusing these two mechanisms, our joint optimization framework also leverages recent advances in self-attention as a substitute for convolutions. We can simultaneously distill knowledge from a compute-heavy teacher while also pruning the student model in a single pass of training, thereby reducing training and fine-tuning times considerably. We evaluate the merits of our proposed approach on the CIFAR-10, CIFAR-100, and Tiny-ImageNet datasets. Not only do our AttentionLite models significantly outperform their unoptimized counterparts in accuracy, but we also find that, in some cases, they perform almost as well as their compute-heavy teachers while consuming only a fraction of the parameters and FLOPs. Concretely, AttentionLite models can achieve up to 30x parameter efficiency and 2x computation efficiency with no significant accuracy drop compared to their teachers.
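The abstract describes the core mechanism only at a high level: distilling from a teacher while pruning the student in a single training pass. Below is a minimal PyTorch sketch of one plausible interpretation of that pattern, combining a standard distillation loss (soft-target KL plus hard-label cross-entropy) with magnitude-based pruning masks re-applied after each optimizer step. All names, hyperparameters (temperature, alpha, sparsity), and the masking scheme are illustrative assumptions, not the paper's actual algorithm, and the self-attention student architecture itself is omitted.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Generic KD loss: temperature-softened KL to the teacher plus hard-label CE.
    Hyperparameters here are common defaults, not values from the paper."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

def magnitude_masks(model, sparsity=0.9):
    """Binary masks that keep only the largest-magnitude weights in each
    weight tensor (biases and norm parameters are left dense)."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:
            k = max(1, int(p.numel() * sparsity))  # number of weights to drop
            threshold = p.abs().flatten().kthvalue(k).values
            masks[name] = (p.abs() > threshold).float()
    return masks

def train_step(student, teacher, batch, optimizer, masks):
    """One joint pass: distill from the frozen teacher, update the student,
    then re-apply the pruning masks so sparsity is maintained throughout."""
    x, y = batch
    with torch.no_grad():
        t_logits = teacher(x)  # teacher is frozen; no gradients needed
    s_logits = student(x)
    loss = distillation_loss(s_logits, t_logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        for name, p in student.named_parameters():
            if name in masks:
                p.mul_(masks[name])  # zero out pruned weights in place
    return loss.item()
```

Because the masks are enforced inside the same training loop as the distillation loss, no separate fine-tuning stage is needed after pruning, which is consistent with the single-pass training and reduced fine-tuning time the abstract claims.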