2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Technical Program

Paper Detail

Paper ID: MLSP-31.3
Paper Title: ADAPTIVE RE-BALANCING NETWORK WITH GATE MECHANISM FOR LONG-TAILED VISUAL QUESTION ANSWERING
Authors: Hongyu Chen, Ruifang Liu, Han Fang, Ximing Zhang, Beijing University of Posts and Telecommunications, China
Session: MLSP-31: Recommendation Systems
Location: Gather.Town
Session Time: Thursday, 10 June, 14:00 - 14:45
Presentation Time: Thursday, 10 June, 14:00 - 14:45
Presentation: Poster
Topic: Machine Learning for Signal Processing: [MLR-LMM] Learning from multimodal data
Abstract: Visual Question Answering (VQA) is a challenging task that requires a fine-grained semantic understanding of visual and textual content. Existing works focus on better modality representations but give little consideration to the long-tailed data distribution of common VQA datasets. The extreme class imbalance biases training so that models perform well on head classes but fail on tail classes. We therefore propose a unified Adaptive Re-balancing Network (ARN) that handles classification in both head and tail classes, comprehensively improving VQA performance. Specifically, two training branches are introduced that perform their own duties iteratively: the network first learns universal representations and then progressively emphasizes the tail data through the re-balancing branch with adaptive learning. Meanwhile, contextual information in the question is vital for guiding accurate visual attention, so the network is further equipped with a novel gate mechanism that assigns higher weight to contextual information. Experimental results on common benchmarks such as VQA-v2 demonstrate the superiority of our method compared with the state of the art.
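
The abstract describes two ingredients: a two-branch classifier whose mixing weight adaptively shifts from a conventional branch toward a re-balancing branch during training, and a gate that up-weights question context before visual attention. The PyTorch-style sketch below is only an illustration of those ideas under stated assumptions; the module names (GatedFusion, AdaptiveReBalancingHead), the sigmoid gate, and the parabolic weighting schedule are hypothetical and are not taken from the paper itself.

    # Illustrative sketch only: layer sizes, losses, and the exact adaptive
    # schedule of ARN are not given in the abstract, so the choices below are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GatedFusion(nn.Module):
        """Gates question context before it attends over region features (assumed design)."""
        def __init__(self, dim):
            super().__init__()
            self.gate = nn.Linear(dim, dim)
            self.attn = nn.Linear(dim, 1)

        def forward(self, q_ctx, v_feats):
            # q_ctx: (B, D) question context; v_feats: (B, N, D) visual region features
            g = torch.sigmoid(self.gate(q_ctx))                      # emphasis on contextual info
            scores = self.attn(v_feats * (g * q_ctx).unsqueeze(1))   # (B, N, 1) attention logits
            alpha = F.softmax(scores, dim=1)                         # attention over regions
            return (alpha * v_feats).sum(dim=1)                      # attended visual feature (B, D)

    class AdaptiveReBalancingHead(nn.Module):
        """Two classifier branches mixed by a weight that moves from the universal
        (head) branch toward the re-balancing (tail) branch as training proceeds."""
        def __init__(self, dim, num_answers):
            super().__init__()
            self.head_branch = nn.Linear(dim, num_answers)   # learns universal representations
            self.tail_branch = nn.Linear(dim, num_answers)   # emphasizes tail classes

        def forward(self, fused, epoch, total_epochs):
            w = 1.0 - (epoch / total_epochs) ** 2            # assumed parabolic decay schedule
            return w * self.head_branch(fused) + (1.0 - w) * self.tail_branch(fused)

In this reading, early epochs are dominated by the universal branch and later epochs by the re-balancing branch, which matches the abstract's "learn the universal representations first and then emphasize the tail data progressively"; how ARN actually parameterizes the gate and the schedule is detailed only in the full paper.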