2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information



Paper Detail

Paper ID: AUD-22.5
Paper Title: Audio-Visual Event Recognition through the lens of Adversary
Authors: Juncheng Li, Kaixin Ma, Carnegie Mellon University, United States; Shuhui Qu, Stanford University, United States; Po-Yao Huang, Florian Metze, Carnegie Mellon University, United States
Session: AUD-22: Detection and Classification of Acoustic Scenes and Events 3: Multimodal Scenes and Events
Location: Gather.Town
Session Time: Thursday, 10 June, 15:30 - 16:15
Presentation Time: Thursday, 10 June, 15:30 - 16:15
Presentation: Poster
Topic: Audio and Acoustic Signal Processing: [AUD-CLAS] Detection and Classification of Acoustic Scenes and Events
IEEE Xplore Open Preview: Available in IEEE Xplore
Abstract: As audio/visual classification models are widely deployed at scale for sensitive tasks such as content filtering, it is critical to understand their robustness. This work studies several key issues in multimodal learning through the lens of adversarial noise: 1) the trade-off between early and late fusion in terms of robustness; 2) how different frequency/time-domain features contribute to robustness; 3) how different neural modules hold up against adversarial noise. In our experiments, we construct adversarial examples to attack state-of-the-art neural models trained on Google AudioSet and analyze how much attack potency, in terms of $\epsilon$ under different $L_p$ norms, is needed to ``deactivate'' the victim model. Using adversarial noise to dissect multimodal models, we provide insight into which fusion strategy best balances the trade-off between model parameters/accuracy and robustness, and distinguish the robust features from the non-robust features that various neural networks tend to learn.
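
The paper's exact attack setup is not reproduced here, but as a rough illustration of what "attack potency in terms of $\epsilon$ under an $L_p$ norm" means in practice, the following is a minimal projected-gradient-descent (PGD) sketch in PyTorch. The function name pgd_attack, the step size alpha, the single-example input, and the use of cross-entropy loss are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=0.01, alpha=0.002, steps=10, norm="linf"):
    """Illustrative PGD attack (not the paper's implementation): find a
    perturbation delta with ||delta||_p <= eps that maximizes the loss
    of `model` on input `x` with true label `y` (single example assumed)."""
    delta = torch.zeros_like(x, requires_grad=True)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            if norm == "linf":
                # Ascend along the gradient sign, then clip back into the L_inf ball.
                delta += alpha * delta.grad.sign()
                delta.clamp_(-eps, eps)
            else:  # "l2"
                # Take a normalized gradient step, then project onto the L_2 ball.
                g = delta.grad
                delta += alpha * g / (g.norm() + 1e-12)
                delta *= torch.clamp(eps / (delta.norm() + 1e-12), max=1.0)
        delta.grad.zero_()
    return (x + delta).detach()
```

Sweeping eps upward until the model's prediction flips gives one simple notion of the "potency" needed to deactivate a victim model; the paper's analysis compares such budgets across fusion strategies, input features, and neural modules.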