2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Paper Detail

Paper ID: BIO-10.2
Paper Title: Deep Multiway Canonical Correlation Analysis for Multi-subject EEG Normalization
Authors: Jaswanth Reddy Katthi, Sriram Ganapathy (Indian Institute of Science, India)
Session: BIO-10: Deep Learning for EEG Analysis
Location: Gather.Town
Session Time: Thursday, 10 June, 13:00 - 13:45
Presentation Time: Thursday, 10 June, 13:00 - 13:45
Presentation: Poster
Topic: Biomedical Imaging and Signal Processing: [BIO] Biomedical signal processing
Abstract: The normalization of brain recordings from multiple subjects responding to natural stimuli is one of the key challenges in auditory neuroscience. The objective of this normalization is to transform the brain data so as to remove inter-subject redundancies and boost the components related to the stimuli. In this paper, we propose a deep learning framework to improve the correlation of electroencephalography (EEG) data recorded from multiple subjects engaged in an audio listening task. The proposed model extends linear multiway canonical correlation analysis (CCA) for audio-EEG analysis using an auto-encoder network with a shared encoder layer. The model is trained to optimize a combined loss involving correlation and reconstruction. The experiments are performed on EEG data collected from subjects listening to natural speech and music. In these experiments, we show that the proposed deep multiway CCA (DMCCA) based model significantly improves the correlations over the linear multiway CCA approach, with absolute improvements of 0.08 and 0.29 in Pearson correlation for the speech and music tasks, respectively.
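
The abstract describes an auto-encoder extension of multiway CCA trained on a combined correlation-plus-reconstruction objective. The following is a minimal PyTorch sketch of that idea, not the authors' implementation: it assumes independent per-subject encoder/decoder MLPs (the paper's shared encoder layer is not reproduced), a pairwise Pearson-correlation term across the subject embeddings, and an illustrative weighting factor `lam`; all class names, layer sizes, and hyperparameters are hypothetical.

```python
# Sketch of a DMCCA-style objective: per-subject auto-encoders whose latent
# codes are encouraged to correlate across subjects while reconstructing
# each subject's EEG. Assumptions are noted in the lead-in above.
import torch
import torch.nn as nn

class SubjectAutoEncoder(nn.Module):
    """Encoder-decoder pair for one subject's EEG frames (time x channels)."""
    def __init__(self, in_dim: int, latent_dim: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                                     nn.Linear(hidden, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.Tanh(),
                                     nn.Linear(hidden, in_dim))

    def forward(self, x):
        z = self.encoder(x)      # (time, latent_dim) embedding in the shared space
        x_hat = self.decoder(z)  # reconstruction of the input EEG
        return z, x_hat

def pearson_corr(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Mean Pearson correlation over latent dimensions of two embeddings."""
    a = a - a.mean(dim=0, keepdim=True)
    b = b - b.mean(dim=0, keepdim=True)
    num = (a * b).sum(dim=0)
    den = a.norm(dim=0) * b.norm(dim=0) + 1e-8
    return (num / den).mean()

def dmcca_loss(eeg_list, models, lam: float = 0.1):
    """Combined loss: maximize inter-subject correlation of the encoded EEG
    (negated for minimization) plus a weighted reconstruction error."""
    zs, recon = [], 0.0
    for x, m in zip(eeg_list, models):
        z, x_hat = m(x)
        zs.append(z)
        recon = recon + nn.functional.mse_loss(x_hat, x)
    corr, n = 0.0, len(zs)
    for i in range(n):
        for j in range(i + 1, n):
            corr = corr + pearson_corr(zs[i], zs[j])
    corr = corr / (n * (n - 1) / 2)   # average over subject pairs
    return -corr + lam * recon
```

With `eeg_list` holding one (time, channels) tensor per subject and `models` a matching list of `SubjectAutoEncoder` instances, the loss can be minimized with any standard optimizer such as Adam; the weighting `lam` between the correlation and reconstruction terms is a free hyperparameter in this sketch rather than a value taken from the paper.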