2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Paper Detail

Paper ID: BIO-12.4
Paper Title: Human-centered Favorite Music Classification Using EEG-based Individual Music Preference via Deep Time-series CCA
Authors: Ryosuke Sawata, Graduate School of Information Science and Technology, Hokkaido University, Japan; Takahiro Ogawa, Miki Haseyama, Faculty of Information Science and Technology, Hokkaido University, Japan
Session: BIO-12: Feature Extraction and Fusion for Biomedical Applications
Location: Gather.Town
Session Time: Friday, 11 June, 11:30 - 12:15
Presentation Time: Friday, 11 June, 11:30 - 12:15
Presentation: Poster
Topic: Biomedical Imaging and Signal Processing: [BIO] Biomedical signal processing
IEEE Xplore: Open Preview available in IEEE Xplore
Abstract: This paper proposes a method to classify musical pieces that a user likes or dislikes based on the extraction of his or her individual music preference. To realize this classification, a new scheme of canonical correlation analysis (CCA), called Deep Time-series CCA (DTCCA), is exploited; it captures the correlation between two sets of input features while also modeling the time-series relations lurking in each input. The main difference between DTCCA and other existing CCA variants is this ability to model time-series relations, which makes individual electroencephalogram (EEG)-based favorite music classification more effective than methods using other CCA variants, since EEG and audio signals are both time-series data. Experimental results show that DTCCA-based favorite music classification outperforms not only a method using the original features without CCA but also methods using other existing CCA variants, including a state-of-the-art CCA.
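
To make the CCA-based fusion idea behind the abstract concrete, the sketch below uses scikit-learn's plain linear CCA as a stand-in: EEG features and audio features for the same musical pieces are projected into a shared, maximally correlated subspace, and the canonical variates are then used to classify like/dislike. This is only an illustration of the general pipeline, not the paper's method; DTCCA is a deep, time-series-aware CCA, and the array shapes, random data, and logistic-regression classifier here are assumptions for demonstration.

```python
# Illustrative sketch only: linear CCA as a stand-in for the paper's DTCCA.
# Feature dimensions, the synthetic data, and the classifier are assumptions.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-trial features: EEG features and audio features extracted
# from the same musical pieces (n_trials trials, d_eeg / d_audio dims each).
n_trials, d_eeg, d_audio = 200, 64, 40
eeg_feats = rng.standard_normal((n_trials, d_eeg))
audio_feats = rng.standard_normal((n_trials, d_audio))
labels = rng.integers(0, 2, size=n_trials)  # 1 = "favorite", 0 = "not favorite"

# Project both views into a shared subspace where they are maximally correlated.
cca = CCA(n_components=10)
eeg_proj, audio_proj = cca.fit_transform(eeg_feats, audio_feats)

# Classify like/dislike from the fused (concatenated) canonical variates.
fused = np.hstack([eeg_proj, audio_proj])
clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print("training accuracy:", clf.score(fused, labels))
```

Unlike this linear sketch, the DTCCA described in the abstract additionally learns nonlinear projections with deep networks and models the temporal dependencies within each view, which is what the authors credit for its advantage on EEG and audio signals.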