2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information


Paper Detail

Paper ID: BIO-11.3
Paper Title: Decoding neural representations of rhythmic sounds from magnetoencephalography
Authors: Pei-Chun Chang, Jia-Ren Chang, Po-Yu Chen, Li-Kai Cheng, Jen-Chuen Hsieh, National Yang Ming Chiao Tung University, Taiwan; Hsin-Yen Yu, Taipei National University of the Arts, Taiwan; Li-Fen Chen, Yong-Sheng Chen, National Yang Ming Chiao Tung University, Taiwan
Session: BIO-11: Deep Learning for Physiological Signals
Location: Gather.Town
Session Time: Thursday, 10 June, 13:00 - 13:45
Presentation Time: Thursday, 10 June, 13:00 - 13:45
Presentation: Poster
Topic: Biomedical Imaging and Signal Processing: [BIO] Biomedical signal processing
Abstract: Neuroscience studies have revealed neural processes involved in rhythm perception, suggesting that the brain encodes rhythmic sounds and embeds this information in neural activity. In this work, we investigate how to extract rhythmic information embedded in brain responses and decode the original audio waveforms from the extracted information. A spatiotemporal convolutional neural network is adopted to extract compact rhythm-related representations from noninvasively measured magnetoencephalographic (MEG) signals evoked by listening to rhythmic sounds. These learned MEG representations are then used to condition an audio generator network that synthesizes the original rhythmic sounds. In the experiments, we evaluated the proposed method using MEG signals recorded from eight participants and demonstrated that the generated rhythms are highly related to the rhythms that evoked the MEG signals. Interestingly, we found that the auditory-related MEG channels are highly important for encoding rhythmic representations, that the distribution of these representations relates to the timing of beats, and that behavioral performance is consistent with neural decoding performance. These results suggest that the proposed method can synthesize rhythms by decoding neural representations from MEG.
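To make the abstract's two-stage pipeline concrete, here is a minimal PyTorch sketch of a spatiotemporal CNN encoder feeding an audio generator. Everything in it is an assumption for illustration only: the class names (MEGEncoder, AudioGenerator), channel counts, kernel sizes, latent dimension, and output length are hypothetical and not taken from the paper, which does not publish its architecture details on this page.

```python
# Hypothetical sketch of the two-stage pipeline described in the abstract:
# a spatiotemporal CNN encodes an MEG trial into a compact rhythm
# representation, which then conditions a 1-D upsampling audio generator.
# All layer sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class MEGEncoder(nn.Module):
    """Spatiotemporal CNN: spatial conv across MEG channels, then temporal convs."""
    def __init__(self, n_channels=204, latent_dim=64):
        super().__init__()
        # Spatial filtering: mix all MEG channels at each time point.
        self.spatial = nn.Conv2d(1, 32, kernel_size=(n_channels, 1))
        # Temporal filtering: capture rhythm-related dynamics, then pool.
        self.temporal = nn.Sequential(
            nn.Conv1d(32, 64, kernel_size=25, stride=4), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=25, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.proj = nn.Linear(64, latent_dim)

    def forward(self, x):                              # x: (batch, channels, times)
        h = self.spatial(x.unsqueeze(1)).squeeze(2)    # (batch, 32, times)
        h = self.temporal(h).squeeze(-1)               # (batch, 64)
        return self.proj(h)                            # (batch, latent_dim)

class AudioGenerator(nn.Module):
    """1-D transposed-conv decoder conditioned on the MEG representation."""
    def __init__(self, latent_dim=64, out_samples=16000):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 125)
        self.net = nn.Sequential(
            nn.ConvTranspose1d(128, 64, kernel_size=8, stride=4, padding=2), nn.ReLU(),
            nn.ConvTranspose1d(64, 32, kernel_size=8, stride=4, padding=2), nn.ReLU(),
            nn.ConvTranspose1d(32, 1, kernel_size=8, stride=8), nn.Tanh(),
        )
        self.out_samples = out_samples

    def forward(self, z):
        h = self.fc(z).view(z.size(0), 128, 125)       # seed feature map
        return self.net(h)[..., :self.out_samples]     # (batch, 1, out_samples)

# Toy forward pass: one simulated MEG trial -> synthesized waveform.
meg = torch.randn(1, 204, 1000)                        # (batch, channels, times)
wave = AudioGenerator()(MEGEncoder()(meg))
print(wave.shape)                                      # torch.Size([1, 1, 16000])
```

In such a design, the spatial convolution learns per-channel weightings (consistent with the finding that auditory-related channels carry most rhythmic information), while the temporal convolutions compress beat timing into the latent code that drives waveform synthesis.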