2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Paper Detail

Paper ID: BIO-1.6
Paper Title: A DEEP SPATIO-TEMPORAL MODEL FOR EEG-BASED IMAGINED SPEECH RECOGNITION
Authors: Pradeep Kumar, Erik Scheme, University of New Brunswick, Canada
Session: BIO-1: Brain-Computer Interfaces
Location: Gather.Town
Session Time: Tuesday, 08 June, 13:00 - 13:45
Presentation Time: Tuesday, 08 June, 13:00 - 13:45
Presentation: Poster
Topic: Biomedical Imaging and Signal Processing: [BIO-BCI] Brain/human-computer interfaces
Abstract: Automatic speech recognition interfaces are becoming increasingly pervasive in daily life as a means of interacting with and controlling electronic devices. Current speech interfaces, however, are infeasible for a variety of users and use cases, such as patients who suffer from locked-in syndrome or those who need privacy. In these cases, an interface based on envisioned speech, the idea of imagining what one wants to say, could be of benefit. Consequently, in this work, we propose an imagined speech Brain-Computer Interface (BCI) using Electroencephalogram (EEG) signals. The EEG signals are processed using a deep spatio-temporal learning architecture in which 1D Convolutional Neural Networks (CNNs) capture spatial structure and Long Short-Term Memory (LSTM) units model temporal dynamics. The LSTM units are implemented in a many-to-many fashion to produce a time series of imagined speech outputs, which is then post-processed with a majority vote to further improve results. Performance is evaluated on two publicly available datasets: one to test the tuned model, and another to test its generalization to a new dataset. The proposed architecture outperforms previous results, with improvements of up to 23.7%.
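
For illustration, the following is a minimal sketch (not the authors' implementation) of the kind of 1D-CNN + LSTM many-to-many pipeline with majority-vote post-processing described in the abstract. The framework (PyTorch), layer sizes, EEG channel count, window length, and class count are assumptions chosen for the example.

# Minimal sketch of a 1D-CNN + LSTM many-to-many classifier for windowed EEG,
# followed by majority-vote post-processing. Shapes and hyperparameters are
# illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_channels=64, n_classes=11, hidden=128):
        super().__init__()
        # 1D convolutions mix information across EEG channels at each time
        # step (spatial features); the LSTM then models temporal dynamics.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, n_channels, time)
        feats = self.cnn(x)            # (batch, 64, time/2)
        feats = feats.transpose(1, 2)  # (batch, time/2, 64)
        out, _ = self.lstm(feats)      # many-to-many: keep every time step
        return self.head(out)          # (batch, time/2, n_classes)

def majority_vote(logits):
    """Collapse per-step predictions into a single label per trial."""
    preds = logits.argmax(dim=-1)           # (batch, steps)
    return torch.mode(preds, dim=1).values  # (batch,)

if __name__ == "__main__":
    x = torch.randn(8, 64, 256)   # 8 trials, 64 EEG channels, 256 samples each
    model = CNNLSTM()
    print(majority_vote(model(x)).shape)  # torch.Size([8])

The many-to-many design means the LSTM emits a prediction at every time step rather than only at the end of the window; the majority vote then aggregates that sequence of per-step labels into one decision per trial, which is the post-processing step the abstract credits with the final performance boost.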