2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Paper Detail

Paper ID: AUD-9.6
Paper Title: STATISTICAL CORRECTION OF TRANSCRIBED MELODY NOTES BASED ON PROBABILISTIC INTEGRATION OF A MUSIC LANGUAGE MODEL AND A TRANSCRIPTION ERROR MODEL
Authors: Yuki Hiramatsu, Go Shibata, Ryo Nishikimi, Eita Nakamura, Kazuyoshi Yoshii, Kyoto University, Japan
Session: AUD-9: Music Information Retrieval and Music Language Processing 1: Beat and Melody
Location: Gather.Town
Session Time: Wednesday, 09 June, 14:00 - 14:45
Presentation Time: Wednesday, 09 June, 14:00 - 14:45
Presentation: Poster
Topic: Audio and Acoustic Signal Processing: [AUD-MIR] Music Information Retrieval and Music Language Processing
Abstract: This paper describes a statistical post-processing method for automatic singing transcription that corrects pitch and rhythm errors included in a transcribed note sequence. Although the performance of frame-level pitch estimation has improved drastically thanks to deep learning techniques, note-level transcription of the singing voice is still an open problem. Inspired by the standard framework of statistical machine translation, we formulate a hierarchical generative model of a transcribed note sequence that consists of a music language model describing the pitch and onset transitions of the true note sequence and a transcription error model describing the addition of deletion, insertion, and substitution errors to the true sequence. Because the length of the true sequence may differ from that of the observed transcribed sequence, the most likely sequences for the possible lengths are estimated with Viterbi decoding, and the most likely length is then selected with a sophisticated language model based on a long short-term memory (LSTM) network. The experimental results show that the proposed method can correct musically unnatural transcription errors.
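
The following is a minimal, illustrative sketch of the core idea in the abstract: probabilistically integrating a music language model (pitch-transition probabilities of the true notes) with a transcription error model and recovering the most likely true pitch sequence by Viterbi decoding. It is not the authors' implementation; the error model here handles substitutions only (no insertions or deletions), the rhythm/onset model and the LSTM-based length selection are omitted, and all probabilities and pitch ranges are toy values chosen for the example.

```python
# Hypothetical sketch: correct a transcribed pitch sequence by combining a
# pitch-transition language model with a substitution-only error model and
# running Viterbi decoding over the true pitches. Toy probabilities only.

import numpy as np

PITCHES = np.arange(60, 72)            # one octave of MIDI pitches (assumption)
K = len(PITCHES)

def toy_language_model():
    """Log-probabilities of pitch transitions, favouring small intervals."""
    intervals = PITCHES[None, :] - PITCHES[:, None]
    logits = -0.5 * (intervals / 2.0) ** 2          # Gaussian preference for small steps
    return logits - np.logaddexp.reduce(logits, axis=1, keepdims=True)

def toy_error_model(p_correct=0.8):
    """Log-probabilities of the transcribed pitch given the true pitch."""
    conf = np.full((K, K), (1.0 - p_correct) / (K - 1))
    np.fill_diagonal(conf, p_correct)
    return np.log(conf)

def correct_pitches(observed, log_trans, log_emit):
    """Viterbi decoding of the most likely true pitch sequence."""
    obs_idx = [int(np.where(PITCHES == p)[0][0]) for p in observed]
    T = len(obs_idx)
    delta = np.full((T, K), -np.inf)
    back = np.zeros((T, K), dtype=int)
    delta[0] = -np.log(K) + log_emit[:, obs_idx[0]]   # uniform prior over the first pitch
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans     # rows: previous pitch, cols: current
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emit[:, obs_idx[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):                      # backtrack the best path
        path.append(int(back[t][path[-1]]))
    return [int(PITCHES[i]) for i in reversed(path)]

if __name__ == "__main__":
    transcribed = [60, 62, 71, 65, 67]   # 71 is an implausible leap, likely a transcription error
    corrected = correct_pitches(transcribed, toy_language_model(), toy_error_model())
    print("transcribed:", transcribed)
    print("corrected:  ", corrected)
```

Because the language model penalizes large melodic leaps more heavily than the error model penalizes changing a single observed pitch, the decoder replaces the implausible note with a musically smoother one; the full method in the paper additionally models insertions and deletions and rescores candidate lengths with an LSTM language model.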