2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information



Paper Detail

Paper ID: IVMSP-32.3
Paper Title: ROBUST SPATIAL-TEMPORAL CORRELATION MODEL FOR BACKGROUND INITIALIZATION IN SEVERE SCENE
Authors: Yuheng Deng, Wenjun Zhou, Bo Peng, Southwest Petroleum University, China; Dong Liang, Nanjing University of Aeronautics and Astronautics, China; Shun'ichi Kaneko, Hokkaido University, Japan
Session: IVMSP-32: Applications 4
Location: Gather.Town
Session Time: Friday, 11 June, 14:00 - 14:45
Presentation Time: Friday, 11 June, 14:00 - 14:45
Presentation: Poster
Topic: Image, Video, and Multidimensional Signal Processing: [IVTEC] Image & Video Processing Techniques
Abstract: Scene background initialization is an important low-level step that supports higher-level applications in computer vision. However, this process is often affected by practical challenges such as illumination changes, background motion, camera jitter, intermittent object movement, and bad outdoor weather. In this work, we develop a novel method, the co-occurrence pixel-block (CPB) model, which exploits spatial-temporal correlation for robust background initialization. The CPB model is first introduced for foreground extraction; background information in spatial-temporal features is then used to recover an adaptive background for the current frame. Experimental results on the challenging SBMnet benchmark dataset validate its performance under various challenges.
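
The abstract describes the CPB pipeline only at a high level, and the paper itself is not reproduced on this page. As a rough illustration of what a spatial-temporal background initialization step involves, the sketch below computes a block-wise temporal median over a frame stack; the block size, the median statistic, and the synthetic test data are all assumptions for illustration, and this is not the authors' co-occurrence pixel-block model.

```python
# Minimal illustrative sketch only: NOT the authors' CPB model.
# It shows a generic block-wise temporal-median background estimate,
# a simple form of spatial-temporal background initialization.
import numpy as np

def block_median_background(frames, block=8):
    """Estimate a static background from a stack of grayscale frames.

    frames : ndarray of shape (T, H, W), T frames of size H x W
    block  : side length of the square pixel blocks (assumed to be 8)
    """
    T, H, W = frames.shape
    background = np.zeros((H, W), dtype=frames.dtype)
    for y in range(0, H, block):
        for x in range(0, W, block):
            # Temporal median inside each spatial block: background pixels
            # dominate over time, so the median suppresses transient
            # foreground objects passing through the block.
            patch = frames[:, y:y + block, x:x + block]
            background[y:y + block, x:x + block] = np.median(patch, axis=0)
    return background

if __name__ == "__main__":
    # Synthetic example: a static ramp background with a moving bright square.
    rng = np.random.default_rng(0)
    T, H, W = 30, 64, 64
    base = np.tile(np.linspace(0, 255, W), (H, 1)).astype(np.float32)
    frames = np.repeat(base[None], T, axis=0) + rng.normal(0, 2, (T, H, W))
    for t in range(T):
        frames[t, 10:20, t:t + 10] = 255.0  # moving foreground object
    bg = block_median_background(frames)
    print("mean absolute error vs. true background:",
          float(np.abs(bg - base).mean()))
```

The block-wise loop stands in for the spatial part of the correlation; the actual CPB model additionally exploits co-occurrence statistics between pixel blocks across frames, which is beyond this sketch.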