2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Paper Detail

Paper ID: IVMSP-34.5
Paper Title: SELF-SUPERVISED DEPTH ESTIMATION VIA IMPLICIT CUES FROM VIDEOS
Authors: Jianrong Wang, Ge Zhang, Zhenyu Wu, Xuewei Li, College of Intelligence and Computing, Tianjin University, China; Li Liu, Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong, Shenzhen, China
Session: IVMSP-34: Inpainting and Occlusions Handling
Location: Gather.Town
Session Time: Friday, 11 June, 14:00 - 14:45
Presentation Time: Friday, 11 June, 14:00 - 14:45
Presentation: Poster
Topic: Image, Video, and Multidimensional Signal Processing: [IVTEC] Image & Video Processing Techniques
IEEE Xplore: Open Preview available
Abstract: In self-supervised monocular depth estimation, depth discontinuities and artifacts from moving objects remain challenging problems. Existing self-supervised methods usually use two views to train the depth estimation network and a single view to make predictions. Compared with static views, the abundant dynamic properties between video frames are beneficial for refining depth estimation, especially for dynamic objects. In this work, we improve the self-supervised learning framework for depth estimation using consecutive frames from monocular and stereo videos. The main idea is to exploit an implicit depth cue extractor that leverages dynamic and static cues to generate useful depth proposals. These cues can predict distinguishable motion contours and geometric scene structures. Moreover, a new high-dimensional attention module is proposed to extract a clear global transformation, which effectively suppresses the uncertainty of local descriptors in high-dimensional space, resulting in more reliable optimization in the learning framework. Experiments demonstrate that the proposed framework outperforms the state of the art on the KITTI and Make3D datasets.
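
For context, self-supervised frameworks of the kind described above are typically trained with a view-synthesis (photometric reprojection) objective: predicted depth and relative camera pose warp a neighboring frame into the target view, and the appearance error supervises the network. The PyTorch sketch below is a minimal illustration of that standard objective only; the paper's implicit depth cue extractor and high-dimensional attention module would operate on top of such a loss. All function names, tensor shapes, and the 4x4 intrinsics convention are assumptions for illustration, not the authors' code.

    # Illustrative sketch (not the authors' implementation) of the standard
    # photometric reprojection loss used in self-supervised depth estimation.
    import torch
    import torch.nn.functional as F

    def backproject(depth, inv_K):
        # Lift every pixel to a 3D point using the predicted depth map.
        # depth: (B, 1, H, W); inv_K: (B, 4, 4) inverse camera intrinsics.
        b, _, h, w = depth.shape
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float()  # (3, H, W)
        pix = pix.view(1, 3, -1).expand(b, -1, -1).to(depth.device)      # (B, 3, HW)
        cam = depth.view(b, 1, -1) * (inv_K[:, :3, :3] @ pix)            # (B, 3, HW)
        ones = torch.ones(b, 1, h * w, device=depth.device)
        return torch.cat([cam, ones], dim=1)                             # (B, 4, HW)

    def project(points, K, T):
        # Map 3D points into the source view; K, T: (B, 4, 4) intrinsics and pose.
        cam = (K @ T)[:, :3, :] @ points                                 # (B, 3, HW)
        return cam[:, :2] / (cam[:, 2:3] + 1e-7)                         # (B, 2, HW)

    def photometric_loss(target, source, depth, K, inv_K, T):
        # Warp the source frame into the target view via predicted depth and
        # pose, then penalize the appearance difference (plain L1 here; full
        # systems usually blend L1 with an SSIM term).
        b, _, h, w = target.shape
        pix = project(backproject(depth, inv_K), K, T).view(b, 2, h, w)
        grid = torch.stack([pix[:, 0] / (w - 1) * 2 - 1,                 # x to [-1, 1]
                            pix[:, 1] / (h - 1) * 2 - 1], dim=-1)        # y to [-1, 1]
        warped = F.grid_sample(source, grid, padding_mode="border",
                               align_corners=True)
        return (warped - target).abs().mean()

Under this objective, regions violating the static-scene assumption (the moving objects the abstract highlights) produce systematic reprojection errors, which is why additional cues from consecutive frames, as the paper proposes, can help.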