2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information



Paper Detail

Paper ID: IVMSP-28.2
Paper Title: RANGE GUIDED DEPTH REFINEMENT AND UNCERTAINTY-AWARE AGGREGATION FOR VIEW SYNTHESIS
Authors: Yuan Chang, Yisong Chen, Guoping Wang, Peking University, China
Session: IVMSP-28: Image Synthesis
Location: Gather.Town
Session Time: Friday, 11 June, 11:30 - 12:15
Presentation Time: Friday, 11 June, 11:30 - 12:15
Presentation: Poster
Topic: Image, Video, and Multidimensional Signal Processing: [IVARS] Image & Video Analysis, Synthesis, and Retrieval
IEEE Xplore: Open Preview available
Abstract: In this paper, we present a view synthesis framework consisting of range-guided depth refinement and uncertainty-aware aggregation for novel view synthesis. We first propose a novel depth refinement method to improve the quality and robustness of depth map reconstruction. To that end, we use a range prior to constrain the estimated depth, which yields more accurate depth information. We then propose an uncertainty-aware aggregation method for novel view synthesis: we compute the uncertainty of the estimated depth at each pixel and reduce the influence of pixels whose uncertainty is large when synthesizing novel views. This step suppresses artifacts such as ghosting and blurring. We validate our algorithm experimentally and show that it achieves state-of-the-art performance.
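The abstract's uncertainty-aware aggregation idea can be sketched as follows. This is a minimal illustrative implementation, not the authors' actual method: it assumes candidate colors have already been warped from each source view into the target view, and it simply down-weights each candidate by its depth-uncertainty estimate (the function name, array shapes, and inverse-uncertainty weighting are assumptions for illustration).

```python
import numpy as np

def aggregate_views(warped, uncertainty, eps=1e-6):
    """Blend candidate pixels from several warped source views.

    warped:      (V, H, W, 3) candidate colors from V source views,
                 already warped into the target viewpoint
    uncertainty: (V, H, W)    per-pixel depth uncertainty
                 (higher value = less reliable depth estimate)
    """
    # Down-weight pixels whose estimated depth is uncertain,
    # so unreliable candidates contribute less to the novel view.
    weights = 1.0 / (uncertainty + eps)            # (V, H, W)
    weights /= weights.sum(axis=0, keepdims=True)  # normalize over views
    return (weights[..., None] * warped).sum(axis=0)
```

With this weighting, a pixel whose depth is three times more uncertain than another's receives roughly a third of its weight, which is one simple way to realize the "reduce the influence of high-uncertainty pixels" step described in the abstract.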