2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Paper Detail

Paper ID: IVMSP-34.2
Paper Title: BiShift-Net for Image Inpainting
Authors: Xue Zhou, Tao Dai, Yong Jiang, Shutao Xia, Tsinghua University, China
Session: IVMSP-34: Inpainting and Occlusions Handling
Location: Gather.Town
Session Time: Friday, 11 June, 14:00 - 14:45
Presentation Time: Friday, 11 June, 14:00 - 14:45
Presentation: Poster
Topic: Image, Video, and Multidimensional Signal Processing: [IVSMR] Image & Video Sensing, Modeling, and Representation
Abstract: Image inpainting remains a challenging task in computer vision; it aims to fill in the missing area of a corrupted image with proper content, using information from the known area to generate photorealistic images. Most existing methods generate content with blurry texture, caused by propagating the convolutional features through a fully connected layer. To address this problem, Shift-Net shifts encoder features from the known area to serve as an estimation of the missing parts. However, it ignores the decoder features, which carry newly encoded information. Inspired by this, we propose a new inpainting model, called BiShift-Net. BiShift-Net adopts the structure of U-Net, into which we introduce a BiShift layer. The BiShift layer captures information from both encoder and decoder features, rearranging the features to generate sharp texture. Experiments show that BiShift-Net outperforms other state-of-the-art CNN-based methods while producing more faithful results.
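
To make the mechanism in the abstract concrete, below is a minimal PyTorch sketch of a Shift-Net-style shift operation extended to both encoder and decoder features, in the spirit of the described BiShift layer. This is an illustrative assumption, not the paper's implementation: the function names (shift_fill, bishift), the cosine-similarity matching, and the concatenation order are all hypothetical, and the actual BiShift layer may differ in its matching metric, masking details, and feature fusion.

    import torch
    import torch.nn.functional as F

    def shift_fill(query, source, mask):
        """For each missing location in `query`, copy the best-matching
        `source` feature from the known region (cosine similarity),
        as in Shift-Net's shift operation.

        query, source: (B, C, H, W) feature maps at the same resolution.
        mask: (B, 1, H, W), 1 = missing, 0 = known.
        """
        B, C, H, W = query.shape
        q = F.normalize(query.flatten(2), dim=1)    # (B, C, HW), unit channels
        s = F.normalize(source.flatten(2), dim=1)   # (B, C, HW)
        sim = torch.bmm(q.transpose(1, 2), s)       # (B, HW, HW) cosine scores
        # Forbid matches to missing source locations.
        m = mask.flatten(2).squeeze(1)              # (B, HW), 1 = missing
        sim = sim.masked_fill(m.unsqueeze(1).bool(), float("-inf"))
        idx = sim.argmax(dim=2)                     # (B, HW) best known match
        src = source.flatten(2)                     # (B, C, HW)
        shifted = torch.gather(src, 2, idx.unsqueeze(1).expand(B, C, H * W))
        shifted = shifted.view(B, C, H, W)
        # Keep original features where known; use shifted ones where missing.
        return query * (1 - mask) + shifted * mask

    def bishift(enc_feat, dec_feat, mask):
        """Hypothetical BiShift: estimate missing decoder features from both
        the encoder and the decoder side, then concatenate for the next
        up-convolution stage of the U-Net."""
        from_enc = shift_fill(dec_feat, enc_feat, mask)  # encoder-guided shift
        from_dec = shift_fill(dec_feat, dec_feat, mask)  # decoder-guided shift
        return torch.cat([dec_feat, from_enc, from_dec], dim=1)

The intended design point, as the abstract states it, is that Shift-Net only borrows encoder features for the missing region, whereas a bidirectional shift also reuses the decoder's newly encoded features; concatenating both estimates lets subsequent layers rearrange them into sharper texture.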