2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information


Technical Program

Paper Detail

Paper ID: IVMSP-5.5
Paper Title: LIGHTWEIGHT NON-LOCAL NETWORK FOR IMAGE SUPER-RESOLUTION
Authors: Risheng Wang, Tao Lei, Wenzheng Zhou, Shaanxi University of Science and Technology, China; Qi Wang, Northwestern Polytechnical University, China; Hongying Meng, Asoke K. Nandi, Brunel University London, United Kingdom
Session: IVMSP-5: Super-resolution 1
Location: Gather.Town
Session Time: Tuesday, 08 June, 16:30 - 17:15
Presentation Time: Tuesday, 08 June, 16:30 - 17:15
Presentation: Poster
Topic: Image, Video, and Multidimensional Signal Processing: [IVTEC] Image & Video Processing Techniques
Abstract: The popular deep convolutional networks used for image super-resolution (SR) reconstruction often increase the network depth and employ attention mechanisms to improve image reconstruction quality. However, these networks suffer from two problems. First, a deeper network easily incurs higher computational cost and more GPU memory usage. Second, traditional attention mechanisms often miss the spatial information of images, leading to the loss of image detail. To address these issues, we propose a lightweight non-local network (LNLN) for image super-resolution in this paper. The proposed network makes two contributions. First, we use a non-local module instead of a normal attention module to obtain a larger receptive field and extract more comprehensive feature information, which helps improve image SR reconstruction results. Second, we use depthwise separable convolution (DSC) instead of vanilla convolution to build the residual block, which greatly reduces the number of parameters and the computational cost. The proposed LNLN and comparative networks are evaluated on five commonly used public datasets, and experiments demonstrate that the proposed LNLN is superior to state-of-the-art networks in terms of reconstruction performance, number of parameters, and storage space.
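
The abstract describes two building blocks: a residual block rebuilt from depthwise separable convolutions and a non-local module in place of ordinary channel attention. The sketch below is a minimal PyTorch illustration of these two generic components, not the authors' exact LNLN architecture; channel widths, layer order, and the use of the standard embedded-Gaussian non-local block are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DSCResidualBlock(nn.Module):
    """Residual block built from depthwise separable convolutions.
    Hypothetical sketch: channel count and layer order are assumptions."""
    def __init__(self, channels=64):
        super().__init__()
        # Depthwise conv: one 3x3 filter per channel (groups=channels).
        self.depthwise1 = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        # Pointwise 1x1 conv mixes information across channels.
        self.pointwise1 = nn.Conv2d(channels, channels, 1)
        self.depthwise2 = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.pointwise2 = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        out = F.relu(self.pointwise1(self.depthwise1(x)))
        out = self.pointwise2(self.depthwise2(out))
        return x + out  # residual connection


class NonLocalBlock(nn.Module):
    """Standard embedded-Gaussian non-local block (Wang et al., 2018),
    used here as a generic stand-in for the paper's non-local module."""
    def __init__(self, channels=64):
        super().__init__()
        self.inter = channels // 2
        self.theta = nn.Conv2d(channels, self.inter, 1)
        self.phi = nn.Conv2d(channels, self.inter, 1)
        self.g = nn.Conv2d(channels, self.inter, 1)
        self.out = nn.Conv2d(self.inter, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        theta = self.theta(x).view(b, self.inter, -1).permute(0, 2, 1)  # B x HW x C'
        phi = self.phi(x).view(b, self.inter, -1)                       # B x C' x HW
        g = self.g(x).view(b, self.inter, -1).permute(0, 2, 1)          # B x HW x C'
        attn = F.softmax(theta @ phi, dim=-1)      # pairwise affinities over all positions
        y = (attn @ g).permute(0, 2, 1).view(b, self.inter, h, w)
        return x + self.out(y)                     # residual connection


if __name__ == "__main__":
    x = torch.randn(1, 64, 48, 48)
    y = NonLocalBlock(64)(DSCResidualBlock(64)(x))
    print(y.shape)  # torch.Size([1, 64, 48, 48])
```

The parameter saving claimed for DSC is easy to see from this sketch: a vanilla 3x3 conv with 64 input and output channels has 3·3·64·64 ≈ 36.9K weights, while the depthwise (3·3·64 = 576) plus pointwise (64·64 = 4,096) pair uses roughly 4.7K, about an eighth as many.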