2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Paper Detail

Paper ID: SS-2.4
Paper Title: A PLUG-AND-PLAY DEEP IMAGE PRIOR
Authors: Zhaodong Sun, Fabian Latorre, Thomas Sanchez, Volkan Cevher, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
Session: SS-2: Deep Learning Methods for Solving Linear Inverse Problems
Location: Gather.Town
Session Time: Tuesday, 08 June, 14:00 - 14:45
Presentation Time: Tuesday, 08 June, 14:00 - 14:45
Presentation: Poster
Topic: Special Sessions: Deep Learning Methods for Solving Linear Inverse Problems
Abstract: Deep image priors (DIP) offer a novel approach to regularization that leverages the inductive bias of a deep convolutional architecture in inverse problems. However, the quality of DIP reconstructions often degrades when the number of iterations exceeds a certain threshold, due to overfitting. To mitigate this effect, this work incorporates a plug-and-play prior scheme that can accommodate additional regularization steps within a DIP framework. Our modification is achieved via an augmented Lagrangian formulation of the problem, which is solved with an Alternating Direction Method of Multipliers (ADMM) variant that captures existing DIP approaches as a special case. We show experimentally that our ADMM-based DIP pairing outperforms competitive baselines in PSNR while exhibiting less overfitting.
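
To make the abstract concrete, here is a minimal sketch of how such an augmented Lagrangian / ADMM splitting could look. The notation is an assumption for illustration (f_θ for the DIP network with fixed input z, measurement operator A, regularizer R, penalty ρ, scaled dual variable u) and is not necessarily the authors' exact formulation:

\[
\min_{x,\theta}\ \tfrac{1}{2}\,\|Ax - y\|_2^2 + \lambda R(x)
\quad \text{subject to} \quad x = f_\theta(z),
\]
with (scaled) augmented Lagrangian
\[
\mathcal{L}_\rho(x,\theta,u) = \tfrac{1}{2}\,\|Ax - y\|_2^2 + \lambda R(x)
+ \tfrac{\rho}{2}\,\|x - f_\theta(z) + u\|_2^2 - \tfrac{\rho}{2}\,\|u\|_2^2,
\]
and ADMM-style updates
\[
x^{k+1} \in \arg\min_x \mathcal{L}_\rho(x,\theta^k,u^k), \qquad
\theta^{k+1} \approx \arg\min_\theta \|x^{k+1} - f_\theta(z) + u^k\|_2^2, \qquad
u^{k+1} = u^k + x^{k+1} - f_{\theta^{k+1}}(z).
\]

In a plug-and-play reading of this sketch, the x-update is where an off-the-shelf denoiser or proximal operator of R can be substituted, while the θ-update reduces to the usual DIP network fitting by gradient descent; switching off the extra regularization would presumably recover plain DIP as the special case mentioned in the abstract.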