2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information


Paper Detail

Paper ID: IVMSP-28.3
Paper Title: DP-VTON: TOWARD DETAIL-PRESERVING IMAGE-BASED VIRTUAL TRY-ON NETWORK
Authors: Yuan Chang, Tao Peng, Ruhan He, Xinrong Hu, Junping Liu, Zili Zhang, Minghua Jiang, Wuhan Textile University, China
Session: IVMSP-28: Image Synthesis
Location: Gather.Town
Session Time: Friday, 11 June, 11:30 - 12:15
Presentation Time: Friday, 11 June, 11:30 - 12:15
Presentation: Poster
Topic: Image, Video, and Multidimensional Signal Processing: [IVELI] Electronic Imaging
Abstract: Image-based virtual try-on systems, whose goal is to transfer a target clothing item onto the corresponding region of a person, have received great attention recently. However, it remains a challenge for existing methods to generate photo-realistic try-on images while preserving non-target details (Fig. 1). To resolve these issues, we present a novel virtual try-on network, DP-VTON. First, a clothing warping module combines pixel transformation with feature transformation to transform the target clothing. Second, a semantic segmentation prediction module predicts a semantic segmentation map of the person wearing the target clothing. Third, an arm generation module generates the arm regions of the reference image that change after try-on. Finally, the warped clothing, the semantic segmentation map, the arm image, and the other non-target details (e.g. face, hair, bottom clothes) are fused together to synthesize the try-on image. Extensive experiments demonstrate that our system achieves state-of-the-art virtual try-on performance both qualitatively and quantitatively.
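The final fusion step the abstract describes can be sketched as mask-guided compositing: the predicted segmentation map selects, per pixel, whether the output comes from the warped clothing, the generated arms, or the untouched reference person. The snippet below is a minimal illustrative sketch, not the authors' implementation; the function name `fuse_try_on`, the label values, and the numpy arrays standing in for network outputs are all assumptions for illustration.

```python
import numpy as np

def fuse_try_on(person, warped_cloth, arms, seg_map,
                cloth_label=1, arm_label=2):
    """Hypothetical sketch of the fusion stage described in the abstract:
    pixels labelled as clothing are taken from the warped target cloth,
    pixels labelled as arms from the generated arm image, and all other
    pixels keep the reference person's non-target details
    (face, hair, bottom clothes)."""
    out = person.copy()
    cloth_mask = seg_map == cloth_label
    arm_mask = seg_map == arm_label
    out[cloth_mask] = warped_cloth[cloth_mask]
    out[arm_mask] = arms[arm_mask]
    return out

# Toy 2x2 RGB "images" standing in for the module outputs.
person = np.zeros((2, 2, 3))            # reference person (all 0.0)
warped_cloth = np.ones((2, 2, 3))       # warped clothing (all 1.0)
arms = np.full((2, 2, 3), 0.5)          # generated arms (all 0.5)
seg = np.array([[1, 2],                 # top-left: cloth, top-right: arm
                [0, 0]])                # bottom row: keep person

result = fuse_try_on(person, warped_cloth, arms, seg)
```

In the actual network the three inputs would be produced by the warping, segmentation, and arm-generation modules and fused by a learned synthesis network rather than hard masking, but the hard-mask version shows which source each region draws from.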