2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Paper Detail

Paper ID: MMSP-3.6
Paper Title: Disentangling Subject-Dependent/-Independent Representations for 2D Motion Retargeting
Authors: Fanglu Xie, Go Irie, Tatsushi Matsubayashi, Nippon Telegraph and Telephone Corporation, Japan
Session: MMSP-3: Multimedia Synthesis and Enhancement
Location: Gather.Town
Session Time: Wednesday, 09 June, 14:00 - 14:45
Presentation Time: Wednesday, 09 June, 14:00 - 14:45
Presentation: Poster
Topic: Multimedia Signal Processing: Signal Processing for Multimedia Applications
Abstract: We consider the problem of 2D motion retargeting: transferring the motion of one 2D skeleton to another skeleton of a different body shape. Existing methods decompose the input motion skeleton into dynamic (motion) and static (body shape, viewpoint angle, and emotion) features and synthesize a new skeleton by mixing the features extracted from different data. However, the resulting motion skeletons do not reflect subject-dependent factors that stylize motion, such as skill and expression, leading to unattractive results. In this work, we propose a novel network that separates subject-dependent and subject-independent motion features and reconstructs a new skeleton with or without the subject-dependent motion features. The core of our approach is adversarial feature disentanglement: the motion features and a subject classifier are trained simultaneously such that subject-dependent motion features allow for between-subject discrimination, whereas subject-independent features do not. The presence or absence of individuality is readily controlled by a simple summation of the motion features. Our method outperforms the state-of-the-art method in terms of reconstruction error and can generate new skeletons while maintaining individuality.
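
The adversarial disentanglement described in the abstract can be illustrated with a short sketch. The following is a minimal PyTorch example, not the authors' implementation: the module names (enc_dep, enc_ind, clf), the layer sizes, and the use of a gradient-reversal layer to realize the adversarial objective are all assumptions made here for illustration. Two encoders produce subject-dependent and subject-independent motion features; a single subject classifier is trained on both, but the gradient flowing into the subject-independent encoder is reversed so that its features cannot discriminate between subjects, and individuality is toggled by simply summing (or omitting) the subject-dependent feature before decoding.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GradReverse(torch.autograd.Function):
        # Identity in the forward pass; negates the gradient in the backward
        # pass, so the upstream encoder learns to fool the subject classifier.
        @staticmethod
        def forward(ctx, x):
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -grad_output

    class DisentangleNet(nn.Module):
        def __init__(self, in_dim, feat_dim, num_subjects):
            super().__init__()
            # Hypothetical encoders for subject-dependent / -independent motion features.
            self.enc_dep = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
            self.enc_ind = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
            # Decoder reconstructs the skeleton from the (summed) motion features.
            self.dec = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))
            # One subject classifier applied to both feature branches.
            self.clf = nn.Linear(feat_dim, num_subjects)

        def forward(self, x, with_identity=True):
            z_dep, z_ind = self.enc_dep(x), self.enc_ind(x)
            # Individuality is toggled by simply adding (or omitting) z_dep.
            z = z_ind + z_dep if with_identity else z_ind
            recon = self.dec(z)
            logits_dep = self.clf(z_dep)                     # should discriminate subjects
            logits_ind = self.clf(GradReverse.apply(z_ind))  # adversarial: should not
            return recon, logits_dep, logits_ind

    # One hypothetical training step on random stand-in data.
    model = DisentangleNet(in_dim=34, feat_dim=64, num_subjects=10)  # e.g. 17 joints x 2D
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(8, 34)                    # batch of flattened 2D skeleton frames
    y = torch.randint(0, 10, (8,))            # subject labels
    recon, logits_dep, logits_ind = model(x)
    loss = (F.mse_loss(recon, x)              # reconstruction error
            + F.cross_entropy(logits_dep, y)  # z_dep must identify the subject
            + F.cross_entropy(logits_ind, y)) # reversed gradient strips identity from z_ind
    opt.zero_grad()
    loss.backward()
    opt.step()

In this sketch, calling model(x, with_identity=False) at inference time yields a de-individualized skeleton, mirroring the abstract's claim that the presence or absence of individuality is controlled by a simple summation of the motion features.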