2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Paper Detail

Paper ID: HLT-9.2
Paper Title: ALIGNING THE TRAINING AND EVALUATION OF UNSUPERVISED TEXT STYLE TRANSFER
Authors: Wanhui Qian, Fuqing Zhu, Jinzhu Yang, Jizhong Han, Songlin Hu, Institute of Information Engineering, Chinese Academy of Sciences, China
Session: HLT-9: Style and Text Normalization
Location: Gather.Town
Session Time: Wednesday, 09 June, 16:30 - 17:15
Presentation Time: Wednesday, 09 June, 16:30 - 17:15
Presentation: Poster
Topic: Human Language Technology: [HLT-MLMD] Machine Learning Methods for Language
Abstract: In the text style transfer task, models modify the attribute style of given texts while keeping the style-irrelevant content unchanged. Previous work has proposed many approaches for non-parallel corpora (without style-to-style training pairs). These approaches are mostly motivated by heuristic intuition and fail to precisely control the attributes of the texts, such as the amount of preserved semantics, which leaves a discrepancy between training and evaluation. This paper proposes a novel training method based on the evaluation metrics to address this discrepancy. Specifically, the model first evaluates different aspects of the transferred texts and provides differentiable quality approximations through extra supervising modules. Then the model is optimized by bridging the gap between these approximations and the expectations. Extensive experiments conducted on two sentiment style datasets demonstrate the effectiveness of our proposal compared with competitive baselines.
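
The training idea summarized in the abstract, supervising modules that produce differentiable approximations of the evaluation metrics and a loss that closes the gap between those approximations and the expected scores, can be sketched as follows. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the module name MetricApproximator, the two aspects ("style", "content"), the MSE gap loss, and the target expectations are all illustrative assumptions.

```python
# Hypothetical sketch of training against differentiable metric approximations.
# All names and targets here are illustrative assumptions.
import torch
import torch.nn as nn

class MetricApproximator(nn.Module):
    """Supervising module mapping a transferred-text representation to a
    differentiable quality score in [0, 1] (e.g., style accuracy or
    content preservation)."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, text_repr: torch.Tensor) -> torch.Tensor:
        return self.scorer(text_repr).squeeze(-1)

def evaluation_aligned_loss(text_repr, approximators, expectations):
    """Bridge the gap between each differentiable metric approximation and
    its expected (target) score."""
    loss = 0.0
    for name, module in approximators.items():
        predicted = module(text_repr)                        # differentiable approximation
        target = torch.full_like(predicted, expectations[name])
        loss = loss + nn.functional.mse_loss(predicted, target)
    return loss

# Toy usage: a batch of 8 transferred-text representations of size 256.
hidden = 256
approximators = nn.ModuleDict({
    "style": MetricApproximator(hidden),    # transfer-accuracy aspect (assumed)
    "content": MetricApproximator(hidden),  # semantic-preservation aspect (assumed)
})
expectations = {"style": 1.0, "content": 1.0}  # both aspects should be maximized
reprs = torch.randn(8, hidden, requires_grad=True)
loss = evaluation_aligned_loss(reprs, approximators, expectations)
loss.backward()  # gradients flow back to the transfer model through reprs
```

In a full system the representations would come from the style-transfer model itself, and the supervising modules would first be fitted to mimic the actual evaluation metrics before being used to guide training; the sketch only shows how the gap-bridging objective stays differentiable end to end.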