2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information


Paper Detail

Paper ID: HLT-10.3
Paper Title: TRIPLE SEQUENCE GENERATIVE ADVERSARIAL NETS FOR UNSUPERVISED IMAGE CAPTIONING
Authors: Yucheng Zhou, Wei Tao, Wenqiang Zhang, Fudan University, China
Session: HLT-10: Multi-modality in Language
Location: Gather.Town
Session Time: Wednesday, 09 June, 16:30 - 17:15
Presentation Time: Wednesday, 09 June, 16:30 - 17:15
Presentation: Poster
Topic: Human Language Technology: [HLT-MMPL] Multimodal Processing of Language
Abstract: Labelling image-sentence pairs is expensive, and some unsupervised image captioning methods show promising results on caption generation. However, the generated captions are not very relevant to the images because these methods depend excessively on the corpus. To overcome this drawback, we focus on the correspondence between image and sentence to construct an image caption with a better mapping relation. In this paper, we present a novel triple sequence generative adversarial net consisting of an image generator, a discriminator, and a sentence generator. The image generator generates image regions for words. Meanwhile, the sentence corpus guides the sentence generator based on the generated image regions. The discriminator judges the relevance between the words in the sentence and the generated image regions. In the experiments, we train our model on a large number of unpaired images and sentences in an unsupervised, unpaired setting. The experimental results demonstrate that our method achieves significant improvements over all baselines.
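
To make the three-module architecture concrete, below is a minimal PyTorch sketch of the components named in the abstract: an image generator that maps words to image-region features, a sentence generator that decodes words back from those regions, and a discriminator that scores word-region relevance. All dimensions, layer choices, and names here are illustrative assumptions, not the authors' actual implementation, which is not described in this abstract.

    # Minimal sketch of the triple-sequence GAN modules from the abstract.
    # Every dimension and layer choice below is an assumption for illustration.
    import torch
    import torch.nn as nn

    VOCAB_SIZE = 10000   # assumed vocabulary size
    EMBED_DIM = 512      # assumed word-embedding dimension
    REGION_DIM = 2048    # assumed image-region feature dimension

    class ImageGenerator(nn.Module):
        """Generates a synthetic image-region feature for each word."""
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
            self.to_region = nn.Sequential(
                nn.Linear(EMBED_DIM, REGION_DIM), nn.ReLU(),
                nn.Linear(REGION_DIM, REGION_DIM),
            )

        def forward(self, words):  # words: (batch, seq_len) token ids
            # Returns (batch, seq_len, REGION_DIM) region features.
            return self.to_region(self.embed(words))

    class SentenceGenerator(nn.Module):
        """Decodes a word sequence from (generated) image-region features."""
        def __init__(self):
            super().__init__()
            self.rnn = nn.LSTM(REGION_DIM, EMBED_DIM, batch_first=True)
            self.to_vocab = nn.Linear(EMBED_DIM, VOCAB_SIZE)

        def forward(self, regions):  # regions: (batch, seq_len, REGION_DIM)
            hidden, _ = self.rnn(regions)
            # Returns (batch, seq_len, VOCAB_SIZE) vocabulary logits.
            return self.to_vocab(hidden)

    class Discriminator(nn.Module):
        """Judges the relevance between each word and its image region."""
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
            self.score = nn.Sequential(
                nn.Linear(EMBED_DIM + REGION_DIM, 512), nn.ReLU(),
                nn.Linear(512, 1),
            )

        def forward(self, words, regions):
            pair = torch.cat([self.embed(words), regions], dim=-1)
            # Returns (batch, seq_len, 1) relevance scores in [0, 1].
            return torch.sigmoid(self.score(pair))

    if __name__ == "__main__":
        # Smoke test on random data: trace tensor shapes through the loop.
        words = torch.randint(0, VOCAB_SIZE, (2, 7))
        regions = ImageGenerator()(words)
        logits = SentenceGenerator()(regions)
        relevance = Discriminator()(words, regions)
        print(regions.shape, logits.shape, relevance.shape)

In this sketch the adversarial signal would come from training the discriminator to score real word-region pairs above generated ones while the two generators learn to fool it; the actual losses and training schedule used in the paper are not given in the abstract.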