2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Paper Detail

Paper ID: IVMSP-33.4
Paper Title: AN IMPROVED DEEP RELATION NETWORK FOR ACTION RECOGNITION IN STILL IMAGES
Authors: Wei Wu, Jiale Yu, Inner Mongolia University, China
Session: IVMSP-33: Action Recognition
Location: Gather.Town
Session Time: Friday, 11 June, 14:00 - 14:45
Presentation Time: Friday, 11 June, 14:00 - 14:45
Presentation: Poster
Topic: Image, Video, and Multidimensional Signal Processing: [IVSMR] Image & Video Sensing, Modeling, and Representation
Abstract: Contextual information has been widely utilized in visual recognition tasks. This is especially true for action recognition, because contextual information, such as the objects interacting with the human and the scene where the action is performed, is inseparable from the action category. To this end, we propose an efficient relation module that combines Human-Object and Scene-Object relations for action recognition. Specifically, the Human-Object interaction submodule captures more accurate appearance and spatial relations to build human-object interaction pairs, while the Scene-Object interaction submodule learns the probability that each object is involved in the scene, which helps discover the key interaction pair. We conduct extensive experiments on the Stanford 40 and PASCAL VOC 2012 Action datasets to verify our model, and the experimental results show that our method achieves superior performance on both datasets. In particular, we obtain the best results on the Stanford 40 dataset compared with state-of-the-art methods.
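
The abstract describes a relation module with two submodules: one that scores human-object interaction pairs from appearance and spatial cues, and one that uses the scene to weight which objects matter. The sketch below is only an illustrative PyTorch reading of that idea, not the authors' implementation: the layer sizes, the box-based spatial encoding, the softmax gating, and the final fusion with the human feature are all assumptions, and the backbone features are assumed to be pre-extracted.

```python
# Hypothetical sketch of a Human-Object / Scene-Object relation module.
# All design details (dimensions, spatial encoding, gating, fusion) are
# assumptions made for illustration, not the paper's exact architecture.
import torch
import torch.nn as nn


class RelationModule(nn.Module):
    def __init__(self, feat_dim=2048, hidden_dim=512, num_classes=40):
        super().__init__()
        # Human-Object submodule: embeds each human-object pair from
        # concatenated appearance features plus a simple spatial encoding
        # of the two bounding boxes (8 coordinates).
        self.pair_embed = nn.Sequential(
            nn.Linear(2 * feat_dim + 8, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Scene-Object submodule: scores how likely each object is to be
        # involved in the given scene; the scores weight the pairs.
        self.scene_obj_gate = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(hidden_dim + feat_dim, num_classes)

    def forward(self, human_feat, human_box, obj_feats, obj_boxes, scene_feat):
        # human_feat: (B, D), human_box: (B, 4)
        # obj_feats: (B, N, D), obj_boxes: (B, N, 4), scene_feat: (B, D)
        B, N, D = obj_feats.shape
        h = human_feat.unsqueeze(1).expand(-1, N, -1)            # (B, N, D)
        spatial = torch.cat(
            [human_box.unsqueeze(1).expand(-1, N, -1), obj_boxes], dim=-1
        )                                                        # (B, N, 8)
        pair = self.pair_embed(torch.cat([h, obj_feats, spatial], dim=-1))

        # Scene-conditioned object probabilities act as attention over the
        # pairs, focusing the model on the key human-object interaction.
        s = scene_feat.unsqueeze(1).expand(-1, N, -1)
        gate = torch.softmax(
            self.scene_obj_gate(torch.cat([s, obj_feats], dim=-1)), dim=1
        )                                                        # (B, N, 1)
        relation = (gate * pair).sum(dim=1)                      # (B, hidden)

        # Fuse the aggregated relation feature with the human appearance
        # feature before classifying the action.
        return self.classifier(torch.cat([relation, human_feat], dim=-1))


# Usage with random tensors standing in for backbone features
# (batch of 2 images, 5 candidate objects each, 40 action classes).
if __name__ == "__main__":
    model = RelationModule()
    logits = model(
        torch.randn(2, 2048), torch.rand(2, 4),
        torch.randn(2, 5, 2048), torch.rand(2, 5, 4),
        torch.randn(2, 2048),
    )
    print(logits.shape)  # torch.Size([2, 40])
```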