
Electronic data

  • ECCV2022_Camera_Ready

    Accepted author manuscript, 3.69 MB, PDF document



IGFormer: Interaction Graph Transformer for Skeleton-based Human Interaction Recognition

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

E-pub ahead of print
Publication date: 27/10/2022
Host publication: European Conference on Computer Vision (ECCV)
Original language: English


Human interaction recognition is important in many applications. A crucial cue for recognizing an interaction is the set of interactive body parts. In this work, we propose a novel Interaction Graph Transformer (IGFormer) network for skeleton-based interaction recognition that models the interactive body parts as graphs. Specifically, IGFormer constructs interaction graphs according to the semantic and distance correlations between the interactive body parts, and enhances each person's representation by aggregating the information of the interactive body parts based on the learned graphs. Furthermore, we propose a Semantic Partition Module that transforms each human skeleton sequence into a Body-Part-Time sequence, which better captures the spatial and temporal information of the skeleton sequence for learning the graphs. Extensive experiments on three benchmark datasets demonstrate that our model outperforms the state of the art by a significant margin.