

A spatial attentive and temporal dilated (SATD) GCN for skeleton-based action recognition

Research output: Contribution to Journal/Magazine › Journal article › peer-review

  • J. Zhang
  • G. Ye
  • Z. Tu
  • Y. Qin
  • Q. Qin
  • J. Zhang
  • Jun Liu
Journal publication date: 31/03/2022
Journal: CAAI Transactions on Intelligence Technology
Issue number: 1
Volume: 7
Number of pages: 10
Pages (from-to): 46-55
Publication status: Published
Early online date: 17/03/2021
Original language: English

Abstract

Current studies have shown that the spatial-temporal graph convolutional network (ST-GCN) is effective for skeleton-based action recognition. However, in existing ST-GCN-based methods the temporal kernel size is usually fixed across all layers, which prevents them from fully exploiting temporal dependencies between discontinuous frames and from adapting to different sequence lengths. In addition, most of these methods use average pooling to obtain a global graph feature from the vertex features, which discards much of the fine-grained information needed for action classification. To address these issues, the authors propose a novel spatial attentive and temporal dilated graph convolutional network (SATD-GCN). It contains two key components: a spatial attention pooling (SAP) module and a temporal dilated graph convolution (TDGC) module. Specifically, the SAP module selects the human body joints that are most informative for action recognition via a self-attention mechanism, alleviating the influence of data redundancy and noise. The TDGC module effectively extracts temporal features at different time scales, which enlarges the temporal receptive field and makes the model more robust to different motion speeds and sequence lengths. Importantly, both the SAP module and the TDGC module can be easily integrated into ST-GCN-based models and significantly improve their performance. Extensive experiments on two large-scale benchmark datasets, NTU RGB+D and Kinetics-Skeleton, demonstrate that the proposed method achieves state-of-the-art performance for skeleton-based action recognition.
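The abstract describes two drop-in modules: attention-weighted pooling over joints, and temporal convolution with a dilation rate. The sketch below is a minimal, hypothetical PyTorch rendering of both ideas, assuming the common (N, C, T, V) skeleton feature layout (batch, channels, frames, joints); the class names, kernel size, dilation rate, and residual connection are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of the two ideas in the abstract; names and
# hyper-parameters are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalDilatedGC(nn.Module):
    """Temporal convolution with a configurable dilation rate, applied
    along the frame axis of a (N, C, T, V) skeleton feature tensor."""
    def __init__(self, channels, kernel_size=9, dilation=2):
        super().__init__()
        pad = (kernel_size - 1) * dilation // 2  # keep T unchanged
        self.conv = nn.Conv2d(channels, channels,
                              kernel_size=(kernel_size, 1),
                              dilation=(dilation, 1),
                              padding=(pad, 0))
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):  # x: (N, C, T, V)
        # Residual connection is an assumption for stable stacking.
        return F.relu(self.bn(self.conv(x)) + x)

class SpatialAttentionPool(nn.Module):
    """Self-attention over the V joints: replaces average pooling with a
    learned per-joint weighted sum so informative joints dominate."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # per-joint logit

    def forward(self, x):  # x: (N, C, T, V)
        attn = torch.softmax(self.score(x), dim=-1)  # (N, 1, T, V), sums to 1 over joints
        return (x * attn).sum(dim=-1)                # (N, C, T) pooled graph feature

# Toy usage: batch of 2 clips, 64 channels, 50 frames, 25 joints (NTU layout).
x = torch.randn(2, 64, 50, 25)
x = TemporalDilatedGC(64, dilation=2)(x)  # wider temporal receptive field
g = SpatialAttentionPool(64)(x)           # (2, 64, 50) attention-pooled feature
```

Because both modules preserve the (N, C, T, V) interface up to the final pooling step, they can in principle replace the fixed-kernel temporal convolution and the average-pooling stage of an existing ST-GCN stack, which is how the abstract frames their integration.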