SMAM: Self and Mutual Adaptive Matching for Skeleton-Based Few-Shot Action Recognition

Research output: Contribution to Journal/Magazine › Journal article › peer-review

  • Zhiheng Li
  • Xuyuan Gong
  • Ran Song
  • Peng Duan
  • Jun Liu
  • Wei Zhang
Journal publication date: 31/12/2023
Journal: IEEE Transactions on Image Processing
Volume: 32
Number of pages: 11
Pages (from-to): 392-402
Publication status: Published
Early online date: 7/12/2022
Original language: English

Abstract

This paper focuses on skeleton-based few-shot action recognition. Since a skeleton is essentially a sparse representation of human action, the feature maps extracted from it through a standard encoder network under the few-shot condition may not be sufficiently discriminative for action sequences that look partially similar to each other. To address this issue, we propose a self and mutual adaptive matching (SMAM) module that converts such feature maps into more discriminative feature vectors. Our method, named SMAM-Net, first leverages both the temporal information associated with each individual skeleton joint and the spatial relationships among joints for feature extraction. The SMAM module then adaptively measures the similarity between labeled and query samples and further carries out feature matching within the query set to distinguish similar skeletons of different action categories. Experimental results show that SMAM-Net outperforms other baselines on the large-scale NTU RGB+D 120 dataset in the tasks of one-shot and five-shot action recognition. We also report results on the smaller NTU RGB+D 60, SYSU and PKU-MMD datasets to demonstrate that our method is reliable and generalises well across datasets. Code and the pretrained SMAM-Net will be made publicly available.
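The abstract describes the SMAM module only at a high level. As a rough, hypothetical illustration of the matching idea (similarity between labeled support samples and query samples, plus an extra matching step within the query set), the PyTorch sketch below uses plain cosine-similarity attention. The function name `few_shot_match`, the attention-based query refinement, and the temperature `tau` are assumptions made for illustration; this is not the paper's actual SMAM formulation.

```python
import torch
import torch.nn.functional as F


def few_shot_match(support, support_labels, query, n_way, tau=10.0):
    """Classify query features against a labeled support set.

    support:        (n_way * k_shot, C) encoded support-set features
    support_labels: (n_way * k_shot,)   integer class labels in [0, n_way)
    query:          (n_query, C)        encoded query-set features
    Returns (n_query, n_way) class logits.
    """
    # Matching within the query set (hypothetical stand-in for the paper's
    # step): re-weight each query feature by attention over the other query
    # samples, so partially similar sequences exchange context.
    q = F.normalize(query, dim=-1)
    attn = torch.softmax(q @ q.t() * tau, dim=-1)   # (n_query, n_query)
    query_refined = attn @ query                    # (n_query, C)

    # Matching between labeled and query samples: cosine similarity of each
    # refined query feature against every support feature.
    s = F.normalize(support, dim=-1)
    qr = F.normalize(query_refined, dim=-1)
    sim = qr @ s.t()                                # (n_query, n_way * k_shot)

    # Aggregate per class: average similarity over the k shots of each class.
    logits = torch.stack(
        [sim[:, support_labels == c].mean(dim=-1) for c in range(n_way)],
        dim=-1,
    )
    return logits


if __name__ == "__main__":
    # Toy 5-way 1-shot episode with 128-D features.
    n_way, k_shot, n_query, dim = 5, 1, 10, 128
    support = torch.randn(n_way * k_shot, dim)
    labels = torch.arange(n_way).repeat_interleave(k_shot)
    query = torch.randn(n_query, dim)
    print(few_shot_match(support, labels, query, n_way).shape)  # (10, 5)
```

In a one-shot episode the per-class average reduces to nearest-neighbour matching against a single exemplar, which is where, per the abstract, a fixed similarity can fail on partially similar skeletons; the adaptive refinement step is what SMAM contributes on top of this baseline.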