Fusion of Multimodal Spatio-Temporal Features and 3D Deformable Convolution Based on Sign Language Recognition in Sensor Networks

Research output: Contribution to Journal/Magazine › Journal article › peer-review

  • Qian Zhou
  • Hui Li
  • Weizhi Meng
  • Hua Dai
  • Tianyu Zhou
  • Guineng Zheng
Article number: 4378
Journal publication date: 13/07/2025
Journal: Sensors
Issue number: 14
Volume: 25
Publication status: Published
Original language: English

Abstract

Sign language is a complex and dynamic visual language that requires the coordinated movement of various body parts, such as the hands, arms, and limbs, making it an ideal application domain for sensor networks to capture and interpret human gestures accurately. To address the intricate task of accurate and efficient sign language recognition (SLR) from raw videos, this study introduces a novel deep learning approach by devising a multimodal framework for SLR. Specifically, feature extraction models are built on two modalities: skeleton and RGB images. In this paper, we first propose a Multi-Stream Spatio-Temporal Graph Convolutional Network (MSGCN) that relies on three modules: a decoupling graph convolutional network, a self-emphasizing temporal convolutional network, and a spatio-temporal joint attention module. These modules are combined to capture the spatio-temporal information in multi-stream skeleton features. Second, we propose a 3D ResNet model based on deformable convolution (D-ResNet) to model complex spatial and temporal sequences in the raw images. Finally, a gating-mechanism-based Multi-Stream Fusion Module (MFM) is employed to merge the results of the two modalities. Extensive experiments are conducted on the public datasets AUTSL and WLASL, achieving competitive results compared with state-of-the-art systems.
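
To make the fusion step concrete, the sketch below shows one way a gating-based fusion of the two modality streams could be expressed in PyTorch. It is a minimal illustration under assumptions: the class name `GatedFusion`, the per-class sigmoid gate, and the tensor shapes are hypothetical and are not taken from the authors' MFM implementation.

```python
# A minimal sketch, assuming the gating-based Multi-Stream Fusion Module (MFM)
# combines per-class scores from the skeleton stream (MSGCN) and the RGB stream
# (D-ResNet) with a learned, input-dependent gate. Names and shapes are hypothetical.
import torch
import torch.nn as nn


class GatedFusion(nn.Module):
    """Fuse two modality score vectors with a learned per-class gate."""

    def __init__(self, num_classes: int):
        super().__init__()
        # The gate sees both streams' scores and predicts a mixing weight in [0, 1]
        # for each class.
        self.gate = nn.Sequential(
            nn.Linear(2 * num_classes, num_classes),
            nn.Sigmoid(),
        )

    def forward(self, skeleton_logits: torch.Tensor, rgb_logits: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([skeleton_logits, rgb_logits], dim=-1))
        # Convex combination of the two streams, weighted per class.
        return g * skeleton_logits + (1.0 - g) * rgb_logits


# Usage example with hypothetical shapes: a batch of 8 clips and 226 sign classes
# (the number of classes in AUTSL).
fusion = GatedFusion(num_classes=226)
fused = fusion(torch.randn(8, 226), torch.randn(8, 226))
print(fused.shape)  # torch.Size([8, 226])
```

A learned gate of this kind lets the network lean on the skeleton stream when pose cues are reliable and on the RGB stream otherwise, rather than averaging the two with fixed weights.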