

ConvNet-based performers attention and supervised contrastive learning for activity recognition

Research output: Contribution to Journal/Magazine › Journal article › peer-review

E-pub ahead of print
  • R.A. Hamad
  • Longzhi Yang
  • W.L. Woo
  • B. Wei
Journal publication date: 3/08/2022
Journal: Applied Intelligence
Publication status: E-pub ahead of print
Early online date: 3/08/2022
Original language: English

Abstract

Human activity recognition based on generated sensor data plays a major role in a large number of applications, such as healthcare monitoring and surveillance systems. Yet accurately recognizing human activities remains a challenging and active research problem, because people tend to perform daily activities in varied and multitasking ways. Existing recurrent approaches to human activity recognition achieve reasonable results but have several drawbacks: they cannot process data in parallel, and they require more memory and higher computational cost. Convolutional neural networks process data in parallel, but they break the ordering of the input data, which is essential for building an effective model of human activity recognition. To overcome these challenges, this study proposes a causal convolution based on performer attention and supervised contrastive learning that entirely foregoes recurrent architectures, efficiently maintains the ordering of human daily activities, and focuses on the most important timesteps of the sensor data. Supervised contrastive learning is integrated to learn a discriminative representation of human activities and enhance predictive performance. The proposed network is extensively evaluated on multiple human activity datasets, covering both wearable sensor data and smart home environment data. Experiments on three wearable sensor datasets and five public smart home datasets show that the proposed network achieves better results and reduces training time compared with existing state-of-the-art methods and basic temporal models. © 2022, The Author(s).
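The abstract's key point about causal convolution is that, unlike a standard "same" convolution centred on each timestep, left-padding the input makes each output depend only on the current and past samples, preserving the temporal ordering of the sensor stream. The sketch below is a minimal NumPy illustration of that idea (not the paper's implementation; the function name and its parameters are our own):

```python
import numpy as np

def causal_conv1d(x, kernel, dilation=1):
    """1-D causal convolution: out[t] depends only on x[:t+1].

    Left-padding by (k - 1) * dilation zeros means no future sample
    leaks into the output, which preserves temporal ordering --
    a 'same' convolution centred on t would instead look ahead.
    """
    k = len(kernel)
    pad = (k - 1) * dilation
    x_padded = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    out = np.empty(len(x), dtype=float)
    for t in range(len(x)):
        # taps at times t, t - dilation, ..., t - (k-1)*dilation
        taps = x_padded[t + pad - np.arange(k) * dilation]
        out[t] = taps @ np.asarray(kernel, dtype=float)
    return out

# Example: a two-tap average of the current and previous sample.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = causal_conv1d(x, [0.5, 0.5])  # -> [0.5, 1.5, 2.5, 3.5]
```

Increasing `dilation` widens the receptive field over past timesteps without adding parameters, which is how stacked causal convolutions can cover long activity sequences while still processing all timesteps in parallel.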