
Electronic data

  • FINAL_VERSION (1)

    Rights statement: ©2022 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 1.17 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI:


Cross-Domain Activity Recognition Using Shared Representation in Sensor Data

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published
  • Rebeen Ali Hamad
  • Longzhi Yang
  • Wai Lok Woo
  • Bo Wei
Journal publication date: 1/07/2022
Journal: IEEE Sensors Journal
Issue number: 13
Volume: 22
Number of pages: 12
Pages (from-to): 13273-13284
Publication status: Published
Early online date: 2/06/22
Original language: English

Abstract

Existing models for sensor-based human activity recognition report state-of-the-art performance. Most of these models rely on single-domain learning, in which a separate model must be trained for each domain. However, generating adequate labelled data and training a model for each domain separately is often time-consuming and computationally expensive. Moreover, deploying multiple domain-wise models is not scalable, as it obscures domain distinctions, introduces extra computational costs, and limits the usefulness of training data. To mitigate this, we propose a multi-domain learning network that transfers knowledge across different but related domains and alleviates isolated learning paradigms by using a shared representation. The proposed network consists of two identical causal convolutional sub-networks that are projected onto a shared representation followed by a linear attention mechanism. The network can be trained using the full training dataset of the source domain and a restricted-size training dataset from the target domain, reducing the need for large labelled training datasets. It processes the source and target domains jointly to learn powerful and mutually complementary features that boost performance in both domains. Across six real-world sensor activity datasets, the proposed multi-domain learning network outperforms existing methods while using only 50% of the labelled data. This confirms the efficacy of the proposed approach as a generic model that learns human activities from different but related domains jointly, reducing the number of required models and thus improving system efficiency.
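The abstract describes the architecture only at a high level: two identical causal convolutional sub-networks, a projection to a shared representation, a linear attention mechanism, and joint training on the source domain plus a reduced-size target set. The sketch below is one illustrative reading of that description, not the authors' released code; it assumes a PyTorch implementation, and the class names, layer sizes, kernel width, and the exact form of the attention and pooling are assumptions.

# Illustrative sketch of the described multi-domain network (assumptions, not the paper's exact configuration).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CausalConvBranch(nn.Module):
    # 1-D causal convolution stack over a (batch, channels, time) sensor window.
    def __init__(self, in_channels, hidden=64, kernel_size=3):
        super().__init__()
        self.pad = kernel_size - 1  # left-pad so each output step sees only past samples
        self.conv1 = nn.Conv1d(in_channels, hidden, kernel_size)
        self.conv2 = nn.Conv1d(hidden, hidden, kernel_size)

    def forward(self, x):
        x = torch.relu(self.conv1(F.pad(x, (self.pad, 0))))
        x = torch.relu(self.conv2(F.pad(x, (self.pad, 0))))
        return x  # (batch, hidden, time)


class MultiDomainNet(nn.Module):
    def __init__(self, src_channels, tgt_channels, n_classes_src, n_classes_tgt, hidden=64):
        super().__init__()
        self.src_branch = CausalConvBranch(src_channels, hidden)  # source-domain sub-network
        self.tgt_branch = CausalConvBranch(tgt_channels, hidden)  # identical target-domain sub-network
        self.shared_proj = nn.Linear(hidden, hidden)              # projection to the shared representation
        self.attn_score = nn.Linear(hidden, 1)                    # linear attention scores over time steps
        self.head_src = nn.Linear(hidden, n_classes_src)
        self.head_tgt = nn.Linear(hidden, n_classes_tgt)

    def _classify(self, branch, head, x):
        h = branch(x).transpose(1, 2)                # (batch, time, hidden)
        h = self.shared_proj(h)                      # map both domains into the shared space
        w = torch.softmax(self.attn_score(h), dim=1) # attention weights over time
        ctx = (w * h).sum(dim=1)                     # attention-weighted temporal pooling
        return head(ctx)

    def forward(self, x_src, x_tgt):
        # Source and target batches are processed jointly through the shared layers.
        return (self._classify(self.src_branch, self.head_src, x_src),
                self._classify(self.tgt_branch, self.head_tgt, x_tgt))

Under this reading, a joint training step would sum the per-domain classification losses from both heads, so the shared projection and attention parameters receive gradients from the full source training set and the restricted-size target set together; this is one plausible way to obtain the mutually complementary features the abstract refers to.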
