Existing sensor-based models for human activity recognition report state-of-the-art performance. Most of these models follow a single-domain learning paradigm, in which a separate model must be trained for each domain. However, generating adequate labelled data and training a learning model for each domain separately is often time-consuming and computationally expensive. Moreover, deploying multiple domain-wise models is not scalable, as it obscures domain distinctions, introduces additional computational cost, and limits the usefulness of the training data. To mitigate this, we propose a multi-domain learning network that transfers knowledge across different but related domains and alleviates the isolated learning paradigm through a shared representation. The proposed network consists of two identical causal convolutional sub-networks that are projected to a shared representation, followed by a linear attention mechanism. The network can be trained using the full training dataset of the source domain together with a dataset of restricted size from the target domain, reducing the need for large labelled training datasets. It processes the source and target domains jointly to learn powerful, mutually complementary features that boost performance in both domains. On six real-world sensor activity datasets, the proposed multi-domain learning network outperforms existing methods while using only 50% of the labelled data. This confirms the efficacy of the proposed approach as a generic model that learns human activities from different but related domains jointly, reducing the number of required models and thus improving system efficiency.
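The abstract describes the architecture only at a high level. The following is a minimal PyTorch sketch of one plausible realisation, assuming dilated 1-D causal convolutions for the two identical sub-networks, a shared linear projection, a simple additive form of linear attention over time steps, and per-domain classification heads. All layer sizes, dilation rates, and module names are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CausalConvBlock(nn.Module):
    """1-D causal convolution: left-pads the input so each output step
    depends only on current and past samples (assumed block design)."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                      # x: (batch, channels, time)
        x = F.pad(x, (self.pad, 0))            # pad on the left only
        return torch.relu(self.conv(x))


class MultiDomainHARNet(nn.Module):
    """Two identical causal convolutional sub-networks projected to a
    shared representation, followed by a linear attention mechanism and
    per-domain classifiers (hypothetical configuration)."""
    def __init__(self, in_ch, hidden=64, shared_dim=128,
                 n_classes_src=6, n_classes_tgt=6):
        super().__init__()

        def subnet():                          # identical branch topology
            return nn.Sequential(
                CausalConvBlock(in_ch, hidden, dilation=1),
                CausalConvBlock(hidden, hidden, dilation=2),
                CausalConvBlock(hidden, hidden, dilation=4),
            )

        self.src_net = subnet()                # source-domain branch
        self.tgt_net = subnet()                # target-domain branch
        self.project = nn.Linear(hidden, shared_dim)   # shared projection
        self.attn_score = nn.Linear(shared_dim, 1)     # linear attention scores
        self.head_src = nn.Linear(shared_dim, n_classes_src)
        self.head_tgt = nn.Linear(shared_dim, n_classes_tgt)

    def _attend(self, h):                      # h: (batch, time, shared_dim)
        w = torch.softmax(self.attn_score(h), dim=1)   # weights over time
        return (w * h).sum(dim=1)              # attention-pooled features

    def forward(self, x_src, x_tgt):           # inputs: (batch, channels, time)
        z_src = self.project(self.src_net(x_src).transpose(1, 2))
        z_tgt = self.project(self.tgt_net(x_tgt).transpose(1, 2))
        return self.head_src(self._attend(z_src)), self.head_tgt(self._attend(z_tgt))


# Example usage with random 3-channel (e.g. accelerometer) windows of 128 steps:
net = MultiDomainHARNet(in_ch=3)
logits_src, logits_tgt = net(torch.randn(8, 3, 128), torch.randn(8, 3, 128))
```

Because the projection layer is shared between the two branches, both domains are mapped into the same feature space, which is one straightforward way to realise the shared representation and joint processing the abstract refers to.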
©2022 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.