Electronic data

  • 2012.11866

    Rights statement: ©2022 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 3.13 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: 10.1109/TPAMI.2022.3183112

Human Action Recognition from Various Data Modalities: A Review

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Human Action Recognition from Various Data Modalities: A Review. / Sun, Zehua; Ke, Qiuhong; Rahmani, Hossein et al.
In: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 45, No. 3, 3, 01.03.2023, p. 3200-3225.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Harvard

Sun, Z, Ke, Q, Rahmani, H, Bennamoun, M, Wang, G & Liu, J 2023, 'Human Action Recognition from Various Data Modalities: A Review', IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 3, 3, pp. 3200-3225. https://doi.org/10.1109/TPAMI.2022.3183112

APA

Sun, Z., Ke, Q., Rahmani, H., Bennamoun, M., Wang, G., & Liu, J. (2023). Human Action Recognition from Various Data Modalities: A Review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(3), 3200-3225. Article 3. https://doi.org/10.1109/TPAMI.2022.3183112

Vancouver

Sun Z, Ke Q, Rahmani H, Bennamoun M, Wang G, Liu J. Human Action Recognition from Various Data Modalities: A Review. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2023 Mar 1;45(3):3200-3225. 3. Epub 2022 Jun 14. doi: 10.1109/TPAMI.2022.3183112

Author

Sun, Zehua; Ke, Qiuhong; Rahmani, Hossein et al. / Human Action Recognition from Various Data Modalities: A Review. In: IEEE Transactions on Pattern Analysis and Machine Intelligence. 2023; Vol. 45, No. 3. pp. 3200-3225.

Bibtex

@article{18cc32c99c334b1586cda4fc4f909aae,
title = "Human Action Recognition from Various Data Modalities: A Review",
abstract = "Human Action Recognition (HAR) aims to understand human behavior and assign a label to each action. It has a wide range of applications, and therefore has been attracting increasing attention in the field of computer vision. Human actions can be represented using various data modalities, such as RGB, skeleton, depth, infrared, point cloud, event stream, audio, acceleration, radar, and WiFi signal, which encode different sources of useful yet distinct information and have various advantages depending on the application scenarios. Consequently, lots of existing works have attempted to investigate different types of approaches for HAR using various modalities. In this paper, we present a comprehensive survey of recent progress in deep learning methods for HAR based on the type of input data modality. Specifically, we review the current mainstream deep learning methods for single data modalities and multiple data modalities, including the fusion-based and the co-learning-based frameworks. We also present comparative results on several benchmark datasets for HAR, together with insightful observations and inspiring future research directions.",
author = "Zehua Sun and Qiuhong Ke and Hossein Rahmani and Mohammed Bennamoun and Gang Wang and Jun Liu",
note = "{\textcopyright}2022 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.",
year = "2023",
month = mar,
day = "1",
doi = "10.1109/TPAMI.2022.3183112",
language = "English",
volume = "45",
pages = "3200--3225",
journal = "IEEE Transactions on Pattern Analysis and Machine Intelligence",
issn = "0162-8828",
publisher = "IEEE Computer Society",
number = "3",

}

RIS

TY - JOUR

T1 - Human Action Recognition from Various Data Modalities

T2 - A Review

AU - Sun, Zehua

AU - Ke, Qiuhong

AU - Rahmani, Hossein

AU - Bennamoun, Mohammed

AU - Wang, Gang

AU - Liu, Jun

N1 - ©2022 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

PY - 2023/3/1

Y1 - 2023/3/1

N2 - Human Action Recognition (HAR) aims to understand human behavior and assign a label to each action. It has a wide range of applications, and therefore has been attracting increasing attention in the field of computer vision. Human actions can be represented using various data modalities, such as RGB, skeleton, depth, infrared, point cloud, event stream, audio, acceleration, radar, and WiFi signal, which encode different sources of useful yet distinct information and have various advantages depending on the application scenarios. Consequently, lots of existing works have attempted to investigate different types of approaches for HAR using various modalities. In this paper, we present a comprehensive survey of recent progress in deep learning methods for HAR based on the type of input data modality. Specifically, we review the current mainstream deep learning methods for single data modalities and multiple data modalities, including the fusion-based and the co-learning-based frameworks. We also present comparative results on several benchmark datasets for HAR, together with insightful observations and inspiring future research directions.

AB - Human Action Recognition (HAR) aims to understand human behavior and assign a label to each action. It has a wide range of applications, and therefore has been attracting increasing attention in the field of computer vision. Human actions can be represented using various data modalities, such as RGB, skeleton, depth, infrared, point cloud, event stream, audio, acceleration, radar, and WiFi signal, which encode different sources of useful yet distinct information and have various advantages depending on the application scenarios. Consequently, lots of existing works have attempted to investigate different types of approaches for HAR using various modalities. In this paper, we present a comprehensive survey of recent progress in deep learning methods for HAR based on the type of input data modality. Specifically, we review the current mainstream deep learning methods for single data modalities and multiple data modalities, including the fusion-based and the co-learning-based frameworks. We also present comparative results on several benchmark datasets for HAR, together with insightful observations and inspiring future research directions.

U2 - 10.1109/TPAMI.2022.3183112

DO - 10.1109/TPAMI.2022.3183112

M3 - Journal article

VL - 45

SP - 3200

EP - 3225

JO - IEEE Transactions on Pattern Analysis and Machine Intelligence

JF - IEEE Transactions on Pattern Analysis and Machine Intelligence

SN - 0162-8828

IS - 3

M1 - 3

ER -
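
The abstract above mentions fusion-based multimodal frameworks. As a purely illustrative aid (not code from the paper), the sketch below shows score-level (late) fusion of two hypothetical modality streams, RGB and skeleton; the encoder feature sizes, class count, and fusion weight are all assumed for the example.

import torch
import torch.nn as nn

NUM_CLASSES = 60  # hypothetical label-set size for the example

class RGBEncoder(nn.Module):
    """Toy stand-in for an RGB video backbone (e.g. a 3D CNN)."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.head = nn.Linear(feat_dim, NUM_CLASSES)

    def forward(self, x):          # x: (batch, feat_dim) pre-extracted features
        return self.head(x)        # per-class logits

class SkeletonEncoder(nn.Module):
    """Toy stand-in for a skeleton backbone (e.g. a graph convolutional net)."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.head = nn.Linear(feat_dim, NUM_CLASSES)

    def forward(self, x):
        return self.head(x)

def late_fusion(rgb_logits, skel_logits, w=0.5):
    """Score-level fusion: weighted average of per-modality softmax scores."""
    rgb_p = rgb_logits.softmax(dim=-1)
    skel_p = skel_logits.softmax(dim=-1)
    return w * rgb_p + (1.0 - w) * skel_p

# Usage: fuse predictions from the two modality streams for a batch of 4 clips.
rgb_net, skel_net = RGBEncoder(), SkeletonEncoder()
rgb_feat = torch.randn(4, 512)    # dummy RGB features
skel_feat = torch.randn(4, 256)   # dummy skeleton features for the same clips
scores = late_fusion(rgb_net(rgb_feat), skel_net(skel_feat))
pred = scores.argmax(dim=-1)      # fused action label per clip

Late fusion is only one of the strategies the survey covers; feature-level fusion and the co-learning-based frameworks named in the abstract combine modalities earlier in the pipeline.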