
Electronic data

  • 2012.11866v2

    Final published version, 4.35 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License



Human Action Recognition from Various Data Modalities: A Review

Research output: Contribution to journal › Journal article › peer-review

Published

Standard

Human Action Recognition from Various Data Modalities: A Review. / Sun, Zehua; Liu, Jun; Ke, Qiuhong; Rahmani, Hossein; Bennamoun, Mohammed; Wang, Gang.

In: arXiv, 22.12.2020.


Harvard

Sun, Z, Liu, J, Ke, Q, Rahmani, H, Bennamoun, M & Wang, G 2020, 'Human Action Recognition from Various Data Modalities: A Review', arXiv.

APA

Sun, Z., Liu, J., Ke, Q., Rahmani, H., Bennamoun, M., & Wang, G. (2020). Human Action Recognition from Various Data Modalities: A Review. arXiv.

Vancouver

Sun Z, Liu J, Ke Q, Rahmani H, Bennamoun M, Wang G. Human Action Recognition from Various Data Modalities: A Review. arXiv. 2020 Dec 22.

Author

Sun, Zehua ; Liu, Jun ; Ke, Qiuhong ; Rahmani, Hossein ; Bennamoun, Mohammed ; Wang, Gang. / Human Action Recognition from Various Data Modalities: A Review. In: arXiv. 2020.

Bibtex

@article{bd38e25daa2f4522a778d436f0dc9b1b,
title = "Human Action Recognition from Various Data Modalities: A Review",
abstract = "Human Action Recognition (HAR), aiming to understand human behaviors and then assign category labels, has a wide range of applications, and thus has been attracting increasing attention in the field of computer vision. Generally, human actions can be represented using various data modalities, such as RGB, skeleton, depth, infrared sequence, point cloud, event stream, audio, acceleration, radar, and WiFi, etc., which encode different sources of useful yet distinct information and have various advantages and application scenarios. Consequently, lots of existing works have attempted to investigate different types of approaches for HAR using various modalities. In this paper, we give a comprehensive survey for HAR from the perspective of the input data modalities. Specifically, we review both the hand-crafted feature-based and deep learning-based methods for single data modalities, and also review the methods based on multiple modalities, including the fusion-based frameworks and the co-learning-based approaches. The current benchmark datasets for HAR are also introduced. Finally, we discuss some potentially important research directions in this area.",
keywords = "cs.CV",
author = "Zehua Sun and Jun Liu and Qiuhong Ke and Hossein Rahmani and Mohammed Bennamoun and Gang Wang",
year = "2020",
month = dec,
day = "22",
language = "English",
journal = "arXiv",
issn = "2331-8422",
}

RIS

TY - JOUR

T1 - Human Action Recognition from Various Data Modalities

T2 - A Review

AU - Sun, Zehua

AU - Liu, Jun

AU - Ke, Qiuhong

AU - Rahmani, Hossein

AU - Bennamoun, Mohammed

AU - Wang, Gang

PY - 2020/12/22

Y1 - 2020/12/22

N2 - Human Action Recognition (HAR), aiming to understand human behaviors and then assign category labels, has a wide range of applications, and thus has been attracting increasing attention in the field of computer vision. Generally, human actions can be represented using various data modalities, such as RGB, skeleton, depth, infrared sequence, point cloud, event stream, audio, acceleration, radar, and WiFi, etc., which encode different sources of useful yet distinct information and have various advantages and application scenarios. Consequently, lots of existing works have attempted to investigate different types of approaches for HAR using various modalities. In this paper, we give a comprehensive survey for HAR from the perspective of the input data modalities. Specifically, we review both the hand-crafted feature-based and deep learning-based methods for single data modalities, and also review the methods based on multiple modalities, including the fusion-based frameworks and the co-learning-based approaches. The current benchmark datasets for HAR are also introduced. Finally, we discuss some potentially important research directions in this area.

AB - Human Action Recognition (HAR), aiming to understand human behaviors and then assign category labels, has a wide range of applications, and thus has been attracting increasing attention in the field of computer vision. Generally, human actions can be represented using various data modalities, such as RGB, skeleton, depth, infrared sequence, point cloud, event stream, audio, acceleration, radar, and WiFi, etc., which encode different sources of useful yet distinct information and have various advantages and application scenarios. Consequently, lots of existing works have attempted to investigate different types of approaches for HAR using various modalities. In this paper, we give a comprehensive survey for HAR from the perspective of the input data modalities. Specifically, we review both the hand-crafted feature-based and deep learning-based methods for single data modalities, and also review the methods based on multiple modalities, including the fusion-based frameworks and the co-learning-based approaches. The current benchmark datasets for HAR are also introduced. Finally, we discuss some potentially important research directions in this area.

KW - cs.CV

M3 - Journal article

JO - arXiv

JF - arXiv

SN - 2331-8422

ER -