Electronic data

  • RahmaniandBennamoun_ICCV2017

    Accepted author manuscript, 3.95 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License


Learning action recognition model from depth and skeleton videos

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Chapter

Published

Standard

Learning action recognition model from depth and skeleton videos. / Rahmani, Hossein; Bennamoun, Mohammed.

Proceedings of the IEEE International Conference on Computer Vision. Institute of Electrical and Electronics Engineers Inc., 2017. p. 5833-5842 (Proceedings of the IEEE International Conference on Computer Vision; Vol. 2017-October).

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Chapter

Harvard

Rahmani, H & Bennamoun, M 2017, Learning action recognition model from depth and skeleton videos. in Proceedings of the IEEE International Conference on Computer Vision. Proceedings of the IEEE International Conference on Computer Vision, vol. 2017-October, Institute of Electrical and Electronics Engineers Inc., pp. 5833-5842. https://doi.org/10.1109/ICCV.2017.621

APA

Rahmani, H., & Bennamoun, M. (2017). Learning action recognition model from depth and skeleton videos. In Proceedings of the IEEE International Conference on Computer Vision (pp. 5833-5842). (Proceedings of the IEEE International Conference on Computer Vision; Vol. 2017-October). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICCV.2017.621

Vancouver

Rahmani H, Bennamoun M. Learning action recognition model from depth and skeleton videos. In Proceedings of the IEEE International Conference on Computer Vision. Institute of Electrical and Electronics Engineers Inc. 2017. p. 5833-5842. (Proceedings of the IEEE International Conference on Computer Vision). doi: 10.1109/ICCV.2017.621

Author

Rahmani, Hossein ; Bennamoun, Mohammed. / Learning action recognition model from depth and skeleton videos. Proceedings of the IEEE International Conference on Computer Vision. Institute of Electrical and Electronics Engineers Inc., 2017. pp. 5833-5842 (Proceedings of the IEEE International Conference on Computer Vision).

Bibtex

@inbook{d795c6d624c249988c426fc117c367a3,
title = "Learning action recognition model from depth and skeleton videos",
abstract = "Depth sensors open up possibilities of dealing with the human action recognition problem by providing 3D human skeleton data and depth images of the scene. Analysis of human actions based on 3D skeleton data has become popular recently, due to its robustness and view-invariant representation. However, the skeleton alone is insufficient to distinguish actions which involve human-object interactions. In this paper, we propose a deep model which efficiently models human-object interactions and intra-class variations under viewpoint changes. First, a human body-part model is introduced to transfer the depth appearances of body-parts to a shared view-invariant space. Second, an end-to-end learning framework is proposed which is able to effectively combine the view-invariant body-part representation from skeletal and depth images, and learn the relations between the human body-parts and the environmental objects, the interactions between different human body-parts, and the temporal structure of human actions. We have evaluated the performance of our proposed model against 15 existing techniques on two large benchmark human action recognition datasets including NTU RGB+D and UWA3DII. The experimental results show that our technique provides a significant improvement over state-of-the-art methods.",
author = "Hossein Rahmani and Mohammed Bennamoun",
year = "2017",
month = dec,
day = "22",
doi = "10.1109/ICCV.2017.621",
language = "English",
isbn = "9781538610329",
series = "Proceedings of the IEEE International Conference on Computer Vision",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
pages = "5833--5842",
booktitle = "Proceedings of the IEEE International Conference on Computer Vision",
address = "United States",
}

RIS

TY - CHAP

T1 - Learning action recognition model from depth and skeleton videos

AU - Rahmani, Hossein

AU - Bennamoun, Mohammed

PY - 2017/12/22

Y1 - 2017/12/22

N2 - Depth sensors open up possibilities of dealing with the human action recognition problem by providing 3D human skeleton data and depth images of the scene. Analysis of human actions based on 3D skeleton data has become popular recently, due to its robustness and view-invariant representation. However, the skeleton alone is insufficient to distinguish actions which involve human-object interactions. In this paper, we propose a deep model which efficiently models human-object interactions and intra-class variations under viewpoint changes. First, a human body-part model is introduced to transfer the depth appearances of body-parts to a shared view-invariant space. Second, an end-to-end learning framework is proposed which is able to effectively combine the view-invariant body-part representation from skeletal and depth images, and learn the relations between the human body-parts and the environmental objects, the interactions between different human body-parts, and the temporal structure of human actions. We have evaluated the performance of our proposed model against 15 existing techniques on two large benchmark human action recognition datasets including NTU RGB+D and UWA3DII. The experimental results show that our technique provides a significant improvement over state-of-the-art methods.

AB - Depth sensors open up possibilities of dealing with the human action recognition problem by providing 3D human skeleton data and depth images of the scene. Analysis of human actions based on 3D skeleton data has become popular recently, due to its robustness and view-invariant representation. However, the skeleton alone is insufficient to distinguish actions which involve human-object interactions. In this paper, we propose a deep model which efficiently models human-object interactions and intra-class variations under viewpoint changes. First, a human body-part model is introduced to transfer the depth appearances of body-parts to a shared view-invariant space. Second, an end-to-end learning framework is proposed which is able to effectively combine the view-invariant body-part representation from skeletal and depth images, and learn the relations between the human body-parts and the environmental objects, the interactions between different human body-parts, and the temporal structure of human actions. We have evaluated the performance of our proposed model against 15 existing techniques on two large benchmark human action recognition datasets including NTU RGB+D and UWA3DII. The experimental results show that our technique provides a significant improvement over state-of-the-art methods.

U2 - 10.1109/ICCV.2017.621

DO - 10.1109/ICCV.2017.621

M3 - Chapter

SN - 9781538610329

T3 - Proceedings of the IEEE International Conference on Computer Vision

SP - 5833

EP - 5842

BT - Proceedings of the IEEE International Conference on Computer Vision

PB - Institute of Electrical and Electronics Engineers Inc.

ER -