
Electronic data

  • 1602.00828.pdf

    Accepted author manuscript, 8.05 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: https://doi.org/10.1109/TPAMI.2017.2691768


Learning a deep model for human action recognition from novel viewpoints

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Learning a deep model for human action recognition from novel viewpoints. / Rahmani, Hossein; Mian, Ajmal; Shah, Mubarak.
In: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 40, No. 3, 01.03.2018, p. 667-681.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Harvard

Rahmani, H, Mian, A & Shah, M 2018, 'Learning a deep model for human action recognition from novel viewpoints', IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 3, pp. 667-681. https://doi.org/10.1109/TPAMI.2017.2691768

APA

Rahmani, H., Mian, A., & Shah, M. (2018). Learning a deep model for human action recognition from novel viewpoints. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(3), 667-681. https://doi.org/10.1109/TPAMI.2017.2691768

Vancouver

Rahmani H, Mian A, Shah M. Learning a deep model for human action recognition from novel viewpoints. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2018 Mar 1;40(3):667-681. doi: 10.1109/TPAMI.2017.2691768

Author

Rahmani, Hossein ; Mian, Ajmal ; Shah, Mubarak. / Learning a deep model for human action recognition from novel viewpoints. In: IEEE Transactions on Pattern Analysis and Machine Intelligence. 2018 ; Vol. 40, No. 3. pp. 667-681.

Bibtex

@article{2ac5e786e4884c6bbdfb7f7ad4244359,
title = "Learning a deep model for human action recognition from novel viewpoints",
abstract = "Recognizing human actions from unknown and unseen (novel) views is a challenging problem. We propose a Robust Non-Linear Knowledge Transfer Model (R-NKTM) for human action recognition from novel views. The proposed R-NKTM is a deep fully-connected neural network that transfers knowledge of human actions from any unknown view to a shared high-level virtual view by finding a non-linear virtual path that connects the views. The R-NKTM is learned from dense trajectories of synthetic 3D human models fitted to real motion capture data and generalizes to real videos of human actions. The strength of our technique is that we learn a single R-NKTM for all actions and all viewpoints for knowledge transfer of any real human action video without the need for re-training or fine-tuning the model. Thus, R-NKTM can efficiently scale to incorporate new action classes. R-NKTM is learned with dummy labels and does not require knowledge of the camera viewpoint at any stage. Experiments on three benchmark cross-view human action datasets show that our method outperforms existing state-of-the-art.",
author = "Hossein Rahmani and Ajmal Mian and Mubarak Shah",
note = "{\textcopyright}2018IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.",
year = "2018",
month = mar,
day = "1",
doi = "10.1109/TPAMI.2017.2691768",
language = "English",
volume = "40",
pages = "667--681",
journal = "IEEE Transactions on Pattern Analysis and Machine Intelligence",
issn = "0162-8828",
publisher = "IEEE Computer Society",
number = "3",

}

RIS

TY - JOUR

T1 - Learning a deep model for human action recognition from novel viewpoints

AU - Rahmani, Hossein

AU - Mian, Ajmal

AU - Shah, Mubarak

N1 - © 2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

PY - 2018/3/1

Y1 - 2018/3/1

N2 - Recognizing human actions from unknown and unseen (novel) views is a challenging problem. We propose a Robust Non-Linear Knowledge Transfer Model (R-NKTM) for human action recognition from novel views. The proposed R-NKTM is a deep fully-connected neural network that transfers knowledge of human actions from any unknown view to a shared high-level virtual view by finding a non-linear virtual path that connects the views. The R-NKTM is learned from dense trajectories of synthetic 3D human models fitted to real motion capture data and generalizes to real videos of human actions. The strength of our technique is that we learn a single R-NKTM for all actions and all viewpoints for knowledge transfer of any real human action video without the need for re-training or fine-tuning the model. Thus, R-NKTM can efficiently scale to incorporate new action classes. R-NKTM is learned with dummy labels and does not require knowledge of the camera viewpoint at any stage. Experiments on three benchmark cross-view human action datasets show that our method outperforms existing state-of-the-art.

AB - Recognizing human actions from unknown and unseen (novel) views is a challenging problem. We propose a Robust Non-Linear Knowledge Transfer Model (R-NKTM) for human action recognition from novel views. The proposed R-NKTM is a deep fully-connected neural network that transfers knowledge of human actions from any unknown view to a shared high-level virtual view by finding a non-linear virtual path that connects the views. The R-NKTM is learned from dense trajectories of synthetic 3D human models fitted to real motion capture data and generalizes to real videos of human actions. The strength of our technique is that we learn a single R-NKTM for all actions and all viewpoints for knowledge transfer of any real human action video without the need for re-training or fine-tuning the model. Thus, R-NKTM can efficiently scale to incorporate new action classes. R-NKTM is learned with dummy labels and does not require knowledge of the camera viewpoint at any stage. Experiments on three benchmark cross-view human action datasets show that our method outperforms existing state-of-the-art.

U2 - 10.1109/TPAMI.2017.2691768

DO - 10.1109/TPAMI.2017.2691768

M3 - Journal article

VL - 40

SP - 667

EP - 681

JO - IEEE Transactions on Pattern Analysis and Machine Intelligence

JF - IEEE Transactions on Pattern Analysis and Machine Intelligence

SN - 0162-8828

IS - 3

ER -
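
The abstract describes the R-NKTM as a deep fully-connected network that maps dense-trajectory descriptors from any unknown viewpoint into a shared high-level "virtual view". As a rough illustration of that idea only (not the authors' implementation: the layer widths, descriptor dimensionality, and all names below are hypothetical, and the random weights stand in for parameters that would be learned from the synthetic 3D training data described in the abstract), here is a minimal sketch in Python/NumPy:

    # Sketch of the knowledge-transfer idea from the abstract: a stack of
    # fully-connected layers maps a view-dependent dense-trajectory descriptor
    # to a shared, view-invariant representation. All sizes are hypothetical;
    # this is NOT the published R-NKTM implementation.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical dimensions: 2000-D input descriptor, three hidden layers.
    DIMS = [2000, 1024, 512, 256]

    # Random (untrained) weights; in the paper these would be learned from
    # dense trajectories of synthetic 3D human models seen from many views.
    weights = [rng.standard_normal((m, n)) * np.sqrt(2.0 / m)
               for m, n in zip(DIMS[:-1], DIMS[1:])]
    biases = [np.zeros(n) for n in DIMS[1:]]

    def to_virtual_view(x):
        """Map a view-dependent descriptor into the shared 'virtual view'."""
        h = x
        for W, b in zip(weights, biases):
            h = np.maximum(h @ W + b, 0.0)      # fully-connected layer + ReLU
        return h / (np.linalg.norm(h) + 1e-12)  # unit-normalize for comparison

    # Usage: once trained, descriptors of the same action filmed from two
    # different viewpoints should land close together in the shared space.
    view_a = rng.standard_normal(DIMS[0])
    view_b = rng.standard_normal(DIMS[0])
    similarity = to_virtual_view(view_a) @ to_virtual_view(view_b)
    print(f"cosine similarity in virtual view: {similarity:.3f}")

Because a single such network is shared across all actions and viewpoints, new action classes can be recognized by comparing (or classifying) their virtual-view representations without retraining the transfer model, which is the scalability property the abstract highlights.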