Learning a non-linear knowledge transfer model for cross-view action recognition

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Learning a non-linear knowledge transfer model for cross-view action recognition. / Rahmani, Hossein.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2015. p. 2458-2466.


Harvard

Rahmani, H 2015, Learning a non-linear knowledge transfer model for cross-view action recognition. in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE, pp. 2458-2466. https://doi.org/10.1109/CVPR.2015.7298860

APA

Rahmani, H. (2015). Learning a non-linear knowledge transfer model for cross-view action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2458-2466). IEEE. https://doi.org/10.1109/CVPR.2015.7298860

Vancouver

Rahmani H. Learning a non-linear knowledge transfer model for cross-view action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE. 2015. p. 2458-2466. doi: 10.1109/CVPR.2015.7298860

Author

Rahmani, Hossein. / Learning a non-linear knowledge transfer model for cross-view action recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2015. pp. 2458-2466

Bibtex

@inproceedings{b9bdb7a5a2934370a11aea945dde12a9,
title = "Learning a non-linear knowledge transfer model for cross-view action recognition",
abstract = "This paper concerns action recognition from unseen and unknown views. We propose unsupervised learning of a non-linear model that transfers knowledge from multiple views to a canonical view. The proposed Non-linear Knowledge Transfer Model (NKTM) is a deep network, with weight decay and sparsity constraints, which finds a shared high-level virtual path from videos captured from different unknown viewpoints to the same canonical view. The strength of our technique is that we learn a single NKTM for all actions and all camera viewing directions. Thus, NKTM does not require action labels during learning and knowledge of the camera viewpoints during training or testing. NKTM is learned once only from dense trajectories of synthetic points fitted to mocap data and then applied to real video data. Trajectories are coded with a general codebook learned from the same mocap data. NKTM is scalable to new action classes and training data as it does not require re-learning. Experiments on the IXMAS and N-UCLA datasets show that NKTM outperforms existing state-of-the-art methods for cross-view action recognition.",
author = "Hossein Rahmani",
year = "2015",
doi = "10.1109/CVPR.2015.7298860",
language = "English",
pages = "2458--2466",
booktitle = "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
publisher = "IEEE",

}

RIS

TY - GEN

T1 - Learning a non-linear knowledge transfer model for cross-view action recognition

AU - Rahmani, Hossein

PY - 2015

Y1 - 2015

N2 - This paper concerns action recognition from unseen and unknown views. We propose unsupervised learning of a non-linear model that transfers knowledge from multiple views to a canonical view. The proposed Non-linear Knowledge Transfer Model (NKTM) is a deep network, with weight decay and sparsity constraints, which finds a shared high-level virtual path from videos captured from different unknown viewpoints to the same canonical view. The strength of our technique is that we learn a single NKTM for all actions and all camera viewing directions. Thus, NKTM does not require action labels during learning and knowledge of the camera viewpoints during training or testing. NKTM is learned once only from dense trajectories of synthetic points fitted to mocap data and then applied to real video data. Trajectories are coded with a general codebook learned from the same mocap data. NKTM is scalable to new action classes and training data as it does not require re-learning. Experiments on the IXMAS and N-UCLA datasets show that NKTM outperforms existing state-of-the-art methods for cross-view action recognition.

AB - This paper concerns action recognition from unseen and unknown views. We propose unsupervised learning of a non-linear model that transfers knowledge from multiple views to a canonical view. The proposed Non-linear Knowledge Transfer Model (NKTM) is a deep network, with weight decay and sparsity constraints, which finds a shared high-level virtual path from videos captured from different unknown viewpoints to the same canonical view. The strength of our technique is that we learn a single NKTM for all actions and all camera viewing directions. Thus, NKTM does not require action labels during learning and knowledge of the camera viewpoints during training or testing. NKTM is learned once only from dense trajectories of synthetic points fitted to mocap data and then applied to real video data. Trajectories are coded with a general codebook learned from the same mocap data. NKTM is scalable to new action classes and training data as it does not require re-learning. Experiments on the IXMAS and N-UCLA datasets show that NKTM outperforms existing state-of-the-art methods for cross-view action recognition.

U2 - 10.1109/CVPR.2015.7298860

DO - 10.1109/CVPR.2015.7298860

M3 - Conference contribution/Paper

SP - 2458

EP - 2466

BT - Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition

PB - IEEE

ER -
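
Illustrative sketch (Python)

For readers who want a concrete sense of the transfer model described in the abstract, the sketch below is a minimal, hypothetical illustration: a small feed-forward network trained with weight decay and an L1 sparsity penalty to map a coded dense-trajectory descriptor from an arbitrary view toward a canonical-view descriptor. The layer sizes, loss weights, descriptor dimensionality, and training loop are assumptions made for illustration only, not the authors' implementation.

# Hypothetical sketch of an NKTM-style non-linear knowledge transfer model.
# Architecture, hyperparameters, and data are assumed; they are not taken
# from the paper.
import torch
import torch.nn as nn

class KnowledgeTransferNet(nn.Module):
    """Maps a view-dependent action descriptor toward a canonical-view one."""
    def __init__(self, dim: int, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def train_step(model, optimizer, x_view, x_canonical, sparsity_weight=1e-4):
    """One update: reconstruction loss toward the canonical-view descriptor,
    plus an L1 sparsity penalty on hidden activations. Weight decay is
    applied through the optimizer."""
    optimizer.zero_grad()
    h = x_view
    hidden_acts = []
    for layer in model.net:
        h = layer(h)
        if isinstance(layer, nn.ReLU):
            hidden_acts.append(h)
    recon = nn.functional.mse_loss(h, x_canonical)
    sparsity = sum(a.abs().mean() for a in hidden_acts)
    loss = recon + sparsity_weight * sparsity
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    dim = 2000  # assumed codebook size for the coded trajectory descriptors
    model = KnowledgeTransferNet(dim)
    opt = torch.optim.SGD(model.parameters(), lr=1e-3, weight_decay=1e-5)
    # Toy stand-ins for descriptors from a non-canonical view and their
    # canonical-view counterparts (mocap-derived in the paper's setting).
    x_view = torch.rand(32, dim)
    x_canon = torch.rand(32, dim)
    print(train_step(model, opt, x_view, x_canon))

As in the abstract, a single such network would be trained once, without action labels or viewpoint annotations, and then applied unchanged to descriptors extracted from real video.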