
Electronic data

  • Arbitrary Action_TIP

    Rights statement: ©2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 8.07 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: https://doi.org/10.1109/TIP.2018.2836323


Arbitrary View Action Recognition via Transfer Dictionary Learning on Synthetic Training Data

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Arbitrary View Action Recognition via Transfer Dictionary Learning on Synthetic Training Data. / Zhang, Jingtian; Shum, Hubert; Han, Jungong et al.
In: IEEE Transactions on Image Processing, Vol. 27, No. 10, 10.2018, p. 4709-4723.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Harvard

Zhang, J, Shum, H, Han, J & Shao, L 2018, 'Arbitrary View Action Recognition via Transfer Dictionary Learning on Synthetic Training Data', IEEE Transactions on Image Processing, vol. 27, no. 10, pp. 4709-4723. https://doi.org/10.1109/TIP.2018.2836323

APA

Zhang, J., Shum, H., Han, J., & Shao, L. (2018). Arbitrary View Action Recognition via Transfer Dictionary Learning on Synthetic Training Data. IEEE Transactions on Image Processing, 27(10), 4709-4723. https://doi.org/10.1109/TIP.2018.2836323

Vancouver

Zhang J, Shum H, Han J, Shao L. Arbitrary View Action Recognition via Transfer Dictionary Learning on Synthetic Training Data. IEEE Transactions on Image Processing. 2018 Oct;27(10):4709-4723. Epub 2018 May 15. doi: 10.1109/TIP.2018.2836323

Author

Zhang, Jingtian ; Shum, Hubert ; Han, Jungong et al. / Arbitrary View Action Recognition via Transfer Dictionary Learning on Synthetic Training Data. In: IEEE Transactions on Image Processing. 2018 ; Vol. 27, No. 10. pp. 4709-4723.

Bibtex

@article{6fc5784aced24aa88a7ed925ff0ece8c,
title = "Arbitrary View Action Recognition via Transfer Dictionary Learning on Synthetic Training Data",
abstract = "Human action recognition is crucial to many practical applications, ranging from human-computer interaction to video surveillance. Most approaches either recognize the human action from a fixed view or require the knowledge of view angle, which is usually not available in practical applications. In this paper, we propose a novel end-to-end framework to jointly learn a view-invariance transfer dictionary and a view-invariant classifier. The result of the process is a dictionary that can project real-world 2D video into a view-invariant sparse representation, as well as a classifier to recognize actions with an arbitrary view. The main feature of our algorithm is the use of synthetic data to extract view-invariance between 3D and 2D videos during the pre-training phase. This guarantees the availability of training data, and removes the hassle of obtaining real-world videos in specific viewing angles. Additionally, for better describing the actions in 3D videos, we introduce a new feature set called the 3D dense trajectories to effectively encode extracted trajectory information on 3D videos. Experimental results on the IXMAS, N-UCLA, i3DPost and UWA3DII datasets show improvements over existing algorithms.",
author = "Jingtian Zhang and Hubert Shum and Jungong Han and Ling Shao",
note = "{\textcopyright}2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.",
year = "2018",
month = oct,
doi = "10.1109/TIP.2018.2836323",
language = "English",
volume = "27",
pages = "4709--4723",
journal = "IEEE Transactions on Image Processing",
issn = "1057-7149",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "10",

}

RIS

TY - JOUR

T1 - Arbitrary View Action Recognition via Transfer Dictionary Learning on Synthetic Training Data

AU - Zhang, Jingtian

AU - Shum, Hubert

AU - Han, Jungong

AU - Shao, Ling

N1 - ©2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

PY - 2018/10

Y1 - 2018/10

N2 - Human action recognition is crucial to many practical applications, ranging from human-computer interaction to video surveillance. Most approaches either recognize the human action from a fixed view or require the knowledge of view angle, which is usually not available in practical applications. In this paper, we propose a novel end-to-end framework to jointly learn a view-invariance transfer dictionary and a view-invariant classifier. The result of the process is a dictionary that can project real-world 2D video into a view-invariant sparse representation, as well as a classifier to recognize actions with an arbitrary view. The main feature of our algorithm is the use of synthetic data to extract view-invariance between 3D and 2D videos during the pre-training phase. This guarantees the availability of training data, and removes the hassle of obtaining real-world videos in specific viewing angles. Additionally, for better describing the actions in 3D videos, we introduce a new feature set called the 3D dense trajectories to effectively encode extracted trajectory information on 3D videos. Experimental results on the IXMAS, N-UCLA, i3DPost and UWA3DII datasets show improvements over existing algorithms.

AB - Human action recognition is crucial to many practical applications, ranging from human-computer interaction to video surveillance. Most approaches either recognize the human action from a fixed view or require the knowledge of view angle, which is usually not available in practical applications. In this paper, we propose a novel end-to-end framework to jointly learn a view-invariance transfer dictionary and a view-invariant classifier. The result of the process is a dictionary that can project real-world 2D video into a view-invariant sparse representation, as well as a classifier to recognize actions with an arbitrary view. The main feature of our algorithm is the use of synthetic data to extract view-invariance between 3D and 2D videos during the pre-training phase. This guarantees the availability of training data, and removes the hassle of obtaining real-world videos in specific viewing angles. Additionally, for better describing the actions in 3D videos, we introduce a new feature set called the 3D dense trajectories to effectively encode extracted trajectory information on 3D videos. Experimental results on the IXMAS, N-UCLA, i3DPost and UWA3DII datasets show improvements over existing algorithms.

U2 - 10.1109/TIP.2018.2836323

DO - 10.1109/TIP.2018.2836323

M3 - Journal article

VL - 27

SP - 4709

EP - 4723

JO - IEEE Transactions on Image Processing

JF - IEEE Transactions on Image Processing

SN - 1057-7149

IS - 10

ER -
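
Illustration

To make the pipeline in the abstract concrete, below is a minimal, hypothetical sketch of the inference step it describes: a pre-learned transfer dictionary projects a 2D video descriptor into a view-invariant sparse code, which a linear classifier then labels. This is not the authors' implementation: the ISTA solver, the names D, W and lam, and the toy dimensions are all illustrative assumptions standing in for the paper's learned dictionary and classifier.

    # Hypothetical sketch of sparse-code-then-classify inference.
    # D : dictionary mapping 2D descriptors to view-invariant sparse codes
    # W : linear classifier over sparse codes
    # All values here are random toys, not the paper's learned models.
    import numpy as np

    def ista_sparse_code(x, D, lam=0.1, n_iter=200):
        """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA."""
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        a = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ a - x)           # gradient of the quadratic term
            a = a - grad / L                   # gradient step
            a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
        return a

    def classify(x, D, W):
        """Sparse-code the descriptor, then pick the highest-scoring action."""
        a = ista_sparse_code(x, D)
        return int(np.argmax(W @ a))

    rng = np.random.default_rng(0)
    D = rng.standard_normal((128, 256))        # toy view-invariance dictionary
    W = rng.standard_normal((11, 256))         # toy classifier (e.g. 11 IXMAS actions)
    x = rng.standard_normal(128)               # toy 2D video descriptor
    print(classify(x, D, W))                   # predicted action index

In the paper, the dictionary and classifier are learned jointly on synthetic 3D/2D training pairs; the sketch above only shows how a fixed dictionary and classifier would be applied to a new descriptor at test time.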