
Skeleton-Based Human Action Recognition with Global Context-Aware Attention LSTM Networks

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Skeleton-Based Human Action Recognition with Global Context-Aware Attention LSTM Networks. / Liu, Jun; Wang, G.; Duan, L.-Y. et al.
In: IEEE Transactions on Image Processing, Vol. 27, No. 4, 30.04.2018, p. 1586-1599.

Harvard

Liu, J, Wang, G, Duan, L-Y, Abdiyeva, K & Kot, AC 2018, 'Skeleton-Based Human Action Recognition with Global Context-Aware Attention LSTM Networks', IEEE Transactions on Image Processing, vol. 27, no. 4, pp. 1586-1599. https://doi.org/10.1109/TIP.2017.2785279

APA

Liu, J., Wang, G., Duan, L.-Y., Abdiyeva, K., & Kot, A. C. (2018). Skeleton-Based Human Action Recognition with Global Context-Aware Attention LSTM Networks. IEEE Transactions on Image Processing, 27(4), 1586-1599. https://doi.org/10.1109/TIP.2017.2785279

Vancouver

Liu J, Wang G, Duan LY, Abdiyeva K, Kot AC. Skeleton-Based Human Action Recognition with Global Context-Aware Attention LSTM Networks. IEEE Transactions on Image Processing. 2018 Apr 30;27(4):1586-1599. Epub 2017 Dec 19. doi: 10.1109/TIP.2017.2785279

Author

Liu, Jun ; Wang, G. ; Duan, L.-Y. et al. / Skeleton-Based Human Action Recognition with Global Context-Aware Attention LSTM Networks. In: IEEE Transactions on Image Processing. 2018 ; Vol. 27, No. 4. pp. 1586-1599.

Bibtex

@article{30d0d7b11a1d4c79a7c5024215c00f98,
title = "Skeleton-Based Human Action Recognition with Global Context-Aware Attention LSTM Networks",
abstract = "Human action recognition in 3D skeleton sequences has attracted a lot of research attention. Recently, long short-term memory (LSTM) networks have shown promising performance in this task due to their strengths in modeling the dependencies and dynamics in sequential data. As not all skeletal joints are informative for action recognition, and the irrelevant joints often bring noise which can degrade the performance, we need to pay more attention to the informative ones. However, the original LSTM network does not have explicit attention ability. In this paper, we propose a new class of LSTM network, global context-aware attention LSTM, for skeleton-based action recognition, which is capable of selectively focusing on the informative joints in each frame by using a global context memory cell. To further improve the attention capability, we also introduce a recurrent attention mechanism, with which the attention performance of our network can be enhanced progressively. Besides, a two-stream framework, which leverages coarse-grained attention and fine-grained attention, is also introduced. The proposed method achieves state-of-the-art performance on five challenging datasets for skeleton-based action recognition.",
author = "Jun Liu and G. Wang and L.-Y. Duan and K. Abdiyeva and A.C. Kot",
year = "2018",
month = apr,
day = "30",
doi = "10.1109/TIP.2017.2785279",
language = "English",
volume = "27",
pages = "1586--1599",
journal = "IEEE Transactions on Image Processing",
issn = "1057-7149",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "4",

}

RIS

TY - JOUR

T1 - Skeleton-Based Human Action Recognition with Global Context-Aware Attention LSTM Networks

AU - Liu, Jun

AU - Wang, G.

AU - Duan, L.-Y.

AU - Abdiyeva, K.

AU - Kot, A.C.

PY - 2018/4/30

Y1 - 2018/4/30

N2 - Human action recognition in 3D skeleton sequences has attracted a lot of research attention. Recently, long short-term memory (LSTM) networks have shown promising performance in this task due to their strengths in modeling the dependencies and dynamics in sequential data. As not all skeletal joints are informative for action recognition, and the irrelevant joints often bring noise which can degrade the performance, we need to pay more attention to the informative ones. However, the original LSTM network does not have explicit attention ability. In this paper, we propose a new class of LSTM network, global context-aware attention LSTM, for skeleton-based action recognition, which is capable of selectively focusing on the informative joints in each frame by using a global context memory cell. To further improve the attention capability, we also introduce a recurrent attention mechanism, with which the attention performance of our network can be enhanced progressively. Besides, a two-stream framework, which leverages coarse-grained attention and fine-grained attention, is also introduced. The proposed method achieves state-of-the-art performance on five challenging datasets for skeleton-based action recognition.

AB - Human action recognition in 3D skeleton sequences has attracted a lot of research attention. Recently, long short-term memory (LSTM) networks have shown promising performance in this task due to their strengths in modeling the dependencies and dynamics in sequential data. As not all skeletal joints are informative for action recognition, and the irrelevant joints often bring noise which can degrade the performance, we need to pay more attention to the informative ones. However, the original LSTM network does not have explicit attention ability. In this paper, we propose a new class of LSTM network, global context-aware attention LSTM, for skeleton-based action recognition, which is capable of selectively focusing on the informative joints in each frame by using a global context memory cell. To further improve the attention capability, we also introduce a recurrent attention mechanism, with which the attention performance of our network can be enhanced progressively. Besides, a two-stream framework, which leverages coarse-grained attention and fine-grained attention, is also introduced. The proposed method achieves state-of-the-art performance on five challenging datasets for skeleton-based action recognition.

U2 - 10.1109/TIP.2017.2785279

DO - 10.1109/TIP.2017.2785279

M3 - Journal article

VL - 27

SP - 1586

EP - 1599

JO - IEEE Transactions on Image Processing

JF - IEEE Transactions on Image Processing

SN - 1057-7149

IS - 4

ER -
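
Code sketch

The abstract describes the core mechanism: a global context memory that scores how informative each skeletal joint is in every frame, with the attention refined over several recurrent iterations and the weighted frame representations fed to an LSTM. Below is a minimal PyTorch sketch of that idea. All names (GCASketch, joint_enc, refine, etc.) are hypothetical; this is one plausible reading of the abstract, not the authors' implementation, and it omits the paper's spatio-temporal first-layer LSTM and the two-stream coarse-/fine-grained attention framework.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GCASketch(nn.Module):
    """Global context-aware attention over skeletal joints (illustrative only,
    loosely based on the abstract; not the authors' published code)."""

    def __init__(self, joint_dim=3, hidden=128, iters=2):
        super().__init__()
        self.iters = iters
        self.joint_enc = nn.Linear(joint_dim, hidden)   # per-joint feature encoder
        self.score = nn.Linear(2 * hidden, 1)           # joint-informativeness score
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.refine = nn.Linear(hidden, hidden)         # global-context update

    def forward(self, x):
        # x: (batch, frames, joints, joint_dim) 3D joint coordinates
        b, t, j, _ = x.shape
        h = torch.tanh(self.joint_enc(x))               # (b, t, j, hidden)
        # Initialise the global context memory from all joints in all frames.
        context = h.mean(dim=(1, 2))                    # (b, hidden)
        for _ in range(self.iters):
            ctx = context[:, None, None, :].expand(-1, t, j, -1)
            # Score each joint against the current global context...
            e = self.score(torch.cat([h, ctx], dim=-1)).squeeze(-1)  # (b, t, j)
            a = F.softmax(e, dim=2)                     # attention over joints
            frames = (a.unsqueeze(-1) * h).sum(dim=2)   # (b, t, hidden)
            out, _ = self.lstm(frames)                  # temporal modelling
            # ...then refine the global context from the final hidden state.
            context = torch.tanh(self.refine(out[:, -1]))
        return context                                  # feed to a classifier head


# Usage: a batch of 4 clips, 30 frames, 25 joints (NTU RGB+D style), 3D coords.
model = GCASketch()
feats = model(torch.randn(4, 30, 25, 3))
print(feats.shape)  # torch.Size([4, 128])

Initialising the context as the mean over all encoded joints mirrors the "global context memory cell" in the abstract; each iteration then re-weights the joints against the refined context, corresponding to the recurrent attention mechanism the paper credits with progressively enhancing attention performance.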