
Electronic data

  • DRB_MTAP_D_19_02987_V0

    Rights statement: The final publication is available at Springer via http://dx.doi.org/10.1007/s11042-020-09381-9

    Accepted author manuscript, 1.76 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: http://dx.doi.org/10.1007/s11042-020-09381-9


Human action recognition using deep rule-based classifier

Research output: Contribution to journal › Journal article › peer-review

Journal publication date: 1/11/2020
Journal: Multimedia Tools and Applications
Issue number: 41-42
Volume: 79
Number of pages: 15
Pages (from-to): 30653-30667
Publication status: Published
Early online date: 17/08/20
Original language: English

Abstract

In recent years, numerous techniques have been proposed for human activity recognition (HAR) from images and videos. These techniques can be divided into two major categories: handcrafted and deep learning. Deep learning-based models have produced remarkable results for HAR. However, these models have several shortcomings, such as the requirement for a massive amount of training data, lack of transparency, offline nature, and poor interpretability of their internal parameters. In this paper, a new approach for HAR is proposed, which consists of an interpretable, self-evolving, and self-organizing set of zero-order IF...THEN rules. This approach is entirely data-driven and non-parametric; thus, prototypes are identified automatically during the training process. To demonstrate the effectiveness of the proposed method, a set of high-level features is obtained using a pre-trained deep convolutional neural network model, and a recently introduced deep rule-based (DRB) classifier is applied for classification. Experiments are performed on the challenging UCF50 benchmark dataset; the results confirm that the proposed approach outperforms state-of-the-art methods. In addition, an ablation study is conducted to demonstrate the efficacy of the proposed approach by comparing the performance of our DRB classifier with four state-of-the-art classifiers. This analysis reveals that the DRB classifier can outperform state-of-the-art classifiers even with limited training samples.
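The core idea described in the abstract can be illustrated with a minimal sketch: one zero-order IF...THEN rule per class, where a rule fires according to the similarity between an input feature vector (in the paper, extracted by a pre-trained CNN) and a set of automatically identified prototypes. The class name, the novelty threshold, and the Gaussian-like firing strength below are illustrative assumptions, not the authors' exact DRB algorithm.

```python
import numpy as np

class ZeroOrderRuleBase:
    """Sketch of a zero-order rule base: IF (x ~ prototypes of class c) THEN class c.

    Prototype identification here is a simplified, hypothetical heuristic
    standing in for the paper's data-driven, non-parametric procedure.
    """

    def __init__(self):
        self.prototypes = {}  # class label -> list of prototype feature vectors

    def fit(self, X, y, novelty_threshold=0.5):
        # A sample becomes a new prototype when it lies sufficiently far
        # from every existing prototype of its class (assumed heuristic).
        for x, label in zip(X, y):
            protos = self.prototypes.setdefault(label, [])
            if not protos or min(np.linalg.norm(x - p) for p in protos) > novelty_threshold:
                protos.append(np.asarray(x, dtype=float))

    def predict(self, x):
        # Fire each rule: a class's score is its strongest similarity
        # (Gaussian-like kernel of the distance) to any of its prototypes.
        scores = {
            label: max(np.exp(-np.linalg.norm(x - p) ** 2) for p in protos)
            for label, protos in self.prototypes.items()
        }
        return max(scores, key=scores.get)
```

In use, `X` would hold high-level features from a pre-trained CNN rather than raw pixels; training requires only a single pass over the samples, consistent with the self-evolving, prototype-based character the abstract attributes to the DRB classifier.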
