
Performance metrics for activity recognition

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Performance metrics for activity recognition. / Ward, Jamie; Lukowicz, Paul; Gellersen, Hans.
In: ACM Transactions on Intelligent Systems and Technology, Vol. 2, No. 1, 6, 2011.


Harvard

Ward, J, Lukowicz, P & Gellersen, H 2011, 'Performance metrics for activity recognition', ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 1, 6. https://doi.org/10.1145/1889681.1889687

APA

Ward, J., Lukowicz, P., & Gellersen, H. (2011). Performance metrics for activity recognition. ACM Transactions on Intelligent Systems and Technology, 2(1), Article 6. https://doi.org/10.1145/1889681.1889687

Vancouver

Ward J, Lukowicz P, Gellersen H. Performance metrics for activity recognition. ACM Transactions on Intelligent Systems and Technology. 2011;2(1):6. doi: 10.1145/1889681.1889687

Author

Ward, Jamie ; Lukowicz, Paul ; Gellersen, Hans. / Performance metrics for activity recognition. In: ACM Transactions on Intelligent Systems and Technology. 2011 ; Vol. 2, No. 1.

Bibtex

@article{038a73f8cf274b16b6de44326c62f3cd,
title = "Performance metrics for activity recognition",
abstract = "In this article, we introduce and evaluate a comprehensive set of performance metrics and visualisations for continuous activity recognition (AR). We demonstrate how standard evaluation methods, often borrowed from related pattern recognition problems, fail to capture common artefacts found in continuous AR—specifically event fragmentation, event merging and timing offsets. We support our assertion with an analysis on a set of recently published AR papers. Building on an earlier initial work on the topic, we develop a frame-based visualisation and corresponding set of class-skew invariant metrics for the one class versus all evaluation. These are complemented by a new complete set of event-based metrics that allow a quick graphical representation of system performance—showing events that are correct, inserted, deleted, fragmented, merged and those which are both fragmented and merged. We evaluate the utility of our approach through comparison with standard metrics on data from three different published experiments. This shows that where event- and frame-based precision and recall lead to an ambiguous interpretation of results in some cases, the proposed metrics provide a consistently unambiguous explanation.",
keywords = "Activity recognition, Metrics, Performance evaluation",
author = "Jamie Ward and Paul Lukowicz and Hans Gellersen",
year = "2011",
doi = "10.1145/1889681.1889687",
language = "English",
volume = "2",
journal = "ACM Transactions on Intelligent Systems and Technology",
issn = "2157-6904",
publisher = "Association for Computing Machinery, Inc",
number = "1",

}
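For readers skimming the abstract above, the sketch below gives a rough sense of the kind of event-level error categories it names (correct, inserted, deleted, fragmented, merged). This is a minimal illustrative sketch only, assuming simple interval-overlap logic; the function names and labelling rules are assumptions and do not reproduce the authors' published scoring method.

```python
# Illustrative sketch of event-level labelling inspired by the categories named
# in the abstract. All names and overlap rules here are assumptions, not the
# paper's actual algorithm.

def overlaps(a, b):
    """Return True if half-open intervals a=(start, end) and b overlap."""
    return a[0] < b[1] and b[0] < a[1]

def score_events(ground_truth, predictions):
    """Assign a coarse label to each ground-truth and predicted event.

    ground_truth, predictions: lists of (start, end) tuples on a common timeline.
    Returns (gt_labels, pred_labels), where
      gt_labels[i]   in {"deleted", "correct", "fragmented"}
      pred_labels[j] in {"inserted", "correct", "merging"}
    """
    gt_labels, pred_labels = [], []

    for gt in ground_truth:
        hits = [p for p in predictions if overlaps(gt, p)]
        if not hits:
            gt_labels.append("deleted")        # no prediction covers this event
        elif len(hits) == 1:
            gt_labels.append("correct")
        else:
            gt_labels.append("fragmented")     # split across several predictions

    for p in predictions:
        hits = [gt for gt in ground_truth if overlaps(p, gt)]
        if not hits:
            pred_labels.append("inserted")     # spurious event
        elif len(hits) == 1:
            pred_labels.append("correct")
        else:
            pred_labels.append("merging")      # one prediction spans several events

    return gt_labels, pred_labels

if __name__ == "__main__":
    truth = [(0, 10), (20, 30), (40, 50)]
    preds = [(2, 8), (19, 24), (26, 31)]       # second truth event is fragmented
    print(score_events(truth, preds))
```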

RIS

TY - JOUR

T1 - Performance metrics for activity recognition

AU - Ward, Jamie

AU - Lukowicz, Paul

AU - Gellersen, Hans

PY - 2011

Y1 - 2011

N2 - In this article, we introduce and evaluate a comprehensive set of performance metrics and visualisations for continuous activity recognition (AR). We demonstrate how standard evaluation methods, often borrowed from related pattern recognition problems, fail to capture common artefacts found in continuous AR—specifically event fragmentation, event merging and timing offsets. We support our assertion with an analysis on a set of recently published AR papers. Building on an earlier initial work on the topic, we develop a frame-based visualisation and corresponding set of class-skew invariant metrics for the one class versus all evaluation. These are complemented by a new complete set of event-based metrics that allow a quick graphical representation of system performance—showing events that are correct, inserted, deleted, fragmented, merged and those which are both fragmented and merged. We evaluate the utility of our approach through comparison with standard metrics on data from three different published experiments. This shows that where event- and frame-based precision and recall lead to an ambiguous interpretation of results in some cases, the proposed metrics provide a consistently unambiguous explanation.

AB - In this article, we introduce and evaluate a comprehensive set of performance metrics and visualisations for continuous activity recognition (AR). We demonstrate how standard evaluation methods, often borrowed from related pattern recognition problems, fail to capture common artefacts found in continuous AR—specifically event fragmentation, event merging and timing offsets. We support our assertion with an analysis on a set of recently published AR papers. Building on an earlier initial work on the topic, we develop a frame-based visualisation and corresponding set of class-skew invariant metrics for the one class versus all evaluation. These are complemented by a new complete set of event-based metrics that allow a quick graphical representation of system performance—showing events that are correct, inserted, deleted, fragmented, merged and those which are both fragmented and merged. We evaluate the utility of our approach through comparison with standard metrics on data from three different published experiments. This shows that where event- and frame-based precision and recall lead to an ambiguous interpretation of results in some cases, the proposed metrics provide a consistently unambiguous explanation.

KW - Activity recognition

KW - Metrics

KW - Performance evaluation

U2 - 10.1145/1889681.1889687

DO - 10.1145/1889681.1889687

M3 - Journal article

VL - 2

JO - ACM Transactions on Intelligent Systems and Technology

JF - ACM Transactions on Intelligent Systems and Technology

SN - 2157-6904

IS - 1

M1 - 6

ER -