Combining Low and Mid-Level Gaze Features for Desktop Activity Recognition

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Article number: 189
Journal publication date: 31/12/2018
Journal: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Number of pages: 27
Pages (from-to): 1-27
Publication status: Published
Early online date: 27/12/2018
Original language: English

Abstract

Human activity recognition (HAR) is an important research area due to its potential for building context-aware interactive systems. Though movement-based activity recognition is an established area of research, recognising sedentary activities remains an open research question. Previous work has explored eye-based activity recognition as a potential approach to this challenge, focusing either on statistical measures derived from eye movement properties (low-level gaze features) or on some knowledge of the Areas-of-Interest (AOI) of the stimulus (high-level gaze features). In this paper, we extend this body of work by introducing mid-level gaze features: features that add a level of abstraction over low-level features using some knowledge of the activity, but not of the stimulus. We evaluated our approach on a dataset collected from 24 participants performing eight desktop computing activities. We trained a classifier that extends 26 low-level features derived from the existing literature with 24 novel candidate mid-level gaze features. Our results show an overall classification performance of 0.72 (F1-score), with up to a 4% increase in accuracy when adding our mid-level gaze features. Finally, we discuss the implications of combining low- and mid-level gaze features, as well as future directions for eye-based activity recognition.
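To make the feature-combination idea concrete, the sketch below is a minimal, hypothetical reconstruction of the pipeline the abstract describes: statistical low-level features are computed over raw fixation and saccade events, mid-level features abstract over those events with some activity knowledge (here, a crude "reading-like" horizontal-scanning ratio) but no knowledge of the stimulus, and the concatenated vectors are fed to a generic classifier. The paper does not specify its classifier or exact feature set; all feature definitions, field names, and the choice of a random forest here are illustrative assumptions, not the authors' method.

    # Hypothetical sketch of combining low- and mid-level gaze features
    # for desktop activity classification. Feature definitions and the
    # classifier are assumptions for illustration, not the paper's method.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def low_level_features(fixations, saccades):
        """Statistical measures over raw eye-movement events
        (e.g. fixation durations, saccade amplitudes)."""
        durations = np.array([f["duration_ms"] for f in fixations])
        amplitudes = np.array([s["amplitude_deg"] for s in saccades])
        return np.array([
            durations.mean(), durations.std(), durations.max(),
            amplitudes.mean(), amplitudes.std(),
            float(len(fixations)), float(len(saccades)),
        ])

    def mid_level_features(fixations):
        """Abstractions over low-level events that encode activity
        knowledge (a rough horizontal-scanning ratio suggestive of
        reading) without any knowledge of the on-screen stimulus."""
        xs = np.array([f["x"] for f in fixations])
        ys = np.array([f["y"] for f in fixations])
        dx, dy = np.abs(np.diff(xs)), np.abs(np.diff(ys))
        horizontal_ratio = float(np.mean(dx > dy)) if len(dx) else 0.0
        return np.array([horizontal_ratio, xs.std(), ys.std()])

    def window_features(window):
        """Concatenate low- and mid-level features for one time window."""
        return np.concatenate([
            low_level_features(window["fixations"], window["saccades"]),
            mid_level_features(window["fixations"]),
        ])

    if __name__ == "__main__":
        # Synthetic stand-in data: real input would be gaze-event windows
        # segmented from an eye-tracking recording. Random data yields
        # chance-level scores; this only demonstrates the pipeline shape.
        rng = np.random.default_rng(0)
        def fake_window():
            fx = [{"duration_ms": rng.uniform(100, 400),
                   "x": rng.uniform(0, 1920), "y": rng.uniform(0, 1080)}
                  for _ in range(20)]
            sc = [{"amplitude_deg": rng.uniform(1, 15)} for _ in range(19)]
            return {"fixations": fx, "saccades": sc}
        X = np.stack([window_features(fake_window()) for _ in range(80)])
        y = np.repeat(np.arange(8), 10)  # eight desktop activities
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        print("F1 (macro):", cross_val_score(clf, X, y, scoring="f1_macro").mean())

In the paper's setting, the feature vector would hold all 26 low-level and 24 candidate mid-level features per window, and evaluation would be per-participant rather than a single pooled cross-validation as in this toy example.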