
Combining Low and Mid-Level Gaze Features for Desktop Activity Recognition

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Combining Low and Mid-Level Gaze Features for Desktop Activity Recognition. / Srivastava, Namrata; Newn, Joshua; Velloso, Eduardo.
In: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 31.12.2018, p. 1-27.

Harvard

Srivastava, N, Newn, J & Velloso, E 2018, 'Combining Low and Mid-Level Gaze Features for Desktop Activity Recognition', Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, pp. 1-27. https://doi.org/10.1145/3287067

APA

Srivastava, N., Newn, J., & Velloso, E. (2018). Combining Low and Mid-Level Gaze Features for Desktop Activity Recognition. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 1-27. Article 189. https://doi.org/10.1145/3287067

Vancouver

Srivastava N, Newn J, Velloso E. Combining Low and Mid-Level Gaze Features for Desktop Activity Recognition. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. 2018 Dec 31;1-27. 189. Epub 2018 Dec 27. doi: 10.1145/3287067

Author

Srivastava, Namrata ; Newn, Joshua ; Velloso, Eduardo. / Combining Low and Mid-Level Gaze Features for Desktop Activity Recognition. In: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. 2018 ; pp. 1-27.

Bibtex

@article{63439494bce04dd7a68c290a081faef1,
title = "Combining Low and Mid-Level Gaze Features for Desktop Activity Recognition",
abstract = "Human activity recognition (HAR) is an important research area due to its potential for building context-aware interactive systems. Though movement-based activity recognition is an established area of research, recognising sedentary activities remains an open research question. Previous works have explored eye-based activity recognition as a potential approach for this challenge, focusing on statistical measures derived from eye movement properties---low-level gaze features---or some knowledge of the Areas-of-Interest (AOI) of the stimulus---high-level gaze features. In this paper, we extend this body of work by employing the addition of mid-level gaze features; features that add a level of abstraction over low-level features with some knowledge of the activity, but not of the stimulus. We evaluated our approach on a dataset collected from 24 participants performing eight desktop computing activities. We trained a classifier extending 26 low-level features derived from existing literature with the addition of 24 novel candidate mid-level gaze features. Our results show an overall classification performance of 0.72 (F1-Score), with up to 4% increase in accuracy when adding our mid-level gaze features. Finally, we discuss the implications of combining low- and mid-level gaze features, as well as the future directions for eye-based activity recognition.",
author = "Namrata Srivastava and Joshua Newn and Eduardo Velloso",
year = "2018",
month = dec,
day = "31",
doi = "10.1145/3287067",
language = "English",
pages = "1--27",
journal = "Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies",
issn = "2474-9567",
publisher = "Association for Computing Machinery (ACM)",
}

RIS

TY - JOUR

T1 - Combining Low and Mid-Level Gaze Features for Desktop Activity Recognition

AU - Srivastava, Namrata

AU - Newn, Joshua

AU - Velloso, Eduardo

PY - 2018/12/31

Y1 - 2018/12/31

N2 - Human activity recognition (HAR) is an important research area due to its potential for building context-aware interactive systems. Though movement-based activity recognition is an established area of research, recognising sedentary activities remains an open research question. Previous works have explored eye-based activity recognition as a potential approach for this challenge, focusing on statistical measures derived from eye movement properties---low-level gaze features---or some knowledge of the Areas-of-Interest (AOI) of the stimulus---high-level gaze features. In this paper, we extend this body of work by employing the addition of mid-level gaze features; features that add a level of abstraction over low-level features with some knowledge of the activity, but not of the stimulus. We evaluated our approach on a dataset collected from 24 participants performing eight desktop computing activities. We trained a classifier extending 26 low-level features derived from existing literature with the addition of 24 novel candidate mid-level gaze features. Our results show an overall classification performance of 0.72 (F1-Score), with up to 4% increase in accuracy when adding our mid-level gaze features. Finally, we discuss the implications of combining low- and mid-level gaze features, as well as the future directions for eye-based activity recognition.

AB - Human activity recognition (HAR) is an important research area due to its potential for building context-aware interactive systems. Though movement-based activity recognition is an established area of research, recognising sedentary activities remains an open research question. Previous works have explored eye-based activity recognition as a potential approach for this challenge, focusing on statistical measures derived from eye movement properties---low-level gaze features---or some knowledge of the Areas-of-Interest (AOI) of the stimulus---high-level gaze features. In this paper, we extend this body of work by employing the addition of mid-level gaze features; features that add a level of abstraction over low-level features with some knowledge of the activity, but not of the stimulus. We evaluated our approach on a dataset collected from 24 participants performing eight desktop computing activities. We trained a classifier extending 26 low-level features derived from existing literature with the addition of 24 novel candidate mid-level gaze features. Our results show an overall classification performance of 0.72 (F1-Score), with up to 4% increase in accuracy when adding our mid-level gaze features. Finally, we discuss the implications of combining low- and mid-level gaze features, as well as the future directions for eye-based activity recognition.

U2 - 10.1145/3287067

DO - 10.1145/3287067

M3 - Journal article

SP - 1

EP - 27

JO - Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies

JF - Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies

SN - 2474-9567

M1 - 189

ER -
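The approach described in the abstract can be sketched as a feature-combination pipeline: statistical low-level features (e.g. fixation and saccade summaries) are concatenated with mid-level features that abstract over eye-movement events, and the combined vector is fed to a classifier. The sketch below is a toy illustration with synthetic data and hypothetical mid-level features (horizontal/vertical saccade counts); it is not the paper's actual 26+24 feature set or its classifier configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

def low_level_features(fix_durations, sacc_amplitudes):
    """Statistical summaries of raw eye-movement properties."""
    return np.array([
        fix_durations.mean(), fix_durations.std(),
        sacc_amplitudes.mean(), sacc_amplitudes.std(),
    ])

def mid_level_features(sacc_directions):
    """Abstractions over low-level events with some knowledge of the
    activity but not the stimulus -- here, hypothetical counts of
    roughly horizontal vs. vertical saccades."""
    horiz = np.sum(np.abs(np.cos(sacc_directions)) > 0.7)
    vert = np.sum(np.abs(np.sin(sacc_directions)) > 0.7)
    return np.array([horiz, vert])

# Synthetic dataset: two toy "activities" with different gaze statistics
# (e.g. longer fixations and larger saccades for one of them).
X, y = [], []
for label, (dur_mu, amp_mu) in enumerate([(200.0, 2.0), (350.0, 5.0)]):
    for _ in range(100):
        durs = rng.normal(dur_mu, 30.0, size=50)   # fixation durations (ms)
        amps = rng.normal(amp_mu, 0.5, size=49)    # saccade amplitudes (deg)
        dirs = rng.uniform(0.0, 2 * np.pi, size=49)  # saccade directions (rad)
        feats = np.concatenate([low_level_features(durs, amps),
                                mid_level_features(dirs)])
        X.append(feats)
        y.append(label)
X, y = np.array(X), np.array(y)

# Train a classifier on the combined low- and mid-level feature vectors.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(f1_score(y_te, clf.predict(X_te), average="macro"))
```

On this well-separated synthetic data the macro F1 is near 1.0; the paper's reported 0.72 reflects the much harder eight-activity, 24-participant setting.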