
EyeContext: Recognition of High-level Contextual Cues from Human Visual Behaviour

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

EyeContext: Recognition of High-level Contextual Cues from Human Visual Behaviour. / Bulling, Andreas; Weichel, Christian; Gellersen, Hans.
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13). ACM, 2013. p. 305-308. https://doi.org/10.1145/2470654.2470697

Harvard

Bulling, A, Weichel, C & Gellersen, H 2013, EyeContext: Recognition of High-level Contextual Cues from Human Visual Behaviour. in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13). ACM, pp. 305-308. https://doi.org/10.1145/2470654.2470697

APA

Bulling, A., Weichel, C., & Gellersen, H. (2013). EyeContext: Recognition of High-level Contextual Cues from Human Visual Behaviour. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13) (pp. 305-308). ACM. https://doi.org/10.1145/2470654.2470697

Vancouver

Bulling A, Weichel C, Gellersen H. EyeContext: Recognition of High-level Contextual Cues from Human Visual Behaviour. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13). ACM. 2013. p. 305-308. doi: 10.1145/2470654.2470697

Author

Bulling, Andreas ; Weichel, Christian ; Gellersen, Hans. / EyeContext: Recognition of High-level Contextual Cues from Human Visual Behaviour. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13). ACM, 2013. pp. 305-308

Bibtex

@inproceedings{2e694665dba24808b526698c80191b7b,
title = "EyeContext: Recognition of High-level Contextual Cues from Human Visual Behaviour",
abstract = "In this work we present EyeContext, a system to infer high-level contextual cues from human visual behaviour. We conducted a user study to record eye movements of four participants over a full day of their daily life, totalling 42.5 hours of eye movement data. Participants were asked to self-annotate four non-mutually exclusive cues: social (interacting with somebody vs. no interaction), cognitive (concentrated work vs. leisure), physical (physically active vs. not active), and spatial (inside vs. outside a building). We evaluate a proof-of-concept EyeContext system that combines encoding of eye movements into strings and a spectrum string kernel support vector machine (SVM) classifier. Our results demonstrate the large information content available in long-term human visual behaviour and opens up new venues for research on eye-based behavioural monitoring and life logging.",
author = "Andreas Bulling and Christian Weichel and Hans Gellersen",
year = "2013",
doi = "10.1145/2470654.2470697",
language = "English",
pages = "305--308",
booktitle = "Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13)",
publisher = "ACM",

}

RIS

TY - GEN

T1 - EyeContext: Recognition of High-level Contextual Cues from Human Visual Behaviour

AU - Bulling, Andreas

AU - Weichel, Christian

AU - Gellersen, Hans

PY - 2013

Y1 - 2013

N2 - In this work we present EyeContext, a system to infer high-level contextual cues from human visual behaviour. We conducted a user study to record eye movements of four participants over a full day of their daily life, totalling 42.5 hours of eye movement data. Participants were asked to self-annotate four non-mutually exclusive cues: social (interacting with somebody vs. no interaction), cognitive (concentrated work vs. leisure), physical (physically active vs. not active), and spatial (inside vs. outside a building). We evaluate a proof-of-concept EyeContext system that combines an encoding of eye movements into strings with a spectrum string kernel support vector machine (SVM) classifier. Our results demonstrate the large information content available in long-term human visual behaviour and open up new avenues for research on eye-based behavioural monitoring and life logging.

AB - In this work we present EyeContext, a system to infer high-level contextual cues from human visual behaviour. We conducted a user study to record eye movements of four participants over a full day of their daily life, totalling 42.5 hours of eye movement data. Participants were asked to self-annotate four non-mutually exclusive cues: social (interacting with somebody vs. no interaction), cognitive (concentrated work vs. leisure), physical (physically active vs. not active), and spatial (inside vs. outside a building). We evaluate a proof-of-concept EyeContext system that combines an encoding of eye movements into strings with a spectrum string kernel support vector machine (SVM) classifier. Our results demonstrate the large information content available in long-term human visual behaviour and open up new avenues for research on eye-based behavioural monitoring and life logging.

U2 - 10.1145/2470654.2470697

DO - 10.1145/2470654.2470697

M3 - Conference contribution/Paper

SP - 305

EP - 308

BT - Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13)

PB - ACM

ER -
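
Method sketch

The abstract describes the pipeline as an encoding of eye movements into strings combined with a spectrum string kernel SVM. Below is a minimal Python sketch of that idea, assuming scikit-learn's precomputed-kernel SVM; the saccade-direction alphabet (L/R/U/D), kernel order, toy strings, and labels are invented for illustration and are not the authors' implementation. Since each of the paper's four cues is binary, one such classifier per cue would apply.

from collections import Counter

import numpy as np
from sklearn.svm import SVC

def spectrum_kernel(s, t, p=3):
    # p-spectrum kernel: for every length-p substring shared by s and t,
    # multiply its occurrence counts in the two strings and sum.
    cs = Counter(s[i:i + p] for i in range(len(s) - p + 1))
    ct = Counter(t[i:i + p] for i in range(len(t) - p + 1))
    return sum(cs[u] * ct[u] for u in cs.keys() & ct.keys())

def gram_matrix(X, Y, p=3):
    # Pairwise kernel values between two lists of strings.
    return np.array([[spectrum_kernel(x, y, p) for y in Y] for x in X])

# Toy gaze strings: one character per saccade direction (L/R/U/D),
# a hypothetical symbolisation, not the paper's encoding.
train = ["LRLRUDLR", "UDUDLRUD", "LRLRLRUD", "UDUDUDLR"]
labels = [1, 0, 1, 0]  # e.g. 1 = concentrated work, 0 = leisure

clf = SVC(kernel="precomputed")
clf.fit(gram_matrix(train, train), labels)

print(clf.predict(gram_matrix(["LRLRUDUD"], train)))  # predicted cue label

Passing kernel="precomputed" lets the SVM consume the string-kernel Gram matrix directly, which is the standard way to use a custom kernel such as the spectrum kernel with scikit-learn.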