
EyeContext: Recognition of High-level Contextual Cues from Human Visual Behaviour

Research output: Contribution in Book/Report/Proceedings › Paper

Published

Publication date: 2013
Host publication: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13)
Publisher: ACM
Pages: 305-308
Original language: English

Abstract

In this work we present EyeContext, a system to infer high-level contextual cues from human visual behaviour. We conducted a user study to record eye movements of four participants over a full day of their daily life, totalling 42.5 hours of eye movement data. Participants were asked to self-annotate four non-mutually exclusive cues: social (interacting with somebody vs. no interaction), cognitive (concentrated work vs. leisure), physical (physically active vs. not active), and spatial (inside vs. outside a building). We evaluate a proof-of-concept EyeContext system that combines encoding of eye movements into strings and a spectrum string kernel support vector machine (SVM) classifier. Our results demonstrate the large information content available in long-term human visual behaviour and open up new avenues for research on eye-based behavioural monitoring and life logging.
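For illustration, here is a minimal Python sketch of the kind of pipeline the abstract describes: eye-movement data encoded as symbol strings and classified with a spectrum string kernel SVM. The symbol alphabet, the example strings, and the labels below are hypothetical placeholders, not the paper's actual encoding or data.

```python
# Sketch of a spectrum string kernel SVM over symbolically encoded
# eye movements. Assumptions (not from the paper): a four-symbol
# saccade-direction alphabet and toy example strings/labels.
from itertools import product
import numpy as np
from sklearn.svm import SVC

ALPHABET = "LRUD"  # hypothetical saccade-direction symbols
K = 3              # spectrum kernel substring (k-mer) length

def spectrum_features(s, k=K, alphabet=ALPHABET):
    """Count occurrences of every length-k substring over the alphabet."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    v = np.zeros(len(kmers))
    for i in range(len(s) - k + 1):
        v[index[s[i:i + k]]] += 1
    return v

def spectrum_kernel(X, Y):
    """Spectrum kernel = inner products of k-mer count vectors."""
    return X @ Y.T

# Toy training data: encoded eye-movement strings with a binary
# context label (e.g. social interaction vs. none).
strings = ["LRLRUD", "UDUDLR", "LLLRRR", "UUDDUU"]
labels  = [1, 0, 1, 0]

X = np.array([spectrum_features(s) for s in strings])
clf = SVC(kernel="precomputed")
clf.fit(spectrum_kernel(X, X), labels)

# Classify a new encoded sequence via its kernel row against training data.
test = np.array([spectrum_features("LRLRLR")])
print(clf.predict(spectrum_kernel(test, X)))
```

Precomputing the Gram matrix lets a standard SVM operate on string data: the spectrum kernel compares sequences by shared k-mer counts without ever materialising the sequences inside the classifier.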
