
GAVIN: Gaze-Assisted Voice-Based Implicit Note-taking

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

GAVIN: Gaze-Assisted Voice-Based Implicit Note-taking. / Khan, A.A.; Newn, J.; Kelly, R.M. et al.
In: ACM Transactions on Computer-Human Interaction, Vol. 28, No. 4, 26, 31.08.2021.

Harvard

Khan, AA, Newn, J, Kelly, RM, Srivastava, N, Bailey, J & Velloso, E 2021, 'GAVIN: Gaze-Assisted Voice-Based Implicit Note-taking', ACM Transactions on Computer-Human Interaction, vol. 28, no. 4, 26. https://doi.org/10.1145/3453988

APA

Khan, A. A., Newn, J., Kelly, R. M., Srivastava, N., Bailey, J., & Velloso, E. (2021). GAVIN: Gaze-Assisted Voice-Based Implicit Note-taking. ACM Transactions on Computer-Human Interaction, 28(4), Article 26. https://doi.org/10.1145/3453988

Vancouver

Khan AA, Newn J, Kelly RM, Srivastava N, Bailey J, Velloso E. GAVIN: Gaze-Assisted Voice-Based Implicit Note-taking. ACM Transactions on Computer-Human Interaction. 2021 Aug 31;28(4):26. Epub 2021 Aug 11. doi: 10.1145/3453988

Author

Khan, A.A.; Newn, J.; Kelly, R.M. et al. / GAVIN: Gaze-Assisted Voice-Based Implicit Note-taking. In: ACM Transactions on Computer-Human Interaction. 2021; Vol. 28, No. 4.

BibTeX

@article{9d94d81c3e87414fb6df8ececa465067,
title = "GAVIN: Gaze-Assisted Voice-Based Implicit Note-taking",
abstract = "Annotation is an effective reading strategy people often undertake while interacting with digital text. It involves highlighting pieces of text and making notes about them. Annotating while reading in a desktop environment is considered trivial, but in a mobile setting, where people read on hand-held devices, the task of highlighting and typing notes on a mobile display is challenging. In this article, we introduce GAVIN, a gaze-assisted voice note-taking application, which enables readers to seamlessly take voice notes on digital documents by implicitly anchoring them to text passages. We first conducted a contextual enquiry focusing on participants{\textquoteright} note-taking practices on digital documents. Using these findings, we propose a method which leverages eye-tracking and machine learning techniques to annotate voice notes with reference text passages. To evaluate our approach, we recruited 32 participants to perform voice note-taking. Following this, we trained a classifier on the collected data to predict the text passages where participants made voice notes. Lastly, we employed the classifier to build GAVIN and conducted a user study to demonstrate the feasibility of the system. This research demonstrates the feasibility of using gaze as a resource for implicit anchoring of voice notes, enabling the design of systems that allow users to record voice notes with minimal effort and high accuracy.",
author = "A.A. Khan and J. Newn and R.M. Kelly and Namrata Srivastava and James Bailey and Eduardo Velloso",
year = "2021",
month = aug,
day = "31",
doi = "10.1145/3453988",
language = "English",
volume = "28",
journal = "ACM Transactions on Computer-Human Interaction",
issn = "1073-0516",
publisher = "Association for Computing Machinery (ACM)",
number = "4",

}

RIS

TY - JOUR

T1 - GAVIN

T2 - Gaze-Assisted Voice-Based Implicit Note-taking

AU - Khan, A.A.

AU - Newn, J.

AU - Kelly, R.M.

AU - Srivastava, Namrata

AU - Bailey, James

AU - Velloso, Eduardo

PY - 2021/8/31

Y1 - 2021/8/31

N2 - Annotation is an effective reading strategy people often undertake while interacting with digital text. It involves highlighting pieces of text and making notes about them. Annotating while reading in a desktop environment is considered trivial, but in a mobile setting, where people read on hand-held devices, the task of highlighting and typing notes on a mobile display is challenging. In this article, we introduce GAVIN, a gaze-assisted voice note-taking application, which enables readers to seamlessly take voice notes on digital documents by implicitly anchoring them to text passages. We first conducted a contextual enquiry focusing on participants’ note-taking practices on digital documents. Using these findings, we propose a method which leverages eye-tracking and machine learning techniques to annotate voice notes with reference text passages. To evaluate our approach, we recruited 32 participants to perform voice note-taking. Following this, we trained a classifier on the collected data to predict the text passages where participants made voice notes. Lastly, we employed the classifier to build GAVIN and conducted a user study to demonstrate the feasibility of the system. This research demonstrates the feasibility of using gaze as a resource for implicit anchoring of voice notes, enabling the design of systems that allow users to record voice notes with minimal effort and high accuracy.

AB - Annotation is an effective reading strategy people often undertake while interacting with digital text. It involves highlighting pieces of text and making notes about them. Annotating while reading in a desktop environment is considered trivial, but in a mobile setting, where people read on hand-held devices, the task of highlighting and typing notes on a mobile display is challenging. In this article, we introduce GAVIN, a gaze-assisted voice note-taking application, which enables readers to seamlessly take voice notes on digital documents by implicitly anchoring them to text passages. We first conducted a contextual enquiry focusing on participants’ note-taking practices on digital documents. Using these findings, we propose a method which leverages eye-tracking and machine learning techniques to annotate voice notes with reference text passages. To evaluate our approach, we recruited 32 participants to perform voice note-taking. Following this, we trained a classifier on the collected data to predict the text passages where participants made voice notes. Lastly, we employed the classifier to build GAVIN and conducted a user study to demonstrate the feasibility of the system. This research demonstrates the feasibility of using gaze as a resource for implicit anchoring of voice notes, enabling the design of systems that allow users to record voice notes with minimal effort and high accuracy.

U2 - 10.1145/3453988

DO - 10.1145/3453988

M3 - Journal article

VL - 28

JO - ACM Transactions on Computer-Human Interaction

JF - ACM Transactions on Computer-Human Interaction

SN - 1073-0516

IS - 4

M1 - 26

ER -