
Towards a Gaze-Informed Movement Intention Model for Robot-Assisted Upper-Limb Rehabilitation

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 9/12/2021
Host publication: 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC)
Publisher: IEEE
Number of pages: 4
ISBN (electronic): 9781728111797
ISBN (print): 9781728111803
Original language: English

Publication series

Name: Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS
ISSN (Print): 1557-170X

Abstract

Gaze-based intention detection has been explored for robot-assisted neuro-rehabilitation in recent years. As eye movements often precede hand movements, robotic devices can use gaze information to augment the detection of movement intention in upper-limb rehabilitation. However, due to the likely practical drawbacks of using head-mounted eye trackers and the limited generalisability of the algorithms, gaze-informed approaches have not yet been used in clinical practice.

This paper introduces a preliminary model for gaze-informed movement intention that separates the spatial component of the intention, obtained from gaze, from the temporal component, obtained from movement. We leverage the latter to isolate the relevant gaze information occurring just before movement initiation. We evaluated our approach with six healthy individuals using an experimental setup that employed a screen-mounted eye tracker. The results showed a prediction accuracy of 60% and 73% for an arbitrary target choice and an imposed target choice, respectively.

From these findings, we expect that the model could 1) generalise better to individuals with movement impairment (by not considering movement direction), 2) allow a generalisation to more complex, multi-stage actions including several sub-movements, and 3) facilitate more natural human-robot interaction and empower patients with the agency to decide movement onset. Overall, the paper demonstrates the potential of a gaze-movement model and of screen-based eye trackers for robot-assisted upper-limb rehabilitation.
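As a rough illustration of the idea described in the abstract, the sketch below separates the temporal component (movement onset detected from hand speed) from the spatial component (gaze location shortly before onset) and predicts the target nearest to that gaze location. This is a minimal sketch under assumed names, thresholds, and data shapes (`predict_target`, `speed_thresh`, `window` are all hypothetical), not the paper's actual implementation.

```python
import numpy as np

def predict_target(gaze_xy, hand_speed, targets, window=30, speed_thresh=0.05):
    """Illustrative gaze-informed intention model (hypothetical parameters).

    gaze_xy      : (T, 2) array of on-screen gaze samples
    hand_speed   : (T,) array of hand/end-effector speed
    targets      : (K, 2) array of candidate target positions
    window       : number of gaze samples kept before movement onset
    speed_thresh : speed threshold used to detect movement onset
    Returns the index of the predicted target, or None.
    """
    # Temporal component (from movement): first sample where hand
    # speed crosses the threshold is taken as movement onset.
    above = np.flatnonzero(hand_speed > speed_thresh)
    if above.size == 0:
        return None  # no movement initiation detected yet
    onset = above[0]

    # Spatial component (from gaze): average gaze location in a
    # short window just before movement onset.
    pre_onset_gaze = gaze_xy[max(0, onset - window):onset]
    if len(pre_onset_gaze) == 0:
        return None
    gaze_point = pre_onset_gaze.mean(axis=0)

    # Predicted target: the candidate closest to the pre-onset gaze.
    dists = np.linalg.norm(targets - gaze_point, axis=1)
    return int(np.argmin(dists))
```

Note that, as in the abstract, this sketch never uses movement direction: the spatial prediction comes entirely from gaze, which is what would let such a model generalise to individuals whose movements are impaired.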