
Results for "Visual attention"

Publications & Outputs

  1. Real-time head-based deep-learning model for gaze probability regions in collaborative VR

    Bovo, R., Giunchi, D., Sidenmark, L., Costanza, E., Gellersen, H. & Heinis, T., 11/06/2022, ACM Symposium on Eye Tracking Research and Applications. New York: ACM, 8 p. 6

    Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

  2. The multimodal nature of spoken word processing in the visual world: testing the predictions of alternative models of multimodal integration

    Smith, A. C., Monaghan, P. & Huettig, F., 04/2017, In: Journal of Memory and Language. 93, p. 276-303, 28 p.

    Research output: Contribution to Journal/Magazine › Journal article › peer-review

  3. Literacy effects on language and vision: emergent effects from an amodal shared resource (ASR) computational model

    Smith, A. C., Monaghan, P. & Huettig, F., 12/2014, In: Cognitive Psychology. 75, p. 28-54, 27 p.

    Research output: Contribution to Journal/Magazine › Journal article › peer-review

  4. Spatial language, visual attention, and perceptual simulation

    Coventry, K. R., Lynott, D., Cangelosi, A., Monrouxe, L., Joyce, D. & Richardson, D. C., 03/2010, In: Brain and Language. 112, 3, p. 202-213, 12 p.

    Research output: Contribution to Journal/Magazine › Journal article › peer-review