Learning Computational Models of Video Memorability from fMRI Brain Imaging

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Learning Computational Models of Video Memorability from fMRI Brain Imaging. / Han, Junwei; Chen, Changyuan; Shao, Ling et al.
In: IEEE Transactions on Cybernetics, Vol. 45, No. 8, 08.2015, p. 1692-1703.


Harvard

Han, J, Chen, C, Shao, L, Hu, X, Han, J & Liu, T 2015, 'Learning Computational Models of Video Memorability from fMRI Brain Imaging', IEEE Transactions on Cybernetics, vol. 45, no. 8, pp. 1692-1703. https://doi.org/10.1109/TCYB.2014.2358647

APA

Han, J., Chen, C., Shao, L., Hu, X., Han, J., & Liu, T. (2015). Learning Computational Models of Video Memorability from fMRI Brain Imaging. IEEE Transactions on Cybernetics, 45(8), 1692-1703. https://doi.org/10.1109/TCYB.2014.2358647

Vancouver

Han J, Chen C, Shao L, Hu X, Han J, Liu T. Learning Computational Models of Video Memorability from fMRI Brain Imaging. IEEE Transactions on Cybernetics. 2015 Aug;45(8):1692-1703. Epub 2014 Oct 9. doi: 10.1109/TCYB.2014.2358647

Author

Han, Junwei ; Chen, Changyuan ; Shao, Ling et al. / Learning Computational Models of Video Memorability from fMRI Brain Imaging. In: IEEE Transactions on Cybernetics. 2015 ; Vol. 45, No. 8. pp. 1692-1703.

BibTeX

@article{3b5c231ccdf44506a188171173d15617,
title = "Learning Computational Models of Video Memorability from fMRI Brain Imaging",
abstract = "Generally, various visual media are unequally memorable by the human brain. This paper looks into a new direction of modeling the memorability of video clips and automatically predicting how memorable they are by learning from brain functional magnetic resonance imaging (fMRI). We propose a novel computational framework by integrating the power of low-level audiovisual features and brain activity decoding via fMRI. Initially, a user study experiment is performed to create a ground truth database for measuring video memorability and a set of effective low-level audiovisual features is examined in this database. Then, human subjects' brain fMRI data are obtained when they are watching the video clips. The fMRI-derived features that convey the brain activity of memorizing videos are extracted using a universal brain reference system. Finally, due to the fact that fMRI scanning is expensive and time-consuming, a computational model is learned on our benchmark dataset with the objective of maximizing the correlation between the low-level audiovisual features and the fMRI-derived features using joint subspace learning. The learned model can then automatically predict the memorability of videos without fMRI scans. Evaluations on publically available image and video databases demonstrate the effectiveness of the proposed framework.",
author = "Junwei Han and Changyuan Chen and Ling Shao and Xintao Hu and Jungong Han and Tianming Liu",
year = "2015",
month = aug,
doi = "10.1109/TCYB.2014.2358647",
language = "English",
volume = "45",
pages = "1692--1703",
journal = "IEEE Transactions on Cybernetics",
issn = "2168-2267",
publisher = "IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC",
number = "8",

}

RIS

TY - JOUR

T1 - Learning Computational Models of Video Memorability from fMRI Brain Imaging

AU - Han, Junwei

AU - Chen, Changyuan

AU - Shao, Ling

AU - Hu, Xintao

AU - Han, Jungong

AU - Liu, Tianming

PY - 2015/8

Y1 - 2015/8

N2 - Generally, various visual media are unequally memorable by the human brain. This paper looks into a new direction of modeling the memorability of video clips and automatically predicting how memorable they are by learning from brain functional magnetic resonance imaging (fMRI). We propose a novel computational framework by integrating the power of low-level audiovisual features and brain activity decoding via fMRI. Initially, a user study experiment is performed to create a ground truth database for measuring video memorability and a set of effective low-level audiovisual features is examined in this database. Then, human subjects' brain fMRI data are obtained when they are watching the video clips. The fMRI-derived features that convey the brain activity of memorizing videos are extracted using a universal brain reference system. Finally, due to the fact that fMRI scanning is expensive and time-consuming, a computational model is learned on our benchmark dataset with the objective of maximizing the correlation between the low-level audiovisual features and the fMRI-derived features using joint subspace learning. The learned model can then automatically predict the memorability of videos without fMRI scans. Evaluations on publicly available image and video databases demonstrate the effectiveness of the proposed framework.

AB - Generally, various visual media are unequally memorable by the human brain. This paper looks into a new direction of modeling the memorability of video clips and automatically predicting how memorable they are by learning from brain functional magnetic resonance imaging (fMRI). We propose a novel computational framework by integrating the power of low-level audiovisual features and brain activity decoding via fMRI. Initially, a user study experiment is performed to create a ground truth database for measuring video memorability and a set of effective low-level audiovisual features is examined in this database. Then, human subjects' brain fMRI data are obtained when they are watching the video clips. The fMRI-derived features that convey the brain activity of memorizing videos are extracted using a universal brain reference system. Finally, due to the fact that fMRI scanning is expensive and time-consuming, a computational model is learned on our benchmark dataset with the objective of maximizing the correlation between the low-level audiovisual features and the fMRI-derived features using joint subspace learning. The learned model can then automatically predict the memorability of videos without fMRI scans. Evaluations on publicly available image and video databases demonstrate the effectiveness of the proposed framework.

U2 - 10.1109/TCYB.2014.2358647

DO - 10.1109/TCYB.2014.2358647

M3 - Journal article

VL - 45

SP - 1692

EP - 1703

JO - IEEE Transactions on Cybernetics

JF - IEEE Transactions on Cybernetics

SN - 2168-2267

IS - 8

ER -
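
Illustrative code sketch

The abstract describes learning a joint subspace that maximizes the correlation between low-level audiovisual features and fMRI-derived features, so that memorability can later be predicted from audiovisual features alone, without fMRI scans. The Python sketch below illustrates only that general idea and is not the authors' implementation: canonical correlation analysis (CCA) stands in for the paper's joint subspace learning, and the feature dimensions, random data, and ridge regressor are hypothetical assumptions.

import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical training data: one row per training video clip.
X_av = rng.standard_normal((200, 64))    # low-level audiovisual features
Y_fmri = rng.standard_normal((200, 32))  # fMRI-derived features for the same clips
mem_scores = rng.random(200)             # ground-truth memorability scores from a user study

# 1) Joint subspace: project both feature views so that their components
#    are maximally correlated (CCA stands in for joint subspace learning here).
cca = CCA(n_components=10)
X_c, _ = cca.fit_transform(X_av, Y_fmri)

# 2) Fit a memorability predictor on the audiovisual projection only,
#    so no fMRI data are needed at prediction time.
reg = Ridge(alpha=1.0).fit(X_c, mem_scores)

# 3) Score new, unseen clips from their audiovisual features alone.
X_new = rng.standard_normal((5, 64))
print(reg.predict(cca.transform(X_new)))

At test time only the audiovisual features are projected and scored, which mirrors the abstract's point that the learned model removes the need for expensive and time-consuming fMRI scanning when predicting the memorability of new videos.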