
Multimodal analysis and prediction of latent user dimensions

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Multimodal analysis and prediction of latent user dimensions. / Wendlandt, Laura; Mihalcea, Rada; Boyd, Ryan L. et al.
Social Informatics - 9th International Conference, SocInfo 2017, Proceedings. ed. / Giovanni Luca Ciampaglia; Taha Yasseri; Afra Mashhadi. Springer-Verlag, 2017. p. 323-340 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 10539 LNCS).

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Harvard

Wendlandt, L, Mihalcea, R, Boyd, RL & Pennebaker, JW 2017, Multimodal analysis and prediction of latent user dimensions. in GL Ciampaglia, T Yasseri & A Mashhadi (eds), Social Informatics - 9th International Conference, SocInfo 2017, Proceedings. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 10539 LNCS, Springer-Verlag, pp. 323-340, 9th International Conference on Social Informatics, SocInfo 2017, Oxford, United Kingdom, 13/09/17. https://doi.org/10.1007/978-3-319-67217-5_20

APA

Wendlandt, L., Mihalcea, R., Boyd, R. L., & Pennebaker, J. W. (2017). Multimodal analysis and prediction of latent user dimensions. In G. L. Ciampaglia, T. Yasseri, & A. Mashhadi (Eds.), Social Informatics - 9th International Conference, SocInfo 2017, Proceedings (pp. 323-340). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 10539 LNCS). Springer-Verlag. https://doi.org/10.1007/978-3-319-67217-5_20

Vancouver

Wendlandt L, Mihalcea R, Boyd RL, Pennebaker JW. Multimodal analysis and prediction of latent user dimensions. In Ciampaglia GL, Yasseri T, Mashhadi A, editors, Social Informatics - 9th International Conference, SocInfo 2017, Proceedings. Springer-Verlag. 2017. p. 323-340. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). Epub 2017 Sept 3. doi: 10.1007/978-3-319-67217-5_20

Author

Wendlandt, Laura ; Mihalcea, Rada ; Boyd, Ryan L. et al. / Multimodal analysis and prediction of latent user dimensions. Social Informatics - 9th International Conference, SocInfo 2017, Proceedings. editor / Giovanni Luca Ciampaglia ; Taha Yasseri ; Afra Mashhadi. Springer-Verlag, 2017. pp. 323-340 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).

Bibtex

@inproceedings{f99fe3eef13b470bbf4cc79f7682d4c7,
title = "Multimodal analysis and prediction of latent user dimensions",
abstract = "Humans upload over 1.8 billion digital images to the internet each day, yet the relationship between the images that a person shares with others and his/her psychological characteristics remains poorly understood. In the current research, we analyze the relationship between images, captions, and the latent demographic/psychological dimensions of personality and gender. We consider a wide range of automatically extracted visual and textual features of images/captions that are shared by a large sample of individuals Using correlational methods, we identify several visual and textual properties that show strong relationships with individual differences between participants. Additionally, we explore the task of predicting user attributes using a multimodal approach that simultaneously leverages images and their captions. Results from these experiments suggest that images alone have significant predictive power and, additionally, multimodal methods outperform both visual features and textual features in isolation when attempting to predict individual differences.",
keywords = "Analysis of latent user dimensions, Joint language/vision models, Multimodal prediction",
author = "Laura Wendlandt and Rada Mihalcea and Boyd, {Ryan L.} and Pennebaker, {James W.}",
year = "2017",
month = sep,
day = "15",
doi = "10.1007/978-3-319-67217-5_20",
language = "English",
isbn = "9783319672168",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer-Verlag",
pages = "323--340",
editor = "Ciampaglia, {Giovanni Luca} and Taha Yasseri and Afra Mashhadi",
booktitle = "Social Informatics - 9th International Conference, SocInfo 2017, Proceedings",
note = "9th International Conference on Social Informatics, SocInfo 2017 ; Conference date: 13-09-2017 Through 15-09-2017",

}

RIS

TY - GEN

T1 - Multimodal analysis and prediction of latent user dimensions

AU - Wendlandt, Laura

AU - Mihalcea, Rada

AU - Boyd, Ryan L.

AU - Pennebaker, James W.

PY - 2017/9/15

Y1 - 2017/9/15

N2 - Humans upload over 1.8 billion digital images to the internet each day, yet the relationship between the images that a person shares with others and his/her psychological characteristics remains poorly understood. In the current research, we analyze the relationship between images, captions, and the latent demographic/psychological dimensions of personality and gender. We consider a wide range of automatically extracted visual and textual features of images/captions that are shared by a large sample of individuals. Using correlational methods, we identify several visual and textual properties that show strong relationships with individual differences between participants. Additionally, we explore the task of predicting user attributes using a multimodal approach that simultaneously leverages images and their captions. Results from these experiments suggest that images alone have significant predictive power and, additionally, multimodal methods outperform both visual features and textual features in isolation when attempting to predict individual differences.

AB - Humans upload over 1.8 billion digital images to the internet each day, yet the relationship between the images that a person shares with others and his/her psychological characteristics remains poorly understood. In the current research, we analyze the relationship between images, captions, and the latent demographic/psychological dimensions of personality and gender. We consider a wide range of automatically extracted visual and textual features of images/captions that are shared by a large sample of individuals. Using correlational methods, we identify several visual and textual properties that show strong relationships with individual differences between participants. Additionally, we explore the task of predicting user attributes using a multimodal approach that simultaneously leverages images and their captions. Results from these experiments suggest that images alone have significant predictive power and, additionally, multimodal methods outperform both visual features and textual features in isolation when attempting to predict individual differences.

KW - Analysis of latent user dimensions

KW - Joint language/vision models

KW - Multimodal prediction

U2 - 10.1007/978-3-319-67217-5_20

DO - 10.1007/978-3-319-67217-5_20

M3 - Conference contribution/Paper

AN - SCOPUS:85029534370

SN - 9783319672168

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 323

EP - 340

BT - Social Informatics - 9th International Conference, SocInfo 2017, Proceedings

A2 - Ciampaglia, Giovanni Luca

A2 - Yasseri, Taha

A2 - Mashhadi, Afra

PB - Springer-Verlag

T2 - 9th International Conference on Social Informatics, SocInfo 2017

Y2 - 13 September 2017 through 15 September 2017

ER -
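
For readers who want a concrete picture of the kind of multimodal prediction described in the abstract, the following is a minimal sketch only, not the authors' implementation: it fuses placeholder per-user visual descriptors with TF-IDF features extracted from captions by simple concatenation, then fits a single classifier to a binary attribute such as gender. All data, dimensions, and names below are hypothetical and purely illustrative.

# Minimal feature-level fusion sketch (hypothetical data; not the paper's method).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical per-user data: precomputed visual descriptors (e.g., color or
# composition statistics) and the captions written by each user.
n_users = 200
visual_features = rng.normal(size=(n_users, 16))            # placeholder visual descriptors
captions = [f"caption text for user {i}" for i in range(n_users)]
labels = rng.integers(0, 2, size=n_users)                    # placeholder binary attribute

# Textual modality: TF-IDF bag-of-words over the captions.
text_features = TfidfVectorizer().fit_transform(captions).toarray()

# Multimodal fusion by simple concatenation of the two feature blocks.
fused = np.hstack([visual_features, text_features])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.25, random_state=0
)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print("toy accuracy:", accuracy_score(y_test, clf.predict(X_test)))

With real data, the placeholder visual descriptors and toy captions would be replaced by features extracted from the shared images and their actual captions; the concatenation-plus-classifier pattern is just one simple way to let both modalities contribute to the prediction.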