Final published version
Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review
| Publication date | 15/09/2017 |
| --- | --- |
| Host publication | Social Informatics - 9th International Conference, SocInfo 2017, Proceedings |
| Editors | Giovanni Luca Ciampaglia, Taha Yasseri, Afra Mashhadi |
| Publisher | Springer-Verlag |
| Pages | 323-340 |
| Number of pages | 18 |
| ISBN (print) | 9783319672168 |
| Original language | English |
| Conference | 9th International Conference on Social Informatics, SocInfo 2017 |
| --- | --- |
| Country/Territory | United Kingdom |
| City | Oxford |
| Period | 13/09/2017 → 15/09/2017 |
| Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
| --- | --- |
| Volume | 10539 LNCS |
| ISSN (print) | 0302-9743 |
| ISSN (electronic) | 1611-3349 |
Humans upload over 1.8 billion digital images to the internet each day, yet the relationship between the images that a person shares with others and their psychological characteristics remains poorly understood. In the current research, we analyze the relationship between images, captions, and the latent demographic/psychological dimensions of personality and gender. We consider a wide range of automatically extracted visual and textual features of images/captions shared by a large sample of individuals. Using correlational methods, we identify several visual and textual properties that show strong relationships with individual differences between participants. Additionally, we explore the task of predicting user attributes using a multimodal approach that simultaneously leverages images and their captions. Results from these experiments suggest that images alone carry significant predictive power and that multimodal methods outperform both visual features and textual features in isolation when predicting individual differences.
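The multimodal approach described above can be sketched as early (feature-level) fusion: visual features and caption features for each user are concatenated into one vector before a classifier is applied. This is a minimal illustration, not the paper's implementation; the feature values, user labels, and the nearest-centroid classifier below are all illustrative assumptions.

```python
from collections import defaultdict
from math import dist


def fuse(visual, textual):
    """Early fusion: concatenate the visual and textual feature vectors."""
    return visual + textual


def centroids(samples, labels):
    """Compute the per-class mean of the fused feature vectors."""
    groups = defaultdict(list)
    for x, y in zip(samples, labels):
        groups[y].append(x)
    return {y: [sum(col) / len(xs) for col in zip(*xs)]
            for y, xs in groups.items()}


def predict(cents, query):
    """Assign the class whose centroid is nearest to the query vector."""
    return min(cents, key=lambda y: dist(cents[y], query))


# Hypothetical users: (visual features, caption features, trait label).
train = [
    ([0.9, 0.1], [0.8, 0.2], "extravert"),
    ([0.8, 0.2], [0.9, 0.1], "extravert"),
    ([0.1, 0.9], [0.2, 0.8], "introvert"),
    ([0.2, 0.8], [0.1, 0.9], "introvert"),
]
fused = [fuse(v, t) for v, t, _ in train]
labels = [y for _, _, y in train]
cents = centroids(fused, labels)
print(predict(cents, fuse([0.85, 0.15], [0.7, 0.3])))  # prints "extravert"
```

A stronger model (e.g. logistic regression over both modalities) would follow the same shape: fuse first, then fit one classifier on the joint vector, which is what lets the model exploit correlations between an image and its caption.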