

A Survey of Multimodal Sarcasm Detection

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/paper › peer-review

Published
Publication date: 6/08/2024
Host publication: Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Editors: Kate Larson
Place of publication: Jeju
Publisher: International Joint Conferences on Artificial Intelligence Organization
Pages: 8020-8028
Number of pages: 9
ISBN (electronic): 9781956792041
Original language: English
Event: The 33rd International Joint Conference on Artificial Intelligence - Jeju, Korea, Republic of
Duration: 3/08/2024 to 9/08/2024

Conference

Conference: The 33rd International Joint Conference on Artificial Intelligence
Abbreviated title: IJCAI 2024
Country/Territory: Korea, Republic of
City: Jeju
Period: 3/08/24 to 9/08/24


Abstract

Sarcasm is a rhetorical device used to convey the opposite of the literal meaning of an utterance. It is widely used on social media and other forms of computer-mediated communication, motivating the development of computational models to identify it automatically. While the clear majority of approaches to sarcasm detection have been carried out on text only, detecting sarcasm often requires additional information present in tonality, facial expressions, and contextual images. This has led to the introduction of multimodal models, opening the possibility of detecting sarcasm in multiple modalities such as audio, images, text, and video. In this paper, we present the first comprehensive survey on multimodal sarcasm detection (henceforth MSD) to date. We survey papers published between 2018 and 2023 on the topic, and discuss the models and datasets used for this task. We also present future research directions in MSD.