
Electronic data

  • 2407.18738v1

    Accepted author manuscript, 482 KB, PDF document


Towards Generalized Offensive Language Identification

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

E-pub ahead of print
Publication date: 5/09/2024
Host publication: Proceedings of the 16th International Conference on Advances in Social Networks Analysis and Mining
Place of Publication: Cham
Publisher: Springer Nature
Original language: English
Event: The 16th International Conference on Advances in Social Networks Analysis and Mining - University of Calabria, Rende (CS), Calabria, Italy
Duration: 2/09/2024 - 5/09/2024
https://asonam.cpsc.ucalgary.ca/2024/

Conference

Conference: The 16th International Conference on Advances in Social Networks Analysis and Mining
Abbreviated title: ASONAM-2024
Country/Territory: Italy
City: Calabria
Period: 2/09/24 - 5/09/24
Internet address: https://asonam.cpsc.ucalgary.ca/2024/

Abstract

The prevalence of offensive content on the internet, encompassing hate speech and cyberbullying, is a pervasive issue worldwide. Consequently, it has garnered significant attention from the machine learning (ML) and natural language processing (NLP) communities, and numerous systems have been developed to automatically identify potentially harmful content and mitigate its impact. These systems can follow one of two approaches: (1) use publicly available models and application endpoints, including prompting large language models (LLMs), or (2) annotate datasets and train ML models on them. However, it is not well understood how generalizable either approach is, and the applicability of these systems is often questioned in off-domain and practical environments. This paper empirically evaluates the generalizability of offensive language detection models and datasets on a novel generalized benchmark. We answer three research questions on generalizability. Our findings will be useful in creating robust real-world offensive language detection systems.
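
To make the cross-domain evaluation idea from the abstract concrete, the sketch below trains a supervised offensive-language classifier (approach 2) on one dataset and scores it on text from another domain; a drop in off-domain macro-F1 relative to in-domain performance signals poor generalization. This is a minimal illustration only: the toy corpora and the scikit-learn pipeline are assumptions for demonstration, not the paper's benchmark, datasets, or models.

# Minimal sketch of cross-dataset generalization testing for offensive
# language detection. The inline examples are hypothetical stand-ins for
# two annotated datasets from different domains (1 = offensive, 0 = not).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

train_texts = ["you are awful", "have a nice day", "total idiot", "great work"]
train_labels = [1, 0, 1, 0]
offdomain_texts = ["what a moron", "lovely weather today"]
offdomain_labels = [1, 0]

# Approach (2) from the abstract: train an ML model on annotated data.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Off-domain evaluation: compare this score against the in-domain score
# to gauge how well the trained system generalizes.
preds = model.predict(offdomain_texts)
print("off-domain macro-F1:", f1_score(offdomain_labels, preds, average="macro"))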