

Vicarious Offense and Noise Audit of Offensive Speech Classifiers: Unifying Human and Machine Disagreement on What is Offensive

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
  • Tharindu Weerasooriya
  • Sujan Dutta
  • Tharindu Ranasinghe
  • Marcos Zampieri
  • Christopher Homan
  • Ashiqur KhudaBukhsh
Publication date: 6/12/2023
Host publication: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Place of Publication: Stroudsburg, PA
Publisher: Association for Computational Linguistics
Pages: 11648-11668
Number of pages: 21
ISBN (electronic): 9798891760608
Original language: English
Event: The 2023 Conference on Empirical Methods in Natural Language Processing - Singapore, Singapore
Duration: 6/12/2023 - 10/12/2023
https://2023.emnlp.org/

Conference

Conference: The 2023 Conference on Empirical Methods in Natural Language Processing
Abbreviated title: EMNLP 2023
Country/Territory: Singapore
City: Singapore
Period: 6/12/23 - 10/12/23
Internet address: https://2023.emnlp.org/

Abstract

Offensive speech detection is a key component of content moderation. However, what is offensive can be highly subjective. This paper investigates how machine and human moderators disagree on what is offensive when it comes to real-world social web political discourse. We show that (1) there is extensive disagreement among the moderators (humans and machines); and (2) human and large-language-model classifiers are unable to predict how other human raters will respond, based on their political leanings. For (1), we conduct a ***noise audit*** at an unprecedented scale that combines both machine and human responses. For (2), we introduce a first-of-its-kind dataset of ***vicarious offense***. Our noise audit reveals that moderation outcomes vary wildly across different machine moderators. Our experiments with human moderators suggest that political leanings combined with sensitive issues affect both first-person and vicarious offense. The dataset is available through https://github.com/Homan-Lab/voiced.
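The GitHub repository above distributes the vicarious-offense annotations used in the disagreement analysis. As a rough illustration of the kind of measurement a noise audit relies on, the hypothetical Python sketch below computes a per-item pairwise disagreement rate over moderator labels; the toy data, label names, and item keys are assumptions for illustration and do not reflect the actual structure of the released dataset.

```python
from collections import Counter
from itertools import combinations

# Hypothetical moderator labels: each post is rated "offensive" or
# "not offensive" by several moderators (humans or machine classifiers).
ratings = {
    "post_1": ["offensive", "offensive", "not offensive", "offensive"],
    "post_2": ["not offensive", "not offensive", "not offensive"],
    "post_3": ["offensive", "not offensive", "offensive", "not offensive"],
}

def pairwise_disagreement(labels):
    """Fraction of moderator pairs that assign different labels to the same item."""
    pairs = list(combinations(labels, 2))
    if not pairs:
        return 0.0
    return sum(a != b for a, b in pairs) / len(pairs)

for item, labels in ratings.items():
    counts = Counter(labels)
    print(f"{item}: counts={dict(counts)}, disagreement={pairwise_disagreement(labels):.2f}")
```

Averaging such a rate separately over human raters and machine classifiers gives only a coarse view of the variation the abstract describes; the paper's own methodology may differ.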