Final published version
Licence: CC BY: Creative Commons Attribution 4.0 International License
Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review
TY - GEN
T1 - Vicarious Offense and Noise Audit of Offensive Speech Classifiers
T2 - The 2023 Conference on Empirical Methods in Natural Language Processing
AU - Weerasooriya, Tharindu
AU - Dutta, Sujan
AU - Ranasinghe, Tharindu
AU - Zampieri, Marcos
AU - Homan, Christopher
AU - KhudaBukhsh, Ashiqur
PY - 2023/12/6
Y1 - 2023/12/6
N2 - Offensive speech detection is a key component of content moderation. However, what is offensive can be highly subjective. This paper investigates how machine and human moderators disagree on what is offensive when it comes to real-world social web political discourse. We show that (1) there is extensive disagreement among the moderators (humans and machines); and (2) human and large-language-model classifiers are unable to predict how other human raters will respond, based on their political leanings. For (1), we conduct a noise audit at an unprecedented scale that combines both machine and human responses. For (2), we introduce a first-of-its-kind dataset of vicarious offense. Our noise audit reveals that moderation outcomes vary wildly across different machine moderators. Our experiments with human moderators suggest that political leanings combined with sensitive issues affect both first-person and vicarious offense. The dataset is available through https://github.com/Homan-Lab/voiced.
AB - Offensive speech detection is a key component of content moderation. However, what is offensive can be highly subjective. This paper investigates how machine and human moderators disagree on what is offensive when it comes to real-world social web political discourse. We show that (1) there is extensive disagreement among the moderators (humans and machines); and (2) human and large-language-model classifiers are unable to predict how other human raters will respond, based on their political leanings. For (1), we conduct a noise audit at an unprecedented scale that combines both machine and human responses. For (2), we introduce a first-of-its-kind dataset of vicarious offense. Our noise audit reveals that moderation outcomes vary wildly across different machine moderators. Our experiments with human moderators suggest that political leanings combined with sensitive issues affect both first-person and vicarious offense. The dataset is available through https://github.com/Homan-Lab/voiced.
U2 - 10.18653/v1/2023.emnlp-main.713
DO - 10.18653/v1/2023.emnlp-main.713
M3 - Conference contribution/Paper
SP - 11648
EP - 11668
BT - Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
PB - Association for Computational Linguistics
CY - Stroudsburg, PA
Y2 - 6 December 2023 through 10 December 2023
ER -