
Rater Cohesion and Quality from a Vicarious Perspective

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › Peer-reviewed

Published
  • Deepak Pandita
  • Tharindu Weerasooriya
  • Sujan Dutta
  • Sarah Luger
  • Tharindu Ranasinghe
  • Ashiqur KhudaBukhsh
  • Marcos Zampieri
  • Christopher Homan
Publication date: 12/11/2024
Host publication: Findings of the Association for Computational Linguistics: EMNLP 2024
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Place of Publication: Kerrville, Texas
Publisher: Association for Computational Linguistics (ACL Anthology)
Pages: 5149-5162
Number of pages: 14
ISBN (electronic): 9798891761681
Original language: English
Event: The 2024 Conference on Empirical Methods in Natural Language Processing - Miami, United States
Duration: 12/11/2024 - 16/11/2024
https://2024.emnlp.org/

Conference

Conference: The 2024 Conference on Empirical Methods in Natural Language Processing
Country/Territory: United States
City: Miami
Period: 12/11/24 - 16/11/24
Internet address: https://2024.emnlp.org/

Abstract

Human feedback is essential for building human-centered AI systems across domains where disagreement is prevalent, such as AI safety, content moderation, or sentiment analysis. Many disagreements, particularly in politically charged settings, arise because raters have opposing values or beliefs. Vicarious annotation is a method for breaking down disagreement by asking raters how they think others would annotate the data. In this paper, we explore the use of vicarious annotation with analytical methods for moderating rater disagreement. We employ rater cohesion metrics to study the potential influence of political affiliations and demographic backgrounds on raters’ perceptions of offense. Additionally, we utilize CrowdTruth’s rater quality metrics, which consider the demographics of the raters, to score the raters and their annotations. We study how the rater quality metrics influence the in-group and cross-group rater cohesion across the personal and vicarious levels.
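
The Python sketch below (not the authors' implementation) illustrates the kind of comparison the abstract describes: in-group versus cross-group rater cohesion, computed separately on personal and vicarious labels. It uses plain pairwise percent agreement as a stand-in cohesion measure and treats all raters equally, whereas the paper additionally scores raters with CrowdTruth quality metrics; all rater IDs, group names, and labels here are hypothetical.

# Minimal sketch: in-group vs. cross-group cohesion on personal and vicarious labels.
# Cohesion is approximated by mean pairwise percent agreement; all data are made up.
from itertools import combinations, product
from statistics import mean

# labels[rater_id] = {item_id: label}, one dictionary per annotation level
personal = {
    "r1": {1: "offensive", 2: "not_offensive", 3: "offensive"},
    "r2": {1: "offensive", 2: "offensive", 3: "offensive"},
    "r3": {1: "not_offensive", 2: "not_offensive", 3: "offensive"},
    "r4": {1: "not_offensive", 2: "offensive", 3: "not_offensive"},
}
vicarious = {  # how each rater believes others would label the same items
    "r1": {1: "not_offensive", 2: "offensive", 3: "offensive"},
    "r2": {1: "not_offensive", 2: "offensive", 3: "not_offensive"},
    "r3": {1: "offensive", 2: "not_offensive", 3: "offensive"},
    "r4": {1: "offensive", 2: "not_offensive", 3: "offensive"},
}
# hypothetical grouping, e.g. by political affiliation or demographic background
groups = {"group_a": ["r1", "r2"], "group_b": ["r3", "r4"]}

def pairwise_agreement(labels, rater_x, rater_y):
    """Fraction of shared items on which two raters gave the same label."""
    shared = set(labels[rater_x]) & set(labels[rater_y])
    return mean(labels[rater_x][i] == labels[rater_y][i] for i in shared)

def in_group_cohesion(labels, raters):
    """Mean pairwise agreement among raters within one group."""
    return mean(pairwise_agreement(labels, x, y) for x, y in combinations(raters, 2))

def cross_group_cohesion(labels, raters_a, raters_b):
    """Mean pairwise agreement between raters drawn from two different groups."""
    return mean(pairwise_agreement(labels, x, y) for x, y in product(raters_a, raters_b))

for level_name, labels in [("personal", personal), ("vicarious", vicarious)]:
    for group_name, raters in groups.items():
        print(f"{level_name} in-group cohesion ({group_name}): "
              f"{in_group_cohesion(labels, raters):.2f}")
    print(f"{level_name} cross-group cohesion: "
          f"{cross_group_cohesion(labels, groups['group_a'], groups['group_b']):.2f}")

Comparing the in-group and cross-group values across the two levels mirrors the analysis sketched in the abstract; the paper's actual cohesion and rater quality measures are more involved than plain agreement.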