
Electronic data

  • 2024.trac-1.2

    Final published version, 387 KB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

A Federated Learning Approach to Privacy Preserving Offensive Language Identification

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 20/05/2024
Host publication: TRAC-2024: The Fourth Workshop on Threat, Aggression & Cyberbullying @LREC-COLING-2024: Workshop proceedings
Publisher: European Language Resources Association (ELRA)
ISBN (print): 9782493814470
Original language: English
Event: Fourth Workshop on Threat, Aggression & Cyberbullying @ LREC-COLING-2024
Duration: 20/05/2024 → …

Workshop

Workshop: Fourth Workshop on Threat, Aggression & Cyberbullying @ LREC-COLING-2024
Period: 20/05/24 → …

Abstract

The spread of various forms of offensive speech online is an important concern on social media. While platforms have been investing heavily in ways of coping with this problem, the question of privacy remains largely unaddressed. Models used to detect offensive language on social media are trained and/or fine-tuned on large amounts of data that are often stored on centralized servers. Since most social media data originates from end users, we propose a privacy-preserving decentralized architecture for identifying offensive language online by introducing Federated Learning (FL) to offensive language identification. FL is a decentralized architecture that allows multiple models to be trained locally without the need for data sharing, hence preserving users' privacy. We propose a model fusion approach to perform FL. We trained multiple deep learning models on four publicly available English benchmark datasets (AHSD, HASOC, HateXplain, OLID) and evaluated their performance in detail. We also present initial cross-lingual experiments in English and Spanish. We show that the proposed model fusion approach outperforms baselines on all datasets while preserving privacy.
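
As a rough illustration of the federated setup described in the abstract, the sketch below fuses locally trained models by averaging their parameters (FedAvg-style), assuming a PyTorch setup. The toy linear classifier, synthetic client data, and all function names are illustrative placeholders, not the models, datasets, or exact fusion method used in the paper.

```python
# Minimal sketch of federated learning with model fusion by parameter averaging.
# Illustrative only: the classifier, client data, and names are placeholders.
import copy

import torch
import torch.nn as nn


def local_update(global_model: nn.Module, inputs: torch.Tensor, labels: torch.Tensor,
                 epochs: int = 1, lr: float = 0.01) -> dict:
    """Train a copy of the global model on one client's private data and
    return only the resulting weights; the raw data never leaves the client."""
    model = copy.deepcopy(global_model)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), labels)
        loss.backward()
        optimizer.step()
    return model.state_dict()


def fuse(client_states: list) -> dict:
    """Fuse client models by element-wise averaging of their parameters."""
    fused = copy.deepcopy(client_states[0])
    for name in fused:
        fused[name] = torch.stack([s[name].float() for s in client_states]).mean(dim=0)
    return fused


if __name__ == "__main__":
    torch.manual_seed(0)
    global_model = nn.Linear(16, 2)  # stand-in for an offensive-language classifier
    # Each client holds its own (synthetic) labelled data, which is never shared.
    clients = [(torch.randn(32, 16), torch.randint(0, 2, (32,))) for _ in range(4)]

    for round_idx in range(3):  # a few federated rounds
        states = [local_update(global_model, x, y) for x, y in clients]
        global_model.load_state_dict(fuse(states))
        print(f"round {round_idx}: fused {len(states)} client models")
```

In a real deployment each local update would run on a separate device or data silo, and only the weights (never the raw posts) would be sent to the server performing the fusion.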

Bibliographic note

Accepted to TRAC 2024 (Fourth Workshop on Threat, Aggression and Cyberbullying) at LREC-COLING 2024 (The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation)