

OffensEval 2023: Offensive language identification in the age of Large Language Models

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

OffensEval 2023: Offensive language identification in the age of Large Language Models. / Zampieri, Marcos; Rosenthal, Sara; Nakov, Preslav et al.
In: Natural Language Engineering, Vol. 29, No. 6, 30.11.2023, pp. 1416-1435.


Harvard

Zampieri, M, Rosenthal, S, Nakov, P, Dmonte, A & Ranasinghe, T 2023, 'OffensEval 2023: Offensive language identification in the age of Large Language Models', Natural Language Engineering, vol. 29, no. 6, pp. 1416-1435. https://doi.org/10.1017/S1351324923000517

APA

Zampieri, M., Rosenthal, S., Nakov, P., Dmonte, A., & Ranasinghe, T. (2023). OffensEval 2023: Offensive language identification in the age of Large Language Models. Natural Language Engineering, 29(6), 1416-1435. https://doi.org/10.1017/S1351324923000517

Vancouver

Zampieri M, Rosenthal S, Nakov P, Dmonte A, Ranasinghe T. OffensEval 2023: Offensive language identification in the age of Large Language Models. Natural Language Engineering. 2023 Nov 30;29(6):1416-1435. doi: 10.1017/S1351324923000517

Author

Zampieri, Marcos; Rosenthal, Sara; Nakov, Preslav et al. / OffensEval 2023: Offensive language identification in the age of Large Language Models. In: Natural Language Engineering. 2023; Vol. 29, No. 6. pp. 1416-1435.

Bibtex

@article{5adebfd6fa214dad884af77cb7bd5b09,
title = "OffensEval 2023: Offensive language identification in the age of Large Language Models",
abstract = "The OffensEval shared tasks organized as part of SemEval-2019–2020 were very popular, attracting over 1300 participating teams. The two editions of the shared task helped advance the state of the art in offensive language identification by providing the community with benchmark datasets in Arabic, Danish, English, Greek, and Turkish. The datasets were annotated using the OLID hierarchical taxonomy, which since then has become the de facto standard in general offensive language identification research and was widely used beyond OffensEval. We present a survey of OffensEval and related competitions, and we discuss the main lessons learned. We further evaluate the performance of Large Language Models (LLMs), which have recently revolutionalized the field of Natural Language Processing. We use zero-shot prompting with six popular LLMs and zero-shot learning with two task-specific fine-tuned BERT models, and we compare the results against those of the top-performing teams at the OffensEval competitions. Our results show that while some LMMs such as Flan-T5 achieve competitive performance, in general LLMs lag behind the best OffensEval systems.",
author = "Marcos Zampieri and Sara Rosenthal and Preslav Nakov and Alphaeus Dmonte and Tharindu Ranasinghe",
year = "2023",
month = nov,
day = "30",
doi = "10.1017/S1351324923000517",
language = "English",
volume = "29",
pages = "1416--1435",
journal = "Natural Language Engineering",
publisher = "Cambridge University Press",
number = "6",

}

RIS

TY - JOUR

T1 - OffensEval 2023

T2 - Offensive language identification in the age of Large Language Models

AU - Zampieri, Marcos

AU - Rosenthal, Sara

AU - Nakov, Preslav

AU - Dmonte, Alphaeus

AU - Ranasinghe, Tharindu

PY - 2023/11/30

Y1 - 2023/11/30

N2 - The OffensEval shared tasks organized as part of SemEval 2019–2020 were very popular, attracting over 1300 participating teams. The two editions of the shared task helped advance the state of the art in offensive language identification by providing the community with benchmark datasets in Arabic, Danish, English, Greek, and Turkish. The datasets were annotated using the OLID hierarchical taxonomy, which since then has become the de facto standard in general offensive language identification research and was widely used beyond OffensEval. We present a survey of OffensEval and related competitions, and we discuss the main lessons learned. We further evaluate the performance of Large Language Models (LLMs), which have recently revolutionized the field of Natural Language Processing. We use zero-shot prompting with six popular LLMs and zero-shot learning with two task-specific fine-tuned BERT models, and we compare the results against those of the top-performing teams at the OffensEval competitions. Our results show that while some LLMs such as Flan-T5 achieve competitive performance, in general LLMs lag behind the best OffensEval systems.

AB - The OffensEval shared tasks organized as part of SemEval 2019–2020 were very popular, attracting over 1300 participating teams. The two editions of the shared task helped advance the state of the art in offensive language identification by providing the community with benchmark datasets in Arabic, Danish, English, Greek, and Turkish. The datasets were annotated using the OLID hierarchical taxonomy, which since then has become the de facto standard in general offensive language identification research and was widely used beyond OffensEval. We present a survey of OffensEval and related competitions, and we discuss the main lessons learned. We further evaluate the performance of Large Language Models (LLMs), which have recently revolutionized the field of Natural Language Processing. We use zero-shot prompting with six popular LLMs and zero-shot learning with two task-specific fine-tuned BERT models, and we compare the results against those of the top-performing teams at the OffensEval competitions. Our results show that while some LLMs such as Flan-T5 achieve competitive performance, in general LLMs lag behind the best OffensEval systems.

U2 - 10.1017/S1351324923000517

DO - 10.1017/S1351324923000517

M3 - Journal article

VL - 29

SP - 1416

EP - 1435

JO - Natural Language Engineering

JF - Natural Language Engineering

IS - 6

ER -
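
Zero-shot prompting (illustrative sketch)

The abstract describes evaluating LLMs via zero-shot prompting for offensive language identification, with Flan-T5 named among the models tested. The sketch below shows what such a zero-shot setup could look like using the Hugging Face transformers library. It is a minimal illustration under assumptions, not the authors' exact pipeline: the prompt wording, the model size ("base"), and the use of the OLID Subtask A labels OFF/NOT as the answer format are choices made here for demonstration.

# Minimal sketch: zero-shot offensive language identification with Flan-T5.
# Assumptions: the prompt template and model size are illustrative, not the
# configuration used in the OffensEval 2023 paper.
from transformers import pipeline

# Flan-T5 is an instruction-tuned encoder-decoder model, so the
# text2text-generation pipeline applies.
classifier = pipeline("text2text-generation", model="google/flan-t5-base")

def is_offensive(text: str) -> str:
    # Zero-shot: no task-specific fine-tuning, only a natural-language
    # instruction asking for an OLID-style OFF/NOT label.
    prompt = (
        "Classify the following text as offensive or not offensive. "
        f"Answer with OFF or NOT.\nText: {text}\nAnswer:"
    )
    return classifier(prompt, max_new_tokens=5)[0]["generated_text"].strip()

print(is_offensive("Have a wonderful day!"))  # expected output: NOT

By contrast, the fine-tuned BERT baselines mentioned in the abstract would attach a sequence classification head and train on OLID-style labeled data before being applied, in zero-shot fashion, to datasets they were not trained on.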