
What do Large Language Models Need for Machine Translation Evaluation?

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
  • Shenbin Qian
  • Archchana Sindhujan
  • Minnie Kabra
  • Diptesh Kanojia
  • Constantin Orasan
  • Tharindu Ranasinghe
  • Frederic Blain
Publication date: 9/11/2024
Host publication: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Publisher: Association for Computational Linguistics (ACL Anthology)
Pages: 3660-3674
Number of pages: 15
ISBN (electronic): 9798891761643
Original language: English
Event: The 2024 Conference on Empirical Methods in Natural Language Processing - Miami, United States
Duration: 12/11/2024 - 16/11/2024
https://2024.emnlp.org/

Conference

Conference: The 2024 Conference on Empirical Methods in Natural Language Processing
Country/Territory: United States
City: Miami
Period: 12/11/24 - 16/11/24
Internet address: https://2024.emnlp.org/


Abstract

Leveraging large language models (LLMs) for various natural language processing tasks has led to superlative claims about their performance. For the evaluation of machine translation (MT), existing research shows that LLMs are able to achieve results comparable to fine-tuned multilingual pre-trained language models. In this paper, we explore what translation information, such as the source, reference, translation errors and annotation guidelines, is needed for LLMs to evaluate MT quality. In addition, we investigate prompting techniques such as zero-shot, Chain of Thought (CoT) and few-shot prompting for eight language pairs covering high-, medium- and low-resource languages, using different LLM variants. Our findings indicate the importance of reference translations for an LLM-based evaluation. While larger models do not necessarily fare better, they tend to benefit more from CoT prompting than smaller models do. We also observe that LLMs do not always provide a numerical score when generating evaluations, which raises questions about their reliability for the task. Our work presents a comprehensive analysis for resource-constrained and training-less LLM-based evaluation of machine translation. We release the prompt templates, code and data publicly for reproducibility.
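The setup described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's released code: the prompt wording, the 0-100 scale, and both function names are hypothetical assumptions. It shows two points the abstract raises: a zero-shot evaluation prompt that optionally includes the reference translation (found to be important), and a parser that must cope with the model not returning a numerical score at all.

```python
import re

def build_prompt(source, translation, reference=None):
    """Assemble a hypothetical zero-shot MT evaluation prompt.
    Including the reference reflects the finding that reference
    translations matter for LLM-based evaluation."""
    parts = [
        "Evaluate the quality of the following translation on a scale of 0-100.",
        f"Source: {source}",
        f"Translation: {translation}",
    ]
    if reference is not None:
        parts.append(f"Reference: {reference}")
    parts.append("Respond with a single number.")
    return "\n".join(parts)

def extract_score(response):
    """Parse a numerical score from the model's output.
    Returns None when no valid number is present, since LLMs
    do not always provide a numerical score."""
    match = re.search(r"\b(\d{1,3}(?:\.\d+)?)\b", response)
    if match is None:
        return None
    score = float(match.group(1))
    return score if 0 <= score <= 100 else None
```

In practice the `None` case has to be counted and reported, since silently discarding non-numeric responses would overstate the evaluator's reliability.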