CLEU: A Cross-Language English-Urdu Corpus and Benchmark for Text Reuse Experiments

Research output: Contribution to Journal/Magazine > Journal article > peer-review

Journal publication date: 1/07/2019
Journal: Journal of the Association for Information Science and Technology
Issue number: 7
Volume: 70
Number of pages: 13
Pages (from-to): 729-741
Publication status: Published
Early online date: 19/11/18
Original language: English

Abstract

Text reuse is becoming a serious issue in many fields, and research shows that it is much harder to detect when it occurs across languages. The recent rise in multi-lingual content on the Web has increased cross-language text reuse to an unprecedented scale. Although researchers have proposed methods to detect it, one major drawback is the unavailability of large-scale gold standard evaluation resources built on real cases. To overcome this problem, we propose a cross-language sentence/passage level text reuse corpus for the English-Urdu language pair. The Cross-Language English-Urdu Corpus (CLEU) has source text in English, whereas the derived text is in Urdu. It contains a total of 3,235 sentence/passage pairs manually tagged into three categories: near copy, paraphrased copy, and independently written. Further, as a second contribution, we evaluate the Translation plus Mono-lingual Analysis method using three sets of experiments on the proposed dataset to highlight its usefulness. Evaluation results (f1 = 0.732 for binary, f1 = 0.552 for ternary classification) indicate that it is harder to detect cross-language real cases of text reuse, especially when the language pairs have unrelated scripts. The corpus is a useful benchmark resource for the future development and assessment of cross-language text reuse detection systems for the English-Urdu language pair.
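To make the Translation plus Mono-lingual Analysis (T+MA) idea mentioned in the abstract concrete, the Python sketch below is a minimal illustration only, not the authors' implementation: it assumes the Urdu derived text has already been machine-translated into English by an external MT system, then compares the two English strings with a simple word n-gram containment score and maps the score onto the three CLEU categories. The function names, the overlap measure, and the thresholds are illustrative assumptions, not values or code from the paper.

# Illustrative sketch of Translation plus Mono-lingual Analysis (T+MA).
# Assumption: the Urdu derived text has already been machine-translated
# into English elsewhere; only the mono-lingual analysis step is shown.

def word_ngrams(text, n=1):
    """Return the set of word n-grams of a lowercased, whitespace-tokenized text."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_score(source, translated_derived, n=1):
    """Containment-style overlap: fraction of derived n-grams also found in the source."""
    src = word_ngrams(source, n)
    der = word_ngrams(translated_derived, n)
    if not der:
        return 0.0
    return len(src & der) / len(der)

def classify(score, near_copy_t=0.8, paraphrase_t=0.4):
    """Map a similarity score to the three CLEU categories.
    The thresholds here are illustrative assumptions, not values from the paper."""
    if score >= near_copy_t:
        return "near copy"
    if score >= paraphrase_t:
        return "paraphrased copy"
    return "independently written"

# Toy usage: the derived side stands in for the MT output of an Urdu passage.
source = "The government announced a new education policy on Monday."
derived_mt = "A new education policy was announced by the government."
score = overlap_score(source, derived_mt, n=1)
print(score, classify(score))

In practice a ternary classifier would be trained on the tagged corpus pairs rather than using fixed thresholds; the sketch only shows where the translation step and the mono-lingual similarity step sit in the pipeline.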