
Electronic data

  • 2025.indonlp-1.1

    Final published version, 410 KB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License


Hindi Reading Comprehension: Do Large Language Models Exhibit Semantic Understanding?

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 20/01/2025
Host publication: Proceedings of the First Workshop on Natural Language Processing for Indo-Aryan and Dravidian Languages
Editors: Ruvan Weerasinghe, Isuri Anuradha, Deshan Sumanathilaka
Place of publication: Abu Dhabi
Publisher: Association for Computational Linguistics
Pages: 1-10
Number of pages: 10
ISBN (electronic): 9798891762145
Original language: English

Abstract

In this study, we explore the performance of four advanced generative AI models (GPT-3.5, GPT-4, Llama3, and HindiGPT) on the Hindi reading comprehension task. Using a zero-shot, instruction-based prompting strategy, we assess model responses on the HindiRC dataset through a comprehensive triple evaluation framework. Our framework combines (1) automatic evaluation using ROUGE, BLEU, BLEURT, METEOR, and cosine similarity; (2) rating-based assessment focusing on correctness, comprehension depth, and informativeness; and (3) preference-based selection to identify the best responses. Human ratings indicate that GPT-4 outperforms the other LLMs on all parameters, followed by HindiGPT, GPT-3.5, and then Llama3. Preference-based evaluation similarly ranks GPT-4 (80%) as the best model, followed by HindiGPT (74%). However, automatic evaluation shows GPT-4 to be the lowest performer on n-gram metrics yet the best performer on semantic metrics, suggesting that it captures deeper meaning and semantic alignment rather than direct lexical overlap, which is consistent with its strong human evaluation scores. The study also highlights that although the models answer literal, factual-recall questions with high precision, they still struggle at times with specificity and interpretive bias.
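
As a rough illustration of the contrast the abstract draws between n-gram and semantic metrics, the sketch below scores a hypothetical Hindi answer against a reference with sentence-level BLEU and with cosine similarity of multilingual sentence embeddings. This is not the authors' evaluation code; the example strings, the embedding model, and the library choices (sacrebleu, sentence-transformers) are assumptions made for illustration only.

```python
# Minimal sketch of n-gram vs. semantic scoring for a Hindi QA response.
# Not the paper's pipeline; strings and model choice are hypothetical.
import sacrebleu
from sentence_transformers import SentenceTransformer, util

# Hypothetical gold answer and model answer (same facts, different wording).
reference = "गांधी जी का जन्म 2 अक्टूबर 1869 को पोरबंदर में हुआ था।"
candidate = "महात्मा गांधी का जन्म पोरबंदर में 2 अक्टूबर 1869 को हुआ था।"

# N-gram metric: sentence-level BLEU, sensitive to exact wording and word order.
bleu = sacrebleu.sentence_bleu(candidate, [reference]).score

# Semantic metric: cosine similarity of multilingual sentence embeddings,
# which can remain high even when the surface wording differs.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # assumed model
embs = model.encode([reference, candidate], convert_to_tensor=True)
cosine = util.cos_sim(embs[0], embs[1]).item()

print(f"BLEU = {bleu:.1f}   cosine similarity = {cosine:.3f}")
```

A paraphrased but correct answer like the one above tends to score modestly on BLEU while scoring high on embedding similarity, which is the pattern the abstract reports for GPT-4.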