
Electronic data

  • 2025.loreslm-1.1

    Final published version, 253 KB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License


Overview of the First Workshop on Language Models for Low-Resource Languages (LoResLM 2025)

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 20/01/2025
Host publication: Proceedings of the First Workshop on Language Models for Low-Resource Languages
Publisher: Association for Computational Linguistics
ISBN (electronic): 9798891762152
Original language: English
Event: The First Workshop on Language Models for Low-Resource Languages - Abu Dhabi, United Arab Emirates
Duration: 20/01/2025 → …
https://loreslm.github.io/

Workshop

Workshop: The First Workshop on Language Models for Low-Resource Languages
Abbreviated title: LoResLM 2025
Country/Territory: United Arab Emirates
City: Abu Dhabi
Period: 20/01/25 → …
Internet address: https://loreslm.github.io/


Abstract

The First Workshop on Language Models for Low-Resource Languages (LoResLM 2025) was held in conjunction with the 31st International Conference on Computational Linguistics (COLING 2025) in Abu Dhabi, United Arab Emirates. The workshop aimed primarily to provide a forum for researchers to share and discuss ongoing work on language models (LMs) for low-resource languages, motivated by recent advances in neural LMs and their linguistic bias towards high-resource languages. LoResLM 2025 attracted notable interest from the natural language processing (NLP) community, resulting in 35 accepted papers from 52 submissions. These contributions cover a broad range of low-resource languages across eight language families and 13 diverse research areas, paving the way for future work and promoting linguistic inclusivity in NLP.