
Electronic data

  • 2307.09532v1

    Final published version, 473 KB, PDF document

    Available under license: Creative Commons Attribution 4.0 International (CC BY)


Can Model Fusing Help Transformers in Long Document Classification?: An Empirical Study

Research output: Working paper › Preprint

Published

BibTeX

@techreport{0357425466504baea78335a501c12bda,
title = "Can Model Fusing Help Transformers in Long Document Classification?: An Empirical Study",
abstract = "Text classification is a long-studied area of research in Natural Language Processing (NLP). Adapting NLP to multiple domains has introduced many new challenges for text classification, one of which is long document classification. While state-of-the-art transformer models provide excellent results in text classification, most of them impose a limit on the maximum length of the input sequence. The majority of transformer models are limited to 512 tokens, and therefore they struggle with long document classification problems. In this research, we explore employing Model Fusing for long document classification, comparing the results with the well-known BERT and Longformer architectures.",
keywords = "cs.CL",
author = "Damith Premasiri and Tharindu Ranasinghe and Ruslan Mitkov",
note = "Accepted in RANLP 2023",
year = "2023",
month = jul,
day = "18",
language = "English",
publisher = "arXiv",
type = "WorkingPaper",
institution = "arXiv",
}
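
To make the 512-token limitation concrete: a standard BERT classifier must either truncate a long document or split it into chunks whose predictions are then combined. The following is a minimal sketch of such a chunk-and-fuse baseline, assuming "Model Fusing" here means combining chunk-level predictions; the abstract does not spell out the exact fusing strategy, so the mean-of-logits rule, the checkpoint name, and the binary label count are illustrative assumptions, not the authors' method.

```python
# Sketch: classify a long document by splitting it into overlapping
# 512-token windows, scoring each window with BERT, and fusing the
# window-level logits by averaging.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # hypothetical binary task; head is untrained
)
model.eval()

def classify_long_document(text: str) -> int:
    # Tokenize with overflow so nothing is silently truncated at the
    # 512-token limit; each overflowing window becomes one batch row.
    enc = tokenizer(
        text,
        max_length=512,
        truncation=True,
        stride=64,                       # token overlap between adjacent windows
        return_overflowing_tokens=True,  # one row per 512-token window
        padding="max_length",
        return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(
            input_ids=enc["input_ids"],
            attention_mask=enc["attention_mask"],
        ).logits                         # shape: (num_windows, num_labels)
    # Fuse: average the window logits, then pick the top class.
    return int(logits.mean(dim=0).argmax())
```

Mean-pooling the logits is only one plausible fusing rule; max-pooling or majority voting over per-window predictions are equally simple alternatives with the same interface.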

RIS

TY - UNPB

T1 - Can Model Fusing Help Transformers in Long Document Classification?

T2 - An Empirical Study

AU - Premasiri, Damith

AU - Ranasinghe, Tharindu

AU - Mitkov, Ruslan

N1 - Accepted in RANLP 2023

PY - 2023/7/18

Y1 - 2023/7/18

N2 - Text classification is a long-studied area of research in Natural Language Processing (NLP). Adapting NLP to multiple domains has introduced many new challenges for text classification, one of which is long document classification. While state-of-the-art transformer models provide excellent results in text classification, most of them impose a limit on the maximum length of the input sequence. The majority of transformer models are limited to 512 tokens, and therefore they struggle with long document classification problems. In this research, we explore employing Model Fusing for long document classification, comparing the results with the well-known BERT and Longformer architectures.

AB - Text classification is a long-studied area of research in Natural Language Processing (NLP). Adapting NLP to multiple domains has introduced many new challenges for text classification, one of which is long document classification. While state-of-the-art transformer models provide excellent results in text classification, most of them impose a limit on the maximum length of the input sequence. The majority of transformer models are limited to 512 tokens, and therefore they struggle with long document classification problems. In this research, we explore employing Model Fusing for long document classification, comparing the results with the well-known BERT and Longformer architectures.

KW - cs.CL

M3 - Preprint

BT - Can Model Fusing Help Transformers in Long Document Classification?

PB - arXiv

ER -
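
The abstract contrasts the fused models with Longformer, whose sparse attention natively accepts sequences of up to 4,096 tokens, so no chunking or fusing is needed. A minimal sketch of that baseline follows; the checkpoint and label count are again illustrative placeholders, not the paper's exact configuration.

```python
# Sketch: Longformer handles long inputs directly, so a single forward
# pass covers up to 4,096 tokens of the document.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
model = AutoModelForSequenceClassification.from_pretrained(
    "allenai/longformer-base-4096", num_labels=2  # illustrative; head is untrained
)
model.eval()

def classify(text: str) -> int:
    # One encoding, one forward pass; global attention on the CLS token
    # is added automatically by the sequence-classification head.
    enc = tokenizer(text, max_length=4096, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits     # shape: (1, num_labels)
    return int(logits.argmax())
```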