
Electronic data

  • bringing_replication_reproduction

    Submitted manuscript, 534 KB, PDF document

Available under license: CC BY (Creative Commons Attribution 4.0 International)

  • bringing-replication-reproduction

    Accepted author manuscript, 535 KB, PDF document

Available under license: CC BY (Creative Commons Attribution 4.0 International)

Bringing replication and reproduction together with generalisability in NLP: Three reproduction studies for Target Dependent Sentiment Analysis

Research output: Contribution in Book/Report/Proceedings with ISBN/ISSN › Conference contribution/Paper › Peer-reviewed

Published
Publication date: 20/08/2018
Host publication: Proceedings of the 27th International Conference on Computational Linguistics
Place of publication: Santa Fe, New Mexico, USA
Publisher: Association for Computational Linguistics
Pages: 1132-1144
Number of pages: 13
ISBN (electronic): 9781948087506
Original language: English
Event: Conference on Computational Linguistics - Santa Fe Community Convention Center, Santa Fe, United States
Duration: 20/08/2018 - 26/08/2018
Conference number: 27
https://coling2018.org/

Conference

Conference: Conference on Computational Linguistics
Abbreviated title: COLING
Country/Territory: United States
City: Santa Fe
Period: 20/08/18 - 26/08/18
Internet address: https://coling2018.org/

Abstract

Lack of repeatability and generalisability are two significant threats to continuing scientific development in Natural Language Processing. Language models and learning methods are so complex that scientific conference papers no longer contain enough space for the technical depth required for replication or reproduction. Taking Target Dependent Sentiment Analysis as a case study, we show how recent work in the field has not consistently released code, or described settings for learning methods in enough detail, and lacks comparability and generalisability in train, test or validation data. To investigate generalisability and to enable state of the art comparative evaluations, we carry out the first reproduction studies of three groups of complementary methods and perform the first large-scale mass evaluation on six different English datasets. Reflecting on our experiences, we recommend that future replication or reproduction experiments should always consider a variety of datasets alongside documenting and releasing their methods and published code in order to minimise the barriers to both repeatability and generalisability. We have released our code with a model zoo on GitHub with Jupyter Notebooks to aid understanding and full documentation, and we recommend that others do the same with their papers at submission time through an anonymised GitHub account.
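For readers unfamiliar with the case-study task: in Target Dependent Sentiment Analysis (TDSA), each target mentioned in a sentence receives its own sentiment label, rather than one label for the whole sentence. The sketch below is a minimal, hypothetical illustration of the task's input/output shape only; the lexicon, window size, and function name are invented for this example and are not the methods reproduced in the paper.

from typing import List, Tuple

# Toy sentiment lexicon (hypothetical; real TDSA systems learn such signals).
LEXICON = {"great": 1, "amazing": 1, "terrible": -1, "slow": -1}

def tdsa_baseline(tokens: List[str], targets: List[str],
                  window: int = 2) -> List[Tuple[str, str]]:
    """Label each target from lexicon hits within `window` tokens of it."""
    lowered = [tok.lower() for tok in tokens]
    results = []
    for target in targets:
        idx = lowered.index(target.lower())  # first mention of the target
        context = lowered[max(0, idx - window): idx + window + 1]
        score = sum(LEXICON.get(tok, 0) for tok in context)
        label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
        results.append((target, label))
    return results

sentence = "The food was great but the service was terrible".split()
print(tdsa_baseline(sentence, ["food", "service"]))
# -> [('food', 'positive'), ('service', 'negative')]

Note how the same sentence yields opposite labels for its two targets; this per-target framing is what distinguishes TDSA from sentence-level sentiment analysis and motivates the target-aware methods compared in the paper.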