
Multi-task Learning of Negation and Speculation for Targeted Sentiment Classification

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN > Conference contribution/Paper > peer-review

Published
Publication date: 23/05/2021
Host publication: The 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Place of publication: Stroudsburg, PA
Publisher: Association for Computational Linguistics
Pages: 2838-2869
Number of pages: 32
ISBN (electronic): 9781954085466
Original language: English

Abstract

Most work in targeted sentiment analysis has concentrated on finding better methods to improve overall results. In this paper, we show that these models are not robust to linguistic phenomena, specifically negation and speculation. We propose a multi-task learning method that incorporates information from syntactic and semantic auxiliary tasks, including negation and speculation scope detection, to create English-language models that are more robust to these phenomena. Further, we create two challenge datasets to evaluate model performance on negated and speculative samples. We find that multi-task models and transfer learning via language modelling can improve performance on these challenge datasets, but the overall performance indicates that there is still much room for improvement. We release both the datasets and the source code at https://github.com/jerbarnes/multitask_negation_for_targeted_sentiment.
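
To illustrate the general idea of multi-task learning with auxiliary scope-detection tasks, the sketch below shows a hard-parameter-sharing setup in PyTorch: a shared BiLSTM encoder feeds two token-level heads, one for targeted sentiment tags and one for negation-scope tags. This is not the authors' implementation (see the linked repository for that); the dimensions, tag inventories, and alternating-batch training schedule here are illustrative assumptions.

```python
# Minimal sketch of hard-parameter-sharing multi-task learning:
# a shared BiLSTM encoder with separate token-level heads for the
# main task (targeted sentiment) and an auxiliary task (negation scope).
import torch
import torch.nn as nn

class SharedEncoderMTL(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=200,
                 n_sentiment_tags=7, n_scope_tags=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Shared parameters: both tasks update this encoder.
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Task-specific heads: per-token classifiers.
        self.sentiment_head = nn.Linear(2 * hidden_dim, n_sentiment_tags)
        self.scope_head = nn.Linear(2 * hidden_dim, n_scope_tags)

    def forward(self, token_ids, task):
        states, _ = self.encoder(self.embed(token_ids))
        head = self.sentiment_head if task == "sentiment" else self.scope_head
        return head(states)  # (batch, seq_len, n_tags)

# Toy usage: alternate batches from the main and auxiliary task
# so gradients from both tasks shape the shared encoder.
model = SharedEncoderMTL(vocab_size=1000)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for task, n_tags in [("sentiment", 7), ("scope", 3)]:
    tokens = torch.randint(1, 1000, (4, 12))   # fake token-id batch
    gold = torch.randint(0, n_tags, (4, 12))   # fake per-token labels
    logits = model(tokens, task)
    loss = loss_fn(logits.reshape(-1, logits.size(-1)), gold.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```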