
Electronic data

  • Accepted author manuscript, 640 KB, PDF document

    Rights statement: The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-50496-4_21

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: https://doi.org/10.1007/978-3-319-50496-4_21


Improving first order temporal fact extraction with unreliable data

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Improving first order temporal fact extraction with unreliable data. / Luo, Bingfeng; Feng, Yansong; Wang, Zheng et al.
Natural Language Understanding and Intelligent Applications: 5th CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2016, and 24th International Conference on Computer Processing of Oriental Languages, ICCPOL 2016, Kunming, China, December 2–6, 2016, Proceedings. ed. / Chin-Yew Lin; Nianwen Xue; Dongyan Zhao; Xuanjing Huang; Yansong Feng. Cham: Springer, 2016. p. 251-262 (Lecture Notes in Computer Science; Vol. 10102).


Harvard

Luo, B, Feng, Y, Wang, Z & Zhao, D 2016, Improving first order temporal fact extraction with unreliable data. in C-Y Lin, N Xue, D Zhao, X Huang & Y Feng (eds), Natural Language Understanding and Intelligent Applications: 5th CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2016, and 24th International Conference on Computer Processing of Oriental Languages, ICCPOL 2016, Kunming, China, December 2–6, 2016, Proceedings. Lecture Notes in Computer Science, vol. 10102, Springer, Cham, pp. 251-262. https://doi.org/10.1007/978-3-319-50496-4_21

APA

Luo, B., Feng, Y., Wang, Z., & Zhao, D. (2016). Improving first order temporal fact extraction with unreliable data. In C.-Y. Lin, N. Xue, D. Zhao, X. Huang, & Y. Feng (Eds.), Natural Language Understanding and Intelligent Applications: 5th CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2016, and 24th International Conference on Computer Processing of Oriental Languages, ICCPOL 2016, Kunming, China, December 2–6, 2016, Proceedings (pp. 251-262). (Lecture Notes in Computer Science; Vol. 10102). Springer. https://doi.org/10.1007/978-3-319-50496-4_21

Vancouver

Luo B, Feng Y, Wang Z, Zhao D. Improving first order temporal fact extraction with unreliable data. In Lin CY, Xue N, Zhao D, Huang X, Feng Y, editors, Natural Language Understanding and Intelligent Applications: 5th CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2016, and 24th International Conference on Computer Processing of Oriental Languages, ICCPOL 2016, Kunming, China, December 2–6, 2016, Proceedings. Cham: Springer. 2016. p. 251-262. (Lecture Notes in Computer Science). doi: 10.1007/978-3-319-50496-4_21

Author

Luo, Bingfeng ; Feng, Yansong ; Wang, Zheng et al. / Improving first order temporal fact extraction with unreliable data. Natural Language Understanding and Intelligent Applications: 5th CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2016, and 24th International Conference on Computer Processing of Oriental Languages, ICCPOL 2016, Kunming, China, December 2–6, 2016, Proceedings. editor / Chin-Yew Lin ; Nianwen Xue ; Dongyan Zhao ; Xuanjing Huang ; Yansong Feng. Cham : Springer, 2016. pp. 251-262 (Lecture Notes in Computer Science).

Bibtex

@inproceedings{3b216f18e35e40c4aa1f8c623337be26,
title = "Improving first order temporal fact extraction with unreliable data",
abstract = "In this paper, we deal with the task of extracting first order temporal facts from free text. This task is a subtask of relation extraction that aims at extracting relations between entities and time. Currently, the field of relation extraction mainly focuses on extracting relations between entities. However, we observe that the multi-granular nature of time expressions can help us divide the dataset constructed by distant supervision into reliable and less reliable subsets, which can help to improve the extraction results on relations between entities and time. We accordingly contribute the first dataset focusing on the first order temporal fact extraction task using distant supervision. To fully utilize both the reliable and the less reliable data, we propose to use curriculum learning to rearrange the training procedure, label dropout to make the model more conservative about less reliable data, and instance attention to help the model distinguish important instances from unimportant ones. Experiments show that these methods help the model outperform the model trained purely on the reliable dataset as well as the model trained on the dataset where all subsets are mixed together.",
author = "Bingfeng Luo and Yansong Feng and Zheng Wang and Dongyan Zhao",
note = "The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-50496-4_21",
year = "2016",
month = dec,
day = "2",
doi = "10.1007/978-3-319-50496-4_21",
language = "English",
isbn = "9783319504957",
series = "Lecture Notes in Computer Science",
publisher = "Springer",
pages = "251--262",
editor = "Chin-Yew Lin and Nianwen Xue and Dongyan Zhao and Xuanjing Huang and Yansong Feng",
booktitle = "Natural Language Understanding and Intelligent Applications",

}
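The abstract above names three training techniques: curriculum learning, label dropout, and instance attention. As a rough, hypothetical illustration of the label-dropout idea only (not the authors' implementation; NA_LABEL and DROPOUT_P below are assumptions, not values from the paper), one plausible reading is to randomly replace positive labels from the less reliable subset with the "no relation" class during training:

import random

# Hypothetical sketch of "label dropout" as described in the abstract above.
# With probability p, a positive label coming from the less reliable subset is
# replaced by the "no relation" (NA) class, so the classifier learns to be more
# conservative about evidence it cannot fully trust.
NA_LABEL = 0      # assumed id of the "no relation" class
DROPOUT_P = 0.3   # assumed dropout probability (a tunable hyperparameter)

def label_dropout(labels, is_reliable, p=DROPOUT_P):
    """Return a copy of `labels` with unreliable positives randomly dropped to NA."""
    noisy = []
    for label, reliable in zip(labels, is_reliable):
        if not reliable and label != NA_LABEL and random.random() < p:
            noisy.append(NA_LABEL)
        else:
            noisy.append(label)
    return noisy

# Example: a mixed batch with a per-instance reliability flag.
print(label_dropout([2, 5, 0, 7], [True, False, True, False]))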

RIS

TY - GEN

T1 - Improving first order temporal fact extraction with unreliable data

AU - Luo, Bingfeng

AU - Feng, Yansong

AU - Wang, Zheng

AU - Zhao, Dongyan

N1 - The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-50496-4_21

PY - 2016/12/2

Y1 - 2016/12/2

N2 - In this paper, we deal with the task of extracting first order temporal facts from free text. This task is a subtask of relation extraction that aims at extracting relations between entities and time. Currently, the field of relation extraction mainly focuses on extracting relations between entities. However, we observe that the multi-granular nature of time expressions can help us divide the dataset constructed by distant supervision into reliable and less reliable subsets, which can help to improve the extraction results on relations between entities and time. We accordingly contribute the first dataset focusing on the first order temporal fact extraction task using distant supervision. To fully utilize both the reliable and the less reliable data, we propose to use curriculum learning to rearrange the training procedure, label dropout to make the model more conservative about less reliable data, and instance attention to help the model distinguish important instances from unimportant ones. Experiments show that these methods help the model outperform the model trained purely on the reliable dataset as well as the model trained on the dataset where all subsets are mixed together.

AB - In this paper, we deal with the task of extracting first order temporal facts from free text. This task is a subtask of relation extraction that aims at extracting relations between entities and time. Currently, the field of relation extraction mainly focuses on extracting relations between entities. However, we observe that the multi-granular nature of time expressions can help us divide the dataset constructed by distant supervision into reliable and less reliable subsets, which can help to improve the extraction results on relations between entities and time. We accordingly contribute the first dataset focusing on the first order temporal fact extraction task using distant supervision. To fully utilize both the reliable and the less reliable data, we propose to use curriculum learning to rearrange the training procedure, label dropout to make the model more conservative about less reliable data, and instance attention to help the model distinguish important instances from unimportant ones. Experiments show that these methods help the model outperform the model trained purely on the reliable dataset as well as the model trained on the dataset where all subsets are mixed together.

U2 - 10.1007/978-3-319-50496-4_21

DO - 10.1007/978-3-319-50496-4_21

M3 - Conference contribution/Paper

SN - 9783319504957

T3 - Lecture Notes in Computer Science

SP - 251

EP - 262

BT - Natural Language Understanding and Intelligent Applications

A2 - Lin, Chin-Yew

A2 - Xue, Nianwen

A2 - Zhao, Dongyan

A2 - Huang, Xuanjing

A2 - Feng, Yansong

PB - Springer

CY - Cham

ER -
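The abstract also describes curriculum learning, i.e. rearranging training so that the model sees the reliable subset before the less reliable one. Below is a minimal sketch of one such schedule, under the assumption that the less reliable data is mixed in gradually; the names reliable_data, noisy_data and train_epoch are hypothetical placeholders, not the authors' code.

# Hypothetical sketch of a curriculum schedule: start from the reliable subset
# only, then linearly grow the share of less reliable instances mixed into each
# epoch. All names are illustrative assumptions.
def curriculum_schedule(reliable_data, noisy_data, n_epochs, train_epoch):
    for epoch in range(n_epochs):
        frac = epoch / max(n_epochs - 1, 1)              # grows from 0.0 to 1.0
        admitted = noisy_data[: int(frac * len(noisy_data))]
        train_epoch(reliable_data + admitted)            # one pass over the mix

# Example with dummy data and a stand-in training step.
reliable = ["r1", "r2", "r3"]
noisy = ["n1", "n2", "n3", "n4"]
curriculum_schedule(reliable, noisy, n_epochs=3,
                    train_epoch=lambda batch: print("training on", batch))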