Electronic data

  • extreme-scale-corpus

    190 KB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: https://doi.org/10.1109/BigData.2015.7363933

Scaling out for extreme scale corpus data

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Scaling out for extreme scale corpus data. / Coole, Matt; Rayson, Paul Edward; Mariani, John Amedeo.
Big Data (Big Data), 2015 IEEE International Conference on. IEEE, 2015. p. 1643-1649.

Harvard

Coole, M, Rayson, PE & Mariani, JA 2015, Scaling out for extreme scale corpus data. in Big Data (Big Data), 2015 IEEE International Conference on. IEEE, pp. 1643-1649. https://doi.org/10.1109/BigData.2015.7363933

APA

Coole, M., Rayson, P. E., & Mariani, J. A. (2015). Scaling out for extreme scale corpus data. In Big Data (Big Data), 2015 IEEE International Conference on (pp. 1643-1649). IEEE. https://doi.org/10.1109/BigData.2015.7363933

Vancouver

Coole M, Rayson PE, Mariani JA. Scaling out for extreme scale corpus data. In Big Data (Big Data), 2015 IEEE International Conference on. IEEE. 2015. p. 1643-1649. doi: 10.1109/BigData.2015.7363933

Author

Coole, Matt ; Rayson, Paul Edward ; Mariani, John Amedeo. / Scaling out for extreme scale corpus data. Big Data (Big Data), 2015 IEEE International Conference on. IEEE, 2015. pp. 1643-1649

Bibtex

@inproceedings{05dd0178341541a5904900bbed6fa554,
title = "Scaling out for extreme scale corpus data",
abstract = "Much of the previous work in Big Data has focussed on numerical sources of information. However, with the `narrative turn' in many disciplines gathering pace and commercial organisations beginning to realise the value of their textual assets, natural language data is fast catching up as an exploitable source of information for decision making. With vast quantities of unstructured textual data on the web, in social media, and in newly digitised historical document archives, the 5Vs (Volume, Velocity, Variety, Value and Veracity) apply equally well, if not more so, to big textual data. Corpus linguistics, the computer-aided study of large collections of naturally occurring language data, has been dealing with big data for fifty years. Corpus linguistics methods impose complex requirements on the retrieval, annotation and analysis of text in terms of displaying narrow contexts for each occurrence of a word or linguistic feature being studied and counting co-occurrences with other words or features to determine significant patterns in language. This, coupled with the distribution of language features in accordance with Zipf's Law, poses complex challenges for data models and corpus software dealing with extreme scale language data. A related issue is the non-random nature of language and the `burstiness' of word occurrences, or what we might put in Big Data terms as a sixth `V' called Viscosity. We report experiments to examine and compare the capabilities of two No-SQL databases in clustered configurations for the indexing, retrieval and analysis of billion-word corpora, since this size is the current state-of-the-art in corpus linguistics. We find that modern DBMSs (Database Management Systems) are capable of handling this extreme scale corpus data set for simple queries but are limited when querying for more frequent words or more complex queries.",
author = "Matt Coole and Rayson, {Paul Edward} and Mariani, {John Amedeo}",
year = "2015",
doi = "10.1109/BigData.2015.7363933",
language = "English",
isbn = "9781479999255",
pages = "1643--1649",
booktitle = "Big Data (Big Data), 2015 IEEE International Conference on",
publisher = "IEEE",

}
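
A note for readers: the abstract's key performance claim, that Zipf's Law makes queries on frequent words the hard case, can be made concrete with a back-of-the-envelope estimate. The Python sketch below is illustrative only and not taken from the paper; the vocabulary size and the harmonic-number approximation to Zipf's Law are assumptions.

# Illustrative sketch (not from the paper): under Zipf's Law the r-th most
# frequent word type occurs roughly N / (r * H_V) times, where N is the
# corpus size in tokens and H_V is the V-th harmonic number. The skew means
# a query on a top-ranked word must touch an enormous posting list, while
# most words need only a handful of lookups.
from math import log

EULER_MASCHERONI = 0.5772156649

def zipf_occurrences(rank: int, corpus_tokens: int, vocab_size: int) -> int:
    """Estimate the occurrence count of the word at a given frequency rank."""
    harmonic = log(vocab_size) + EULER_MASCHERONI  # H_V ~ ln(V) + gamma
    return int(corpus_tokens / (rank * harmonic))

N = 1_000_000_000  # a billion-word corpus, the scale evaluated in the paper
V = 1_000_000      # assumed vocabulary size (illustrative)

for r in (1, 10, 1_000, 100_000):
    print(f"rank {r:>7}: ~{zipf_occurrences(r, N, V):,} occurrences")

At rank 1 this estimate gives roughly 69 million postings to scan for a single query, consistent with the paper's finding that the databases coped with simple queries but were limited when querying for more frequent words.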

RIS

TY - GEN

T1 - Scaling out for extreme scale corpus data

AU - Coole, Matt

AU - Rayson, Paul Edward

AU - Mariani, John Amedeo

PY - 2015

Y1 - 2015

AB - Much of the previous work in Big Data has focussed on numerical sources of information. However, with the `narrative turn' in many disciplines gathering pace and commercial organisations beginning to realise the value of their textual assets, natural language data is fast catching up as an exploitable source of information for decision making. With vast quantities of unstructured textual data on the web, in social media, and in newly digitised historical document archives, the 5Vs (Volume, Velocity, Variety, Value and Veracity) apply equally well, if not more so, to big textual data. Corpus linguistics, the computer-aided study of large collections of naturally occurring language data, has been dealing with big data for fifty years. Corpus linguistics methods impose complex requirements on the retrieval, annotation and analysis of text in terms of displaying narrow contexts for each occurrence of a word or linguistic feature being studied and counting co-occurrences with other words or features to determine significant patterns in language. This, coupled with the distribution of language features in accordance with Zipf's Law, poses complex challenges for data models and corpus software dealing with extreme scale language data. A related issue is the non-random nature of language and the `burstiness' of word occurrences, or what we might put in Big Data terms as a sixth `V' called Viscosity. We report experiments to examine and compare the capabilities of two No-SQL databases in clustered configurations for the indexing, retrieval and analysis of billion-word corpora, since this size is the current state-of-the-art in corpus linguistics. We find that modern DBMSs (Database Management Systems) are capable of handling this extreme scale corpus data set for simple queries but are limited when querying for more frequent words or more complex queries.

U2 - 10.1109/BigData.2015.7363933

DO - 10.1109/BigData.2015.7363933

M3 - Conference contribution/Paper

SN - 9781479999255

SP - 1643

EP - 1649

BT - Big Data (Big Data), 2015 IEEE International Conference on

PB - IEEE

ER -
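
For readers outside corpus linguistics, the access patterns the abstract describes, displaying a narrow context for each occurrence of a word and counting its co-occurrences, correspond to concordance (KWIC) and collocation queries over a positional index. The sketch below is a minimal in-memory illustration of those two operations, not the paper's clustered No-SQL implementation; all names in it are illustrative.

# Minimal in-memory illustration of the two query types the abstract names:
# concordance (keyword-in-context) and collocation counting. A real system
# at billion-word scale would shard this positional index across a cluster.
from collections import Counter, defaultdict

def build_index(tokens: list[str]) -> dict[str, list[int]]:
    """Map each word type to the token positions where it occurs."""
    index: dict[str, list[int]] = defaultdict(list)
    for pos, tok in enumerate(tokens):
        index[tok].append(pos)
    return index

def concordance(tokens: list[str], index, word: str, window: int = 4):
    """Yield KWIC lines: `window` tokens of context either side of each hit."""
    for pos in index.get(word, []):
        left = " ".join(tokens[max(0, pos - window):pos])
        right = " ".join(tokens[pos + 1:pos + 1 + window])
        yield f"{left} [{word}] {right}"

def collocates(tokens: list[str], index, word: str, window: int = 4) -> Counter:
    """Count words co-occurring with `word` within the context window."""
    counts: Counter = Counter()
    for pos in index.get(word, []):
        counts.update(tokens[max(0, pos - window):pos])
        counts.update(tokens[pos + 1:pos + 1 + window])
    return counts

tokens = "the cat sat on the mat and the dog sat by the door".split()
index = build_index(tokens)
for line in concordance(tokens, index, "sat"):
    print(line)
print(collocates(tokens, index, "sat").most_common(3))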