
Electronic data

  • Postprint

    Rights statement: This is the author’s version of a work that was accepted for publication in International Journal of Forecasting. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in International Journal of Forecasting, 32, 4, 2016 DOI: 10.1016/j.ijforecast.2016.01.006

    Accepted author manuscript, 1.19 MB, PDF document

    Available under license: CC BY-NC-ND: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License


A comparison of AdaBoost algorithms for time series forecast combination

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Publication status: Published
Journal publication date: 1/10/2016
Journal: International Journal of Forecasting
Issue number: 4
Volume: 32
Number of pages: 17
Pages (from-to): 1103-1119
Early online date: 1/06/16
Original language: English

Abstract

Recently, combination algorithms from machine learning classification have been extended to time series regression, most notably seven variants of the popular AdaBoost algorithm. Despite their theoretical promise, their empirical accuracy in forecasting has not yet been assessed, either against each other or against any established approaches of forecast combination, model selection, or statistical benchmark algorithms. Moreover, none of the algorithms has been assessed on a representative set of empirical data; evaluations to date have used only a few synthetic time series. We remedy this omission by conducting a rigorous empirical evaluation using a representative set of 111 industry time series and a valid and reliable experimental design. We develop a full-factorial design over derived Boosting meta-parameters, creating 42 novel Boosting variants, and create a further 47 novel Boosting variants using research insights from forecast combination. Experiments show that only a few Boosting meta-parameters increase accuracy, while meta-parameters derived from forecast combination research outperform others.
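The variants compared in the paper build on the AdaBoost.R2 recipe for regression (Drucker, 1997): reweight training points by their relative error each round, then combine the weak forecasts by a weighted median. The sketch below illustrates that recipe only; the weighted linear weak learner on a lag-1 design is an illustrative assumption, not the paper's actual setup or any of its 89 variants.

```python
import numpy as np

def fit_weighted_line(X, y, w):
    """Weighted least-squares fit y ~ a + b*x on the first feature (illustrative weak learner)."""
    A = np.column_stack([np.ones(len(X)), X[:, 0]])
    W = np.diag(w)
    coef, *_ = np.linalg.lstsq(A.T @ W @ A, A.T @ W @ y, rcond=None)
    return lambda Xq: coef[0] + coef[1] * Xq[:, 0]

def adaboost_r2(X, y, n_rounds=10):
    """AdaBoost.R2-style boosting for regression (sketch)."""
    n = len(y)
    w = np.full(n, 1.0 / n)          # uniform initial weights
    models, betas = [], []
    for _ in range(n_rounds):
        f = fit_weighted_line(X, y, w)
        err = np.abs(f(X) - y)
        denom = err.max()
        if denom == 0:                # perfect fit: keep model, stop
            models.append(f)
            betas.append(1e-10)
            break
        L = err / denom               # relative (linear) loss in [0, 1]
        Lbar = np.sum(w * L)          # weighted average loss
        if Lbar >= 0.5:               # weak learner too weak: stop
            break
        beta = Lbar / (1.0 - Lbar)
        models.append(f)
        betas.append(beta)
        w = w * beta ** (1.0 - L)     # down-weight well-fit points
        w /= w.sum()
    return models, betas

def predict(models, betas, Xq):
    """Combine weak forecasts by the weighted median, weights log(1/beta)."""
    preds = np.array([f(Xq) for f in models])          # (n_models, n_queries)
    wts = np.log(1.0 / np.array(betas))
    order = np.argsort(preds, axis=0)
    cum = np.cumsum(wts[order], axis=0)
    idx = np.argmax(cum >= 0.5 * wts.sum(), axis=0)    # first index past half the weight
    srt = np.take_along_axis(preds, order, axis=0)
    return srt[idx, np.arange(preds.shape[1])]
```

In this sketch, the "meta-parameters" the paper varies correspond to choices such as the loss shape (linear vs. squared vs. exponential relative loss), the stopping rule, and the combination scheme (weighted median vs. weighted mean).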
