Crogging (cross-validation aggregation) for forecasting - A novel algorithm of neural network ensembles on time series subsamples

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 9/08/2013
Host publication: 2013 International Joint Conference on Neural Networks, IJCNN 2013
Publisher: IEEE
ISBN (Electronic): 9781467361293
Original language: English
Externally published: Yes
Event: 2013 International Joint Conference on Neural Networks, IJCNN 2013 - Dallas, TX, United States
Duration: 4/08/2013 - 9/08/2013

Conference

Conference: 2013 International Joint Conference on Neural Networks, IJCNN 2013
Country/Territory: United States
City: Dallas, TX
Period: 4/08/13 - 9/08/13

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks

Abstract

In classification, regression and time series prediction alike, cross-validation is widely employed to estimate the expected accuracy of a predictive algorithm by averaging predictive errors across mutually exclusive subsamples of the data. Similarly, bootstrapping aims to increase the validity of estimating the expected accuracy by repeatedly sub-sampling the data with replacement, creating overlapping samples of the data. These estimates are then used to anticipate future risk in decision making, or to guide model selection where multiple candidates are feasible. Beyond error estimation, bootstrapping has recently been extended to combine the diverse models created during estimation, aggregating over their predictions (rather than their errors), an approach coined bootstrap aggregation, or bagging. However, similar extensions of cross-validation to create diverse forecasting models have not been considered. In analogy to bagging, we propose to combine the benefits of cross-validation and forecast aggregation, which we call crogging. We assess different levels of cross-validation, including a (single-fold) hold-out approach, 2-fold and 10-fold cross-validation, and Monte Carlo cross-validation, to create diverse neural network base models for time series prediction, each trained on a different data subset, and average their individual multiple-step-ahead predictions. Results on the 111 time series of the NN3 competition indicate significant improvements in accuracy through crogging relative to bagging or individual model selection of neural networks.
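The core idea of the abstract — train one neural network per cross-validation fold, then average the networks' recursive multi-step-ahead forecasts — can be sketched as follows. This is a minimal illustration only, assuming a simple lag-embedding of the series; the function names, the MLP configuration, and the use of scikit-learn's `KFold` and `MLPRegressor` are our own choices, not details taken from the paper.

```python
# Sketch of crogging (cross-validation aggregation) for time series forecasting.
# Assumptions (ours, not the paper's): lag embedding of a univariate series,
# a small MLP per fold, and recursive multi-step-ahead prediction.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor


def embed(series, n_lags):
    """Turn a univariate series into (lagged-input, one-step-target) pairs."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y


def crogging_forecast(series, n_lags=4, k=5, horizon=3, seed=0):
    """Train one network per CV fold, then average their recursive
    multi-step-ahead forecasts (the aggregation step of crogging)."""
    X, y = embed(series, n_lags)
    models = []
    for train_idx, _ in KFold(n_splits=k, shuffle=True,
                              random_state=seed).split(X):
        m = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                         random_state=seed)
        m.fit(X[train_idx], y[train_idx])  # each fold yields a diverse model
        models.append(m)
    # Each model forecasts recursively from the last observed window;
    # the ensemble forecast is the per-step mean over models.
    paths = []
    for m in models:
        window = list(series[-n_lags:])
        path = []
        for _ in range(horizon):
            pred = m.predict(np.array(window[-n_lags:]).reshape(1, -1))[0]
            path.append(pred)
            window.append(pred)  # feed the prediction back in
        paths.append(path)
    return np.mean(paths, axis=0)


# Example: forecast a noisy sine wave a few steps ahead.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 8 * np.pi, 200)) + 0.05 * rng.standard_normal(200)
forecast = crogging_forecast(series, n_lags=4, k=5, horizon=3)
print(forecast)
```

Note that, unlike bagging's bootstrap resamples, the k folds here are mutually exclusive validation splits, so every observation is held out exactly once across the ensemble.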

Bibliographic note

Copyright: Copyright 2014 Elsevier B.V., All rights reserved.