
An evaluation of simple forecasting model selection rules

Research output: Working paper

Published

Standard

An evaluation of simple forecasting model selection rules. / Fildes, Robert; Petropoulos, Fotios.
Lancaster: The Department of Management Science, 2013. p. 1-32.

Harvard

Fildes, R & Petropoulos, F 2013, 'An evaluation of simple forecasting model selection rules', The Department of Management Science, Lancaster, pp. 1-32.

APA

Fildes, R., & Petropoulos, F. (2013). An evaluation of simple forecasting model selection rules. (pp. 1-32). The Department of Management Science.

Vancouver

Fildes R, Petropoulos F. An evaluation of simple forecasting model selection rules. Lancaster: The Department of Management Science. 2013 Apr, p. 1-32.

Author

Fildes, Robert; Petropoulos, Fotios. / An evaluation of simple forecasting model selection rules. Lancaster: The Department of Management Science, 2013. pp. 1-32

Bibtex

@techreport{2914291b208742fc80685009dc92ffa2,
title = "An evaluation of simple forecasting model selection rules",
abstract = "A major problem for many organisational forecasters is to choose the appropriate forecasting method for a large number of data series. Model selection aims to identify the best method of forecasting for an individual series within the data set. Various selection rules have been proposed in order to enhance forecasting accuracy. In theory, model selection is appealing, as no single extrapolation method is better than all others for all series in an organizational data set. However, empirical results have demonstrated limited effectiveness of these often complex rules. The current study explores the circumstances under which model selection is beneficial. Three measures are examined for characterising the data series, namely predictability (in terms of the relative performance of the random walk but also a method, theta, that performs well), trend and seasonality in the series. In addition, the attributes of the data set and the methods also affect selection performance, including the size of the pools of methods under consideration, the stability of methods{\textquoteright} performance and the correlation between methods. In order to assess the efficacy of model selection in the cases considered, simple selection rules are proposed, based on within-sample best fit or best forecasting performance for different forecast horizons. Individual (per series) selection is contrasted against the simpler approach (aggregate selection), where one method is applied to all data series. Moreover, simple combination of methods also provides an operational benchmark. The analysis shows that individual selection works best when specific sub-populations of data are considered (trended or seasonal series), but also when methods{\textquoteright} relative performance is stable over time or no method is dominant across the data series. ",
author = "Robert Fildes and Fotios Petropoulos",
year = "2013",
month = apr,
language = "English",
pages = "1--32",
publisher = "The Department of Management Science",
address = "Lancaster",
type = "WorkingPaper",
institution = "The Department of Management Science",
}

RIS

TY - UNPB

T1 - An evaluation of simple forecasting model selection rules

AU - Fildes, Robert

AU - Petropoulos, Fotios

PY - 2013/4

Y1 - 2013/4

N2 - A major problem for many organisational forecasters is to choose the appropriate forecasting method for a large number of data series. Model selection aims to identify the best method of forecasting for an individual series within the data set. Various selection rules have been proposed in order to enhance forecasting accuracy. In theory, model selection is appealing, as no single extrapolation method is better than all others for all series in an organizational data set. However, empirical results have demonstrated limited effectiveness of these often complex rules. The current study explores the circumstances under which model selection is beneficial. Three measures are examined for characterising the data series, namely predictability (in terms of the relative performance of the random walk but also a method, theta, that performs well), trend and seasonality in the series. In addition, the attributes of the data set and the methods also affect selection performance, including the size of the pools of methods under consideration, the stability of methods’ performance and the correlation between methods. In order to assess the efficacy of model selection in the cases considered, simple selection rules are proposed, based on within-sample best fit or best forecasting performance for different forecast horizons. Individual (per series) selection is contrasted against the simpler approach (aggregate selection), where one method is applied to all data series. Moreover, simple combination of methods also provides an operational benchmark. The analysis shows that individual selection works best when specific sub-populations of data are considered (trended or seasonal series), but also when methods’ relative performance is stable over time or no method is dominant across the data series.

AB - A major problem for many organisational forecasters is to choose the appropriate forecasting method for a large number of data series. Model selection aims to identify the best method of forecasting for an individual series within the data set. Various selection rules have been proposed in order to enhance forecasting accuracy. In theory, model selection is appealing, as no single extrapolation method is better than all others for all series in an organizational data set. However, empirical results have demonstrated limited effectiveness of these often complex rules. The current study explores the circumstances under which model selection is beneficial. Three measures are examined for characterising the data series, namely predictability (in terms of the relative performance of the random walk but also a method, theta, that performs well), trend and seasonality in the series. In addition, the attributes of the data set and the methods also affect selection performance, including the size of the pools of methods under consideration, the stability of methods’ performance and the correlation between methods. In order to assess the efficacy of model selection in the cases considered, simple selection rules are proposed, based on within-sample best fit or best forecasting performance for different forecast horizons. Individual (per series) selection is contrasted against the simpler approach (aggregate selection), where one method is applied to all data series. Moreover, simple combination of methods also provides an operational benchmark. The analysis shows that individual selection works best when specific sub-populations of data are considered (trended or seasonal series), but also when methods’ relative performance is stable over time or no method is dominant across the data series.

M3 - Working paper

SP - 1

EP - 32

BT - An evaluation of simple forecasting model selection rules

PB - The Department of Management Science

CY - Lancaster

ER -
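
The selection rules summarised in the abstract can be illustrated with a short sketch. The Python example below is a minimal illustration, not the authors' experimental code: it contrasts individual (per-series) selection, aggregate selection and an equal-weight combination over a small pool of simple methods. The pool (random walk, mean, simple exponential smoothing), the MAE error measure, the single validation window and the simulated data are assumptions made for the example rather than details taken from the paper.

import numpy as np

# A small pool of simple extrapolation methods (the paper's pool is larger;
# these three are illustrative stand-ins).
def naive(train, h):
    # Random walk: repeat the last observation.
    return np.repeat(train[-1], h)

def mean_forecast(train, h):
    # Overall mean of the series.
    return np.repeat(train.mean(), h)

def ses(train, h, alpha=0.3):
    # Simple exponential smoothing with a fixed smoothing parameter.
    level = train[0]
    for y in train[1:]:
        level = alpha * y + (1 - alpha) * level
    return np.repeat(level, h)

METHODS = {"naive": naive, "mean": mean_forecast, "ses": ses}

def mae(actual, forecast):
    return np.mean(np.abs(actual - forecast))

def evaluate(series_list, h=6):
    """Contrast individual selection, aggregate selection and combination.

    For each series the last 2*h points are held back: the first h act as a
    validation window for the selection rules, the final h measure accuracy.
    """
    val_err = {m: [] for m in METHODS}   # per-series validation error
    test_err = {m: [] for m in METHODS}  # per-series test error
    combo_err, individual_err = [], []

    for y in series_list:
        fit, val, test = y[:-2 * h], y[-2 * h:-h], y[-h:]

        # Validation errors drive both selection rules.
        for name, f in METHODS.items():
            val_err[name].append(mae(val, f(fit, h)))

        # Out-of-sample forecasts from the extended fitting sample.
        fcsts = {name: f(y[:-h], h) for name, f in METHODS.items()}
        for name, fc in fcsts.items():
            test_err[name].append(mae(test, fc))

        # Individual selection: best validation method for this series.
        best = min(METHODS, key=lambda m: val_err[m][-1])
        individual_err.append(mae(test, fcsts[best]))

        # Simple combination: equal-weight average of all forecasts.
        combo_err.append(mae(test, np.mean(list(fcsts.values()), axis=0)))

    # Aggregate selection: one method for the whole data set,
    # chosen by average validation error.
    agg = min(METHODS, key=lambda m: np.mean(val_err[m]))

    return {
        "individual selection": np.mean(individual_err),
        f"aggregate selection ({agg})": np.mean(test_err[agg]),
        "equal-weight combination": np.mean(combo_err),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy data set: 50 short series, half of them trended.
    data = [np.cumsum(rng.normal(0.2 * (i % 2), 1.0, 60)) + 100 for i in range(50)]
    for rule, err in evaluate(data, h=6).items():
        print(f"{rule:35s} MAE = {err:.3f}")

In the paper the rules are also driven by within-sample fit and by performance over several forecast horizons; the single validation window above is a simplification of that setup.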