Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review
| Publication date | 2010 |
|---|---|
| Host publication | The 2010 International Joint Conference on Neural Networks (IJCNN) |
| Place of publication | New York |
| Publisher | IEEE |
| Pages | - |
| Number of pages | 8 |
| ISBN (print) | 978-1-4244-6917-8 |
| Original language | English |
In recent forecasting competitions, algorithms of Support Vector Regression (SVR) and Neural Networks (NN) have provided some of the most accurate time series predictions, but also some of the least accurate contenders, which failed to outperform even simple statistical benchmark methods. As both SVR and NN offer substantial degrees of freedom in model building (e.g. the selection of input variables and kernel or activation functions), a myriad of heuristics and ad hoc rules have emerged, which may lead to different models with substantial differences in performance. This heterogeneity of results impairs our ability to compare the adequacy of a class of algorithms for a given dataset, and prevents the development of an understanding of their presumed nonlinear and non-parametric capabilities. In order to determine a generalized estimate of performance for both SVR and NN in the absence of an accepted 'best practice' methodology, this paper computes benchmark results employing a naive methodology that deliberately mimics many of the common mistakes in model building. These naive methodologies serve primarily as a lower error bound, representative of a within-class benchmark for both algorithms in predicting the 66 time series of the NNGC Competition. In addition, their discussion aims to draw attention to the most common modelling mistakes that regularly lead to misspecification of multilayer perceptrons (MLPs) and SVRs in time series forecasting.
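For illustration, the following is a minimal sketch, not the authors' code, of what such a "naive" benchmark setup might look like: fixed lagged inputs, default hyperparameters, and no input-variable selection or kernel/activation tuning, i.e. the kind of ad hoc model building the abstract describes. It assumes scikit-learn's `SVR` and `MLPRegressor` as stand-in implementations, and uses a synthetic monthly series in place of an actual NNGC competition series; the lag count of 12 and the 18-point holdout are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

def make_lagged(series, n_lags=12):
    """Turn a univariate series into (X, y) pairs of lagged inputs."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

# Hypothetical monthly series (trend + seasonality + noise) standing in
# for one of the 66 NNGC time series.
rng = np.random.default_rng(0)
t = np.arange(240)
series = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, t.size)

X, y = make_lagged(series, n_lags=12)
X_train, X_test = X[:-18], X[-18:]   # hold out the last 18 observations
y_train, y_test = y[:-18], y[-18:]

# Scale inputs only -- a common shortcut in ad hoc model building.
scaler = StandardScaler().fit(X_train)

for name, model in [
    ("SVR", SVR()),                              # default RBF kernel, default C and epsilon
    ("MLP", MLPRegressor(hidden_layer_sizes=(10,),
                         max_iter=2000, random_state=0)),
]:
    model.fit(scaler.transform(X_train), y_train)
    pred = model.predict(scaler.transform(X_test))
    mae = np.mean(np.abs(pred - y_test))
    print(f"{name}: one-step-ahead MAE = {mae:.3f}")
```

Leaving every hyperparameter at its default and fixing the input lags a priori is exactly the sort of misspecification-prone shortcut the paper's lower-bound benchmark is meant to represent; a careful methodology would instead select lags, kernels or activation functions, and regularization via systematic validation.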