DL-Reg: A Deep Learning Regularization Technique using Linear Regression

Research output: Contribution to Journal/Magazine › Journal article

Journal publication date: 31/10/2020
Journal: arXiv
Publication status: Published
Original language: English

Abstract

Regularization plays a vital role in deep learning by preventing deep neural networks from overfitting. This paper proposes a novel deep learning regularization method, named DL-Reg, which reduces the nonlinearity of deep networks to a certain extent by explicitly encouraging the network to behave as linearly as possible. The key idea is to add a linear constraint to the objective function of the deep neural network: the error of a linear mapping from the inputs to the outputs of the model. This linear constraint, weighted by a regularization factor, reduces the network's risk of overfitting. The performance of DL-Reg is evaluated by training state-of-the-art deep network models on several benchmark datasets. The experimental results show that the proposed regularization method: 1) yields major improvements over existing regularization techniques, and 2) significantly improves the performance of deep neural networks, especially on small training datasets.
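Concretely, the penalty described in the abstract can be realized as the residual of a per-batch least-squares fit from the flattened inputs to the network outputs. The PyTorch sketch below illustrates one plausible reading of that description; the helper name `dl_reg_penalty`, the factor `lam`, and the choice to detach the fitted linear map from the autograd graph are illustrative assumptions, not details taken from the paper.

```python
import torch

def dl_reg_penalty(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Residual of the best linear map from inputs x to outputs y.

    x: (batch, ...) inputs, flattened to (batch, d_in);
    y: (batch, d_out) network outputs.
    Solves W* = argmin_W ||x W - y||^2 per batch and returns the
    mean squared residual, which is small when y is nearly linear in x.
    Assumes the batch size exceeds d_in + 1, so the fit is not
    trivially exact.
    """
    x = x.flatten(start_dim=1)
    # Append a column of ones so the linear map includes a bias term.
    ones = torch.ones(x.size(0), 1, device=x.device, dtype=x.dtype)
    x_aug = torch.cat([x, ones], dim=1)
    # Least-squares fit; detaching W* means gradients only push the
    # network outputs toward the linear fit (an assumption of this sketch).
    w = torch.linalg.lstsq(x_aug, y).solution.detach()
    return (x_aug @ w - y).pow(2).mean()

# Usage inside a training step, with lam as the regularization factor:
#   out = model(inputs)
#   loss = criterion(out, targets) + lam * dl_reg_penalty(inputs, out)
```

In this reading, a larger `lam` pulls the network closer to a purely linear model, while `lam = 0` recovers ordinary training; the abstract does not specify how the factor is scheduled or tuned.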