DL-Reg: A Deep Learning Regularization Technique using Linear Regression

Research output: Contribution to journal › Journal article

Published

Standard

DL-Reg: A Deep Learning Regularization Technique using Linear Regression. / Dialameh, Maryam; Hamzeh, Ali; Rahmani, Hossein.

In: arXiv, 31.10.2020.

Research output: Contribution to journal › Journal article

Bibtex

@article{ae912f8fc11340789b72590cc38a5e7d,
title = "DL-Reg: A Deep Learning Regularization Technique using Linear Regression",
abstract = "Regularization plays a vital role in deep learning by preventing deep neural networks from overfitting. This paper proposes a novel deep learning regularization method named DL-Reg, which reduces the nonlinearity of deep networks to a certain extent by explicitly forcing the network to behave as linearly as possible. The key idea is to add a linear constraint to the objective function of the deep neural network: the error of a linear mapping from the inputs to the outputs of the model. This linear constraint, which is weighted by a regularization factor, reduces the network's risk of overfitting. The performance of DL-Reg is evaluated by training state-of-the-art deep network models on several benchmark datasets. The experimental results show that the proposed regularization method 1) gives major improvements over existing regularization techniques and 2) significantly improves the performance of deep neural networks, especially on small training datasets.",
keywords = "cs.LG, cs.AI, cs.CV",
author = "Maryam Dialameh and Ali Hamzeh and Hossein Rahmani",
year = "2020",
month = oct,
day = "31",
language = "English",
journal = "arXiv",
issn = "2331-8422",
}
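The key idea in the abstract, penalizing the error of a linear mapping from the network's inputs to its outputs, can be sketched as below. This is a hypothetical reading of the abstract, not the authors' reference implementation: the function name `dl_reg_penalty` and the weight `lam` are illustrative, and the linear map is assumed here to be fit by closed-form least squares.

```python
import numpy as np

def dl_reg_penalty(X, Y, lam=0.1):
    """Hypothetical sketch of the DL-Reg idea: fit the best linear map
    from inputs X (n x d) to model outputs Y (n x k) and penalize its
    residual error, scaled by the regularization factor lam."""
    # Augment inputs with a ones column so the linear map includes a bias.
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])
    # Closed-form least-squares solution for Y ~= Xa @ W.
    W, *_ = np.linalg.lstsq(Xa, Y, rcond=None)
    residual = Y - Xa @ W
    # Mean squared error of the linear fit: zero iff the model already
    # behaves linearly on this batch, large when it is highly nonlinear.
    return lam * np.mean(residual ** 2)
```

In training, such a term would be added to the task loss, so gradients push the network toward (but not all the way to) linear behavior; `lam` trades off fit against linearity.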

RIS

TY - JOUR

T1 - DL-Reg

T2 - A Deep Learning Regularization Technique using Linear Regression

AU - Dialameh, Maryam

AU - Hamzeh, Ali

AU - Rahmani, Hossein

PY - 2020/10/31

Y1 - 2020/10/31

N2 - Regularization plays a vital role in deep learning by preventing deep neural networks from overfitting. This paper proposes a novel deep learning regularization method named DL-Reg, which reduces the nonlinearity of deep networks to a certain extent by explicitly forcing the network to behave as linearly as possible. The key idea is to add a linear constraint to the objective function of the deep neural network: the error of a linear mapping from the inputs to the outputs of the model. This linear constraint, which is weighted by a regularization factor, reduces the network's risk of overfitting. The performance of DL-Reg is evaluated by training state-of-the-art deep network models on several benchmark datasets. The experimental results show that the proposed regularization method 1) gives major improvements over existing regularization techniques and 2) significantly improves the performance of deep neural networks, especially on small training datasets.

AB - Regularization plays a vital role in deep learning by preventing deep neural networks from overfitting. This paper proposes a novel deep learning regularization method named DL-Reg, which reduces the nonlinearity of deep networks to a certain extent by explicitly forcing the network to behave as linearly as possible. The key idea is to add a linear constraint to the objective function of the deep neural network: the error of a linear mapping from the inputs to the outputs of the model. This linear constraint, which is weighted by a regularization factor, reduces the network's risk of overfitting. The performance of DL-Reg is evaluated by training state-of-the-art deep network models on several benchmark datasets. The experimental results show that the proposed regularization method 1) gives major improvements over existing regularization techniques and 2) significantly improves the performance of deep neural networks, especially on small training datasets.

KW - cs.LG

KW - cs.AI

KW - cs.CV

M3 - Journal article

JO - arXiv

JF - arXiv

SN - 2331-8422

ER -