Research output: Contribution to conference - Without ISBN/ISSN › Conference paper › peer-review
Fast feedforward non-parametric deep learning network with automatic feature extraction. / Angelov, Plamen Parvanov; Gu, Xiaowei; Principe, Jose.
2017. pp. 534-541. Paper presented at International Joint Conference on Neural Networks (IJCNN).
TY - CONF
T1 - Fast feedforward non-parametric deep learning network with automatic feature extraction
AU - Angelov, Plamen Parvanov
AU - Gu, Xiaowei
AU - Principe, Jose
PY - 2017/5/14
Y1 - 2017/5/14
N2 - In this paper, a new type of feedforward non-parametric deep learning network with automatic feature extraction is proposed. The proposed network is based on human-understandable local aggregations extracted directly from the images, with no need for feature selection or parameter tuning. The network applies a nonlinear transformation and segmentation operations to select the most distinctive features from the training images, and builds RBF neurons from them to perform classification with no weights to train. The design of the proposed network is highly efficient in both computation and time, and produces highly accurate classification results. Moreover, the training process is parallelizable, so training time can be further reduced as more processors are used. Numerical examples demonstrate the high performance and very short training process of the proposed network on different applications.
AB - In this paper, a new type of feedforward non-parametric deep learning network with automatic feature extraction is proposed. The proposed network is based on human-understandable local aggregations extracted directly from the images, with no need for feature selection or parameter tuning. The network applies a nonlinear transformation and segmentation operations to select the most distinctive features from the training images, and builds RBF neurons from them to perform classification with no weights to train. The design of the proposed network is highly efficient in both computation and time, and produces highly accurate classification results. Moreover, the training process is parallelizable, so training time can be further reduced as more processors are used. Numerical examples demonstrate the high performance and very short training process of the proposed network on different applications.
M3 - Conference paper
SP - 534
EP - 541
T2 - International Joint Conference on Neural Networks (IJCNN)
Y2 - 14 May 2017 through 19 May 2017
ER -
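
The abstract describes classification by RBF neurons built around distinctive features taken from the training data, with no weights to train. The sketch below is not the authors' method; it is a minimal illustration of that general idea, assuming raw samples stand in for the extracted prototypes and that a fixed RBF width `gamma` is used. The class name, parameters, and toy data are all hypothetical.

```python
# Hypothetical sketch (not the paper's implementation): store per-class
# prototypes, treat each as an RBF neuron, and classify a sample by the
# class whose neurons give the strongest activation. No weight training.
import numpy as np

class RBFPrototypeClassifier:
    def __init__(self, gamma=1.0):
        self.gamma = gamma        # assumed RBF width parameter
        self.prototypes = None    # one array of prototype vectors per class
        self.labels = None

    def fit(self, X, y):
        # "Training" only collects prototypes; here the raw samples are used,
        # whereas the paper derives them from local aggregations of images.
        self.labels = np.unique(y)
        self.prototypes = [X[y == c] for c in self.labels]
        return self

    def predict(self, X):
        preds = []
        for x in X:
            # Class score = strongest RBF response among its prototypes.
            scores = [np.max(np.exp(-self.gamma * np.sum((P - x) ** 2, axis=1)))
                      for P in self.prototypes]
            preds.append(self.labels[int(np.argmax(scores))])
        return np.array(preds)

# Purely illustrative usage on toy 2-D data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
    y = np.array([0] * 20 + [1] * 20)
    clf = RBFPrototypeClassifier(gamma=0.5).fit(X, y)
    print(clf.predict(np.array([[0.2, -0.1], [3.8, 4.2]])))  # expect [0 1]
```

Because fitting only partitions and stores prototypes per class, the work splits naturally across processors, which matches the abstract's remark that the training process is parallelizable.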