Taylor convolutional networks for image classification

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN > Conference contribution/Paper > peer-review

Published

Standard

Taylor convolutional networks for image classification. / Wang, X.; Li, C.; Mou, Y.; Zhang, B.; Han, J.; Liu, J.

2019 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2019. pp. 1271-1279, Art. no. 8658713.

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN > Conference contribution/Paper > peer-review

Harvard

Wang, X, Li, C, Mou, Y, Zhang, B, Han, J & Liu, J 2019, Taylor convolutional networks for image classification. in 2019 IEEE Winter Conference on Applications of Computer Vision (WACV)., 8658713, IEEE, pp. 1271-1279. https://doi.org/10.1109/WACV.2019.00140

APA

Wang, X., Li, C., Mou, Y., Zhang, B., Han, J., & Liu, J. (2019). Taylor convolutional networks for image classification. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 1271-1279). [8658713] IEEE. https://doi.org/10.1109/WACV.2019.00140

Vancouver

Wang X, Li C, Mou Y, Zhang B, Han J, Liu J. Taylor convolutional networks for image classification. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE. 2019. p. 1271-1279. 8658713. https://doi.org/10.1109/WACV.2019.00140

Author

Wang, X. ; Li, C. ; Mou, Y. ; Zhang, B. ; Han, J. ; Liu, J. / Taylor convolutional networks for image classification. 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2019. pp. 1271-1279

Bibtex

@inproceedings{3ed4b71af8064aedaf233cd33426ade1,
title = "Taylor convolutional networks for image classification",
abstract = "This paper provides a new perspective to understand CNNs based on the Taylor expansion, leading to new Taylor Convolutional Networks (TaylorNets) for image classification. We introduce a principled combination of the high frequency information (i.e., detailed information) and low frequency information in the end-to-end TaylorNets, based on a nonlinear combination of the convolutional feature maps. The steerable module developed in TaylorNets is generic, which can be easily integrated into well-known deep architectures and learned within the same pipeline of the backpropagation algorithm, yielding a higher representation capacity for CNNs. Extensive experimental results demonstrate the super capability of our TaylorNets which improve widely used CNN architectures, such as conventional CNNs and ResNet, in terms of object classification accuracy on well-known benchmarks. The code will be publicly available.",
keywords = "Backpropagation algorithms, Computer vision, Convolution, Network architecture, Convolutional networks, Deep architectures, End to end, High-frequency informations, Low-frequency, Nonlinear combination, Object classification, Taylor expansions, Image classification",
author = "X. Wang and C. Li and Y. Mou and B. Zhang and J. Han and J. Liu",
note = "Export Date: 11 April 2019. Correspondence Address: Zhang, B.; Automation Science and Electrical Engineering, Beihang University, China; email: bczhang@buaa.edu.cn. Funding details: Shenzhen Peacock Plan, KQTD2016112515134654. Funding details: National Natural Science Foundation of China, 61601466, 61672079, 61473086. Funding text 1: The work was supported by the Natural Science Foundation of China under Contract 61601466, 61672079 and 61473086, and Shenzhen Peacock Plan KQTD2016112515134654. This work is supported by the Open Projects Program of National Laboratory of Pattern Recognition. Baochang Zhang is also with Shenzhen Academy of Aerospace Technology, Shenzhen, China. Baochang Zhang is the corresponding author.",
year = "2019",
month = jan,
day = "7",
doi = "10.1109/WACV.2019.00140",
language = "English",
pages = "1271--1279",
booktitle = "2019 IEEE Winter Conference on Applications of Computer Vision (WACV)",
publisher = "IEEE",

}

RIS

TY - GEN

T1 - Taylor convolutional networks for image classification

AU - Wang, X.

AU - Li, C.

AU - Mou, Y.

AU - Zhang, B.

AU - Han, J.

AU - Liu, J.

N1 - Export Date: 11 April 2019. Correspondence Address: Zhang, B.; Automation Science and Electrical Engineering, Beihang University, China; email: bczhang@buaa.edu.cn. Funding details: Shenzhen Peacock Plan, KQTD2016112515134654. Funding details: National Natural Science Foundation of China, 61601466, 61672079, 61473086. Funding text 1: The work was supported by the Natural Science Foundation of China under Contract 61601466, 61672079 and 61473086, and Shenzhen Peacock Plan KQTD2016112515134654. This work is supported by the Open Projects Program of National Laboratory of Pattern Recognition. Baochang Zhang is also with Shenzhen Academy of Aerospace Technology, Shenzhen, China. Baochang Zhang is the corresponding author.

PY - 2019/1/7

Y1 - 2019/1/7

N2 - This paper provides a new perspective to understand CNNs based on the Taylor expansion, leading to new Taylor Convolutional Networks (TaylorNets) for image classification. We introduce a principled combination of the high frequency information (i.e., detailed information) and low frequency information in the end-to-end TaylorNets, based on a nonlinear combination of the convolutional feature maps. The steerable module developed in TaylorNets is generic, which can be easily integrated into well-known deep architectures and learned within the same pipeline of the backpropagation algorithm, yielding a higher representation capacity for CNNs. Extensive experimental results demonstrate the super capability of our TaylorNets which improve widely used CNN architectures, such as conventional CNNs and ResNet, in terms of object classification accuracy on well-known benchmarks. The code will be publicly available.

AB - This paper provides a new perspective to understand CNNs based on the Taylor expansion, leading to new Taylor Convolutional Networks (TaylorNets) for image classification. We introduce a principled combination of the high frequency information (i.e., detailed information) and low frequency information in the end-to-end TaylorNets, based on a nonlinear combination of the convolutional feature maps. The steerable module developed in TaylorNets is generic, which can be easily integrated into well-known deep architectures and learned within the same pipeline of the backpropagation algorithm, yielding a higher representation capacity for CNNs. Extensive experimental results demonstrate the super capability of our TaylorNets which improve widely used CNN architectures, such as conventional CNNs and ResNet, in terms of object classification accuracy on well-known benchmarks. The code will be publicly available.

KW - Backpropagation algorithms

KW - Computer vision

KW - Convolution

KW - Network architecture

KW - Convolutional networks

KW - Deep architectures

KW - End to end

KW - High-frequency informations

KW - Low-frequency

KW - Nonlinear combination

KW - Object classification

KW - Taylor expansions

KW - Image classification

U2 - 10.1109/WACV.2019.00140

DO - 10.1109/WACV.2019.00140

M3 - Conference contribution/Paper

SP - 1271

EP - 1279

BT - 2019 IEEE Winter Conference on Applications of Computer Vision (WACV)

PB - IEEE

ER -
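
Notes

The abstract describes TaylorNets as a nonlinear combination of convolutional feature maps motivated by the Taylor expansion. As a rough illustration only (the function name, the coefficients, and the elementwise-power formulation below are assumptions for the sketch, not the authors' published module), a Taylor-style combination of a feature map might look like:

```python
import numpy as np

def taylor_combine(feature_map, coeffs):
    """Hypothetical sketch: sum coefficient-weighted elementwise powers of a
    convolutional feature map, in the spirit of a truncated Taylor expansion.
    coeffs[k] weights the k-th power term. This illustrates the idea of a
    nonlinear combination of feature maps; it is NOT the paper's exact module."""
    out = np.zeros_like(feature_map, dtype=float)
    for k, c in enumerate(coeffs):
        out += c * np.power(feature_map, k)  # c_k * F^k, applied elementwise
    return out

# Toy 2x2 "feature map" combined with second-order coefficients [0, 1, 0.5]:
# the linear term carries the low frequency response, the squared term adds
# a nonlinear (high frequency, detail-like) contribution.
fm = np.array([[1.0, 2.0], [0.5, -1.0]])
combined = taylor_combine(fm, [0.0, 1.0, 0.5])
```

In the paper's framing, the coefficients of such an expansion would be learned end-to-end within the same backpropagation pipeline as the rest of the network.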