


Taylor convolutional networks for image classification

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper

  • X. Wang
  • C. Li
  • Y. Mou
  • B. Zhang
  • J. Han
  • J. Liu


This paper provides a new perspective for understanding CNNs based on the Taylor expansion, leading to new Taylor Convolutional Networks (TaylorNets) for image classification. We introduce a principled combination of high frequency information (i.e., detailed information) and low frequency information in the end-to-end TaylorNets, based on a nonlinear combination of the convolutional feature maps. The steerable module developed in TaylorNets is generic: it can be easily integrated into well-known deep architectures and learned within the same backpropagation pipeline, yielding a higher representation capacity for CNNs. Extensive experimental results demonstrate the superior capability of our TaylorNets, which improve widely used CNN architectures, such as conventional CNNs and ResNet, in terms of object classification accuracy on well-known benchmarks. The code will be publicly available.
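The abstract describes combining convolutional feature maps nonlinearly in the spirit of a truncated Taylor expansion. As a rough illustration only (the paper's actual module and learned coefficients are not specified here), the sketch below combines elementwise powers of a feature map with fixed coefficients using NumPy; the function name `taylor_module` and the coefficient values are hypothetical.

```python
import numpy as np

def taylor_module(feature_map, coeffs):
    """Hypothetical sketch: combine a convolutional feature map with its
    elementwise powers, weighted by coefficients, mimicking a truncated
    Taylor series. In a real network the coefficients would be learned
    end-to-end via backpropagation, as the abstract describes."""
    out = np.zeros_like(feature_map, dtype=float)
    for k, c in enumerate(coeffs):
        # k = 0 gives the first-order (low frequency) term,
        # higher k adds higher-order (detail) terms
        out += c * np.power(feature_map, k + 1)
    return out

# Toy single-channel 4x4 "feature map"
fm = np.arange(16, dtype=float).reshape(4, 4) / 16.0
combined = taylor_module(fm, coeffs=[1.0, 0.5])  # f + 0.5 * f^2
```

In a full implementation each term would typically be a separate convolutional response rather than an elementwise power of one map; this sketch only conveys the nonlinear-combination idea.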

Bibliographic note

Export Date: 11 April 2019

Correspondence Address: Zhang, B.; Automation Science and Electrical Engineering, Beihang University, China; email: bczhang@buaa.edu.cn

Funding details: Shenzhen Peacock Plan, KQTD2016112515134654; National Natural Science Foundation of China, 61601466, 61672079, 61473086

Funding text: The work was supported by the Natural Science Foundation of China under Contracts 61601466, 61672079 and 61473086, and Shenzhen Peacock Plan KQTD2016112515134654. This work is supported by the Open Projects Program of the National Laboratory of Pattern Recognition. Baochang Zhang is also with the Shenzhen Academy of Aerospace Technology, Shenzhen, China. Baochang Zhang is the corresponding author.