
Electronic data

  • Gabor Convolutional Networks

    Rights statement: ©2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 1.07 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Gabor Convolutional Networks

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published
  • Shangzhen Luan
  • Chen Chen
  • Baochang Zhang
  • Jungong Han
  • Jianzhuang Liu
Journal publication date: 09/2018
Journal: IEEE Transactions on Image Processing
Issue number: 9
Volume: 27
Number of pages: 10
Pages (from-to): 4357-4366
Publication status: Published
Early online date: 10/05/18
Original language: English

Abstract

In steerable filters, a filter of arbitrary orientation is generated as a linear combination of a set of "basis filters". Steerable properties dominate the design of traditional filters, e.g., Gabor filters, and endow features with the capability of handling spatial transformations. However, such properties have not yet been well explored in deep convolutional neural networks (DCNNs). In this paper, we develop a new deep model, namely Gabor Convolutional Networks (GCNs, or Gabor CNNs), which incorporates Gabor filters into DCNNs to reinforce the robustness of learned features against orientation and scale changes. By manipulating the basic element of DCNNs, the convolution operator, on the basis of Gabor filters, GCNs are easy to implement and readily compatible with any popular deep learning architecture. We carry out extensive experiments to demonstrate the promising performance of the GCN framework; the results show its superiority in recognizing objects, especially when scale and rotation changes occur frequently. Moreover, the proposed GCNs have far fewer network parameters to learn and thus effectively reduce training complexity, leading to a more compact deep learning model that still maintains high feature representation capacity. The source code can be found at https://github.com/bczhangbczhang.
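The sketch below illustrates the modulation idea the abstract describes: a learned convolution weight is multiplied elementwise by a fixed bank of Gabor filters at several orientations before the convolution is applied. It is a minimal PyTorch sketch, not the authors' implementation (see the linked repository for that); the names `gabor_bank` and `GaborConv2d`, the Gabor parameter choices, and the exact modulation scheme are illustrative assumptions.

```python
# Minimal sketch of a Gabor-modulated convolution, assuming PyTorch.
# Names and parameter choices are illustrative; the official GCN code
# lives at https://github.com/bczhangbczhang.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


def gabor_bank(kernel_size: int, num_orientations: int) -> torch.Tensor:
    """Real part of Gabor filters at evenly spaced orientations.

    Returns a tensor of shape (num_orientations, kernel_size, kernel_size).
    """
    half = (kernel_size - 1) / 2.0
    ys, xs = torch.meshgrid(
        torch.arange(kernel_size) - half,
        torch.arange(kernel_size) - half,
        indexing="ij",
    )
    sigma, lam = 0.5 * kernel_size, 0.75 * kernel_size  # assumed scale/wavelength
    filters = []
    for u in range(num_orientations):
        theta = math.pi * u / num_orientations
        x_rot = xs * math.cos(theta) + ys * math.sin(theta)
        y_rot = -xs * math.sin(theta) + ys * math.cos(theta)
        g = torch.exp(-(x_rot**2 + y_rot**2) / (2 * sigma**2)) \
            * torch.cos(2 * math.pi * x_rot / lam)
        filters.append(g)
    return torch.stack(filters)


class GaborConv2d(nn.Module):
    """Convolution whose learned weight is modulated by a fixed Gabor bank.

    One weight tensor is shared across orientations; each orientation's
    kernels come from elementwise multiplication with the matching Gabor
    filter, so orientation structure is added without extra learned
    parameters.
    """

    def __init__(self, in_channels, out_channels, kernel_size=3,
                 num_orientations=4, padding=1):
        super().__init__()
        self.padding = padding
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size)
            * (1.0 / math.sqrt(in_channels * kernel_size**2)))
        # Fixed (non-learned) Gabor bank, stored as a buffer.
        self.register_buffer("gabor", gabor_bank(kernel_size, num_orientations))

    def forward(self, x):
        # Broadcast (U, 1, 1, k, k) * (1, O, I, k, k) -> (U, O, I, k, k),
        # then flatten orientations into the output-channel dimension.
        w = self.gabor[:, None, None] * self.weight[None]
        w = w.reshape(-1, *self.weight.shape[1:])
        return F.conv2d(x, w, padding=self.padding)


if __name__ == "__main__":
    layer = GaborConv2d(3, 8, kernel_size=3, num_orientations=4)
    y = layer(torch.randn(1, 3, 32, 32))
    print(y.shape)  # torch.Size([1, 32, 32, 32]): 8 channels x 4 orientations
```

Under these assumptions, the parameter saving the abstract mentions is visible directly: the layer learns one 8x3x3x3 weight tensor but emits 32 orientation-aware channels, whereas a plain convolution with 32 output channels would learn four times as many weights.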
