
Electronic data

  • Gabor Convolutional Networks

    Rights statement: ©2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 1.07 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License


Gabor Convolutional Networks

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Gabor Convolutional Networks. / Luan, Shangzhen; Chen, Chen; Zhang, Baochang et al.
In: IEEE Transactions on Image Processing, Vol. 27, No. 9, 09.2018, p. 4357-4366.


Harvard

Luan, S, Chen, C, Zhang, B, Han, J & Liu, J 2018, 'Gabor Convolutional Networks', IEEE Transactions on Image Processing, vol. 27, no. 9, pp. 4357-4366. https://doi.org/10.1109/TIP.2018.2835143

APA

Luan, S., Chen, C., Zhang, B., Han, J., & Liu, J. (2018). Gabor Convolutional Networks. IEEE Transactions on Image Processing, 27(9), 4357-4366. https://doi.org/10.1109/TIP.2018.2835143

Vancouver

Luan S, Chen C, Zhang B, Han J, Liu J. Gabor Convolutional Networks. IEEE Transactions on Image Processing. 2018 Sept;27(9):4357-4366. Epub 2018 May 10. doi: 10.1109/TIP.2018.2835143

Author

Luan, Shangzhen ; Chen, Chen ; Zhang, Baochang et al. / Gabor Convolutional Networks. In: IEEE Transactions on Image Processing. 2018 ; Vol. 27, No. 9. pp. 4357-4366.

Bibtex

@article{6d3b51d15b0942f7b17c6befb7d3d62a,
title = "Gabor Convolutional Networks",
abstract = "In steerable filters, a filter of arbitrary orientation can be generated by a linear combination of a set of “basis filters”. Steerable properties dominate the design of traditional filters, e.g., Gabor filters, and endow features with the capability of handling spatial transformations. However, such properties have not yet been well explored in deep convolutional neural networks (DCNNs). In this paper, we develop a new deep model, namely Gabor Convolutional Networks (GCNs or GaborCNNs), with Gabor filters incorporated into DCNNs so that the robustness of learned features against orientation and scale changes can be reinforced. By manipulating the basic element of DCNNs, i.e., the convolution operator, based on Gabor filters, GCNs can be easily implemented and are readily compatible with any popular deep learning architecture. We carry out extensive experiments to demonstrate the promising performance of our GCN framework, and the results show its superiority in recognizing objects, especially when scale and rotation changes occur frequently. Moreover, the proposed GCNs have far fewer network parameters to learn and can effectively reduce the training complexity of the network, leading to a more compact deep learning model that still maintains a high feature representation capacity. The source code can be found at https://github.com/bczhangbczhang.",
author = "Shangzhen Luan and Chen Chen and Baochang Zhang and Jungong Han and Jianzhuang Liu",
note = "{\textcopyright}2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.",
year = "2018",
month = sep,
doi = "10.1109/TIP.2018.2835143",
language = "English",
volume = "27",
pages = "4357--4366",
journal = "IEEE Transactions on Image Processing",
issn = "1057-7149",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "9",
}

RIS

TY - JOUR

T1 - Gabor Convolutional Networks

AU - Luan, Shangzhen

AU - Chen, Chen

AU - Zhang, Baochang

AU - Han, Jungong

AU - Liu, Jianzhuang

N1 - ©2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

PY - 2018/9

Y1 - 2018/9

N2 - In steerable filters, a filter of arbitrary orientation can be generated by a linear combination of a set of “basis filters”. Steerable properties dominate the design of traditional filters, e.g., Gabor filters, and endow features with the capability of handling spatial transformations. However, such properties have not yet been well explored in deep convolutional neural networks (DCNNs). In this paper, we develop a new deep model, namely Gabor Convolutional Networks (GCNs or GaborCNNs), with Gabor filters incorporated into DCNNs so that the robustness of learned features against orientation and scale changes can be reinforced. By manipulating the basic element of DCNNs, i.e., the convolution operator, based on Gabor filters, GCNs can be easily implemented and are readily compatible with any popular deep learning architecture. We carry out extensive experiments to demonstrate the promising performance of our GCN framework, and the results show its superiority in recognizing objects, especially when scale and rotation changes occur frequently. Moreover, the proposed GCNs have far fewer network parameters to learn and can effectively reduce the training complexity of the network, leading to a more compact deep learning model that still maintains a high feature representation capacity. The source code can be found at https://github.com/bczhangbczhang.

AB - In steerable filters, a filter of arbitrary orientation can be generated by a linear combination of a set of “basis filters”. Steerable properties dominate the design of traditional filters, e.g., Gabor filters, and endow features with the capability of handling spatial transformations. However, such properties have not yet been well explored in deep convolutional neural networks (DCNNs). In this paper, we develop a new deep model, namely Gabor Convolutional Networks (GCNs or GaborCNNs), with Gabor filters incorporated into DCNNs so that the robustness of learned features against orientation and scale changes can be reinforced. By manipulating the basic element of DCNNs, i.e., the convolution operator, based on Gabor filters, GCNs can be easily implemented and are readily compatible with any popular deep learning architecture. We carry out extensive experiments to demonstrate the promising performance of our GCN framework, and the results show its superiority in recognizing objects, especially when scale and rotation changes occur frequently. Moreover, the proposed GCNs have far fewer network parameters to learn and can effectively reduce the training complexity of the network, leading to a more compact deep learning model that still maintains a high feature representation capacity. The source code can be found at https://github.com/bczhangbczhang.

U2 - 10.1109/TIP.2018.2835143

DO - 10.1109/TIP.2018.2835143

M3 - Journal article

VL - 27

SP - 4357

EP - 4366

JO - IEEE Transactions on Image Processing

JF - IEEE Transactions on Image Processing

SN - 1057-7149

IS - 9

ER -
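
The abstract describes modulating the convolution operator of a DCNN with Gabor filters so that one learned kernel yields orientation-aware filters. As a rough illustration of that idea only (this is not the authors' implementation; the function names, parameter values, and NumPy formulation are our own assumptions — see the repository linked in the abstract for the real code), the sketch below builds a small bank of Gabor orientation filters and modulates a learned kernel elementwise:

```python
import numpy as np

def gabor_kernel(size, theta, sigma=2.0, lam=4.0, gamma=0.5):
    """Real part of a Gabor filter at orientation theta (radians).
    Parameter names/values are illustrative, not from the paper."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates to the requested orientation.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr) ** 2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam)
    return envelope * carrier

def gabor_modulated_filters(weight, n_orientations=4):
    """Expand one learned k-by-k kernel into n_orientations filters by
    elementwise modulation with a bank of Gabor orientation filters."""
    size = weight.shape[-1]
    thetas = np.arange(n_orientations) * np.pi / n_orientations
    bank = np.stack([gabor_kernel(size, t) for t in thetas])  # (O, k, k)
    return bank * weight  # broadcast to (O, k, k)

# Example: a single learned 5x5 kernel expanded into 4 orientation channels.
rng = np.random.default_rng(0)
learned = rng.standard_normal((5, 5))
filters = gabor_modulated_filters(learned, n_orientations=4)
print(filters.shape)  # → (4, 5, 5)
```

Because the Gabor bank is fixed, only the single base kernel is learned, which is consistent with the abstract's claim that GCNs need fewer learned parameters than a plain DCNN with one independent kernel per orientation.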