
Auto-balanced filter pruning for efficient convolutional neural networks

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 27/04/2018
Host publication: The Thirty-Second AAAI Conference on Artificial Intelligence, the Thirtieth Innovative Applications of Artificial Intelligence Conference, and the Eighth AAAI Symposium on Educational Advances in Artificial Intelligence
Publisher: AAAI
Pages: 6797-6804
Number of pages: 8
ISBN (electronic): 9781577358008
Original language: English
Event: Thirty-Second AAAI Conference on Artificial Intelligence - Hilton New Orleans Riverside, New Orleans, United States
Duration: 2/02/2018 - 7/02/2018
Conference number: 32nd
Internet address: https://aaai.org/Conferences/AAAI-18/

Conference

Conference: Thirty-Second AAAI Conference on Artificial Intelligence
Abbreviated title: AAAI-18
Country/Territory: United States
City: New Orleans
Period: 2/02/18 - 7/02/18

Publication series

Name: The Thirty-Second AAAI Conference on Artificial Intelligence, the Thirtieth Innovative Applications of Artificial Intelligence Conference, and the Eighth AAAI Symposium on Educational Advances in Artificial Intelligence
Publisher: AAAI
ISSN (electronic): 2374-3468


Abstract

In recent years, considerable research effort has been devoted to compression techniques for convolutional neural networks (CNNs). Many works so far have focused on connection pruning methods, which produce sparse parameter tensors in convolutional or fully-connected layers. Several studies have demonstrated that even simple methods can effectively eliminate connections of a CNN. However, since these methods only make parameter tensors sparser, not smaller, the compression may not translate directly into acceleration without support from specially designed hardware. In this paper, we propose an iterative approach named Auto-balanced Filter Pruning, in which we pre-train the network in an innovative auto-balanced way that transfers the representational capacity of its convolutional layers to a fraction of the filters, prune the redundant filters, and then re-train the network to restore its accuracy. In this way, a smaller version of the original network is learned and the number of floating-point operations (FLOPs) is reduced. By applying this method to several common CNNs, we show that a large portion of the filters can be discarded without an obvious drop in accuracy, leading to a significant reduction in computational burden. Concretely, we reduce the inference cost of LeNet-5 on MNIST, and of VGG-16 and ResNet-56 on CIFAR-10, by 95.1%, 79.7% and 60.9%, respectively. Copyright © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
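The core mechanics of filter-level pruning described in the abstract can be illustrated with a minimal sketch. The snippet below is not the paper's auto-balanced procedure; it only demonstrates the generic prune step under a common assumption: filters whose weights have small magnitude (here, small L1 norm) contribute little and can be removed whole, which shrinks the weight tensor itself rather than just sparsifying it. The function name and the toy layer shapes are illustrative choices, not from the paper.

```python
import numpy as np

def prune_filters_by_l1(weights, keep_ratio=0.5):
    """Keep the fraction of conv filters with the largest L1 norms.

    weights: array of shape (num_filters, in_channels, kh, kw).
    Returns the pruned weight tensor and the indices of kept filters.
    """
    # One L1 norm per filter (sum of absolute values over all its weights).
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    num_keep = max(1, int(round(weights.shape[0] * keep_ratio)))
    # Indices of the strongest filters, restored to ascending order.
    kept = np.sort(np.argsort(norms)[::-1][:num_keep])
    return weights[kept], kept

# Toy conv layer: 8 filters, 3 input channels, 3x3 kernels.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
w[[1, 4, 6]] *= 0.01  # shrink three filters so they look redundant

pruned, kept = prune_filters_by_l1(w, keep_ratio=0.5)
print(pruned.shape)  # (4, 3, 3, 3) -- a genuinely smaller tensor
print(kept)          # the near-zero filters 1, 4 and 6 are dropped
```

In a full pipeline, the next layer's input channels would be sliced to match `kept`, and the network re-trained to recover accuracy, which is the role of the re-training stage the abstract describes.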