
Electronic data

  • FINAL VERSION

    Accepted author manuscript, 8.51 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License


Latent Constrained Correlation Filter

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published
  • Baochang Zhang
  • Shangzhen Luan
  • Chen Chen
  • Jungong Han
  • Wei Wang
  • Alessandro Perina
  • Ling Shao
Journal publication date: 03/2018
Journal: IEEE Transactions on Image Processing
Issue number: 3
Volume: 27
Number of pages: 11
Pages (from-to): 1038-1048
Publication status: Published
Early online date: 17/11/17
Original language: English

Abstract

Correlation filters are special classifiers designed for shift-invariant object recognition and are robust to pattern distortions. The recent literature shows that combining a set of sub-filters, each trained on a single image or a small group of images, obtains the best performance. This idea is equivalent to estimating a variable distribution based on data sampling (bagging), which can be interpreted as finding solutions (a variable-distribution approximation) directly from the sampled data space. However, this methodology fails to account for the variations that exist in the data. In this paper, we introduce an intermediate step, solution sampling, after the data-sampling step to form a subspace in which an optimal solution can be estimated. More specifically, we propose a new method, named latent constrained correlation filters (LCCF), which maps the correlation filters to a given latent subspace, and we develop a new learning framework in that subspace that embeds distribution-related constraints into the original problem. To solve the optimization problem, we introduce a subspace-based alternating direction method of multipliers, which is proven to converge at the saddle point. Our approach is successfully applied to three different tasks: eye localization, car detection, and object tracking. Extensive experiments demonstrate that LCCF outperforms state-of-the-art methods. The source code will be publicly available: https://github.com/bczhangbczhang/
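To make the two-stage idea in the abstract concrete, below is a minimal sketch of that pipeline: several sub-filters are trained on image subsets (data sampling), and the resulting filters are then projected onto a low-dimensional latent subspace (solution sampling). This is not the authors' implementation; the MOSSE-style closed-form ridge-regression sub-filter, the SVD-based subspace estimate, and all function names are illustrative assumptions.

```python
import numpy as np

def train_subfilter(images, responses, lam=1e-2):
    """Closed-form ridge-regression correlation filter in the Fourier
    domain (MOSSE-style); one filter per image subset ("data sampling")."""
    num = np.zeros_like(np.fft.fft2(images[0]))
    den = np.zeros_like(num)
    for x, y in zip(images, responses):
        X, Y = np.fft.fft2(x), np.fft.fft2(y)
        num += np.conj(X) * Y          # accumulate cross-spectrum
        den += np.conj(X) * X          # accumulate auto-spectrum
    return num / (den + lam)           # regularized filter (frequency domain)

def latent_subspace_filter(sub_filters, k=3):
    """"Solution sampling": stack the sub-filters, estimate a k-dimensional
    latent subspace with an SVD, and project each filter onto it. The mean
    of the projected filters stands in for the constrained solution."""
    F = np.stack([f.ravel() for f in sub_filters])           # (m, d), complex
    mean = F.mean(axis=0, keepdims=True)
    U, s, Vh = np.linalg.svd(F - mean, full_matrices=False)
    basis = Vh[:k]                                            # latent subspace
    projected = (F - mean) @ basis.conj().T @ basis + mean    # project back
    return projected.mean(axis=0).reshape(sub_filters[0].shape)

# Usage (illustrative): one sub-filter per training image, then a rank-3
# latent projection of the resulting filter set.
# subs = [train_subfilter([img], [gauss_peak]) for img in train_images]
# h = latent_subspace_filter(subs, k=3)
# response = np.real(np.fft.ifft2(np.fft.fft2(test_image) * h))
```

Averaging the projected filters merely stands in for the constrained optimum that the paper obtains with its subspace-based ADMM; the point of the sketch is only the extra solution-sampling step inserted between data sampling and the final filter.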

Bibliographic note

©2017 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.