
Electronic data

  • ScalableTracking

    Rights statement: ©2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 2.15 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI:

Real-Time Scalable Visual Tracking via Quadrangle Kernelized Correlation Filters

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published
  • Guiguang Ding
  • Wenshuo Chen
  • Sicheng Zhao
  • Jungong Han
  • Qiaoyan Liu
Journal publication date: 01/2018
Journal: IEEE Transactions on Intelligent Transportation Systems
Issue number: 1
Volume: 19
Number of pages: 11
Pages (from-to): 140-150
Publication status: Published
Early online date: 7/12/17
Original language: English

Abstract

The correlation filter (CF) has been widely used in tracking tasks due to its simplicity and high efficiency. However, conventional CF-based trackers cannot handle the scale variation that occurs while the target object is moving, which is one of the most notable unsolved problems in visual object tracking. In this paper, we propose a scalable visual tracking algorithm based on kernelized correlation filters, referred to as quadrangle kernelized correlation filters (QKCF). Unlike existing complicated scalable trackers, which either perform the correlation filtering operation multiple times or extract many candidate windows at various scales, our tracker estimates the scale of the object from the positions of its four corners, which can be detected within a single filtering process using a new Gaussian training output matrix. After obtaining the four peak values corresponding to the four corners, we measure the detection confidence of each part response by evaluating its spatial and temporal smoothness. A weighted Bayesian inference framework is then employed to estimate the final location and size of the bounding box from the response matrix, where the weights are synchronized with the computed detection likelihoods. Experiments are performed on the OTB-100 data set and on 16 benchmark sequences with significant scale variations. The results demonstrate the superiority of the proposed method, in terms of both effectiveness and robustness, compared with state-of-the-art methods.
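For context, the QKCF described above extends the standard kernelized correlation filter (KCF) with a four-corner Gaussian training output and weighted Bayesian fusion. The sketch below is not the paper's method; it is a minimal NumPy illustration of the generic single-kernel KCF train/detect cycle (Henriques et al., 2015) that such trackers build on, with all function names chosen for illustration.

```python
import numpy as np

def gaussian_kernel_correlation(x, z, sigma=0.5):
    """Gaussian kernel evaluated between x and every cyclic shift of z,
    computed efficiently in the Fourier domain (the core KCF trick)."""
    # Cross-correlation of x with all cyclic shifts of z, via FFT.
    c = np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(z))))
    # Squared distance between x and each shifted copy of z.
    d = np.sum(x ** 2) + np.sum(z ** 2) - 2.0 * c
    return np.exp(-np.clip(d, 0.0, None) / (sigma ** 2 * x.size))

def train(x, y, sigma=0.5, lam=1e-4):
    """Learn the dual coefficients (in the Fourier domain) from a training
    patch x and a Gaussian-shaped desired output map y."""
    k = gaussian_kernel_correlation(x, x, sigma)
    return np.fft.fft2(y) / (np.fft.fft2(k) + lam)

def detect(alpha_f, x, z, sigma=0.5):
    """Evaluate the learned filter on a new patch z; the peak of the
    returned response map gives the translation of the tracked point."""
    k = gaussian_kernel_correlation(x, z, sigma)
    return np.real(np.fft.ifft2(alpha_f * np.fft.fft2(k)))

def gaussian_label(h, w, sigma=2.0):
    """Wrapped 2-D Gaussian peaked at (0, 0): the desired filter output."""
    ys = np.minimum(np.arange(h), h - np.arange(h))[:, None]
    xs = np.minimum(np.arange(w), w - np.arange(w))[None, :]
    return np.exp(-(ys ** 2 + xs ** 2) / (2.0 * sigma ** 2))

# Usage: train on a patch, then detect on the same (unshifted) patch,
# so the response peak should appear at the label's peak position (0, 0).
rng = np.random.default_rng(0)
x = rng.random((32, 32))
alpha_f = train(x, gaussian_label(32, 32))
response = detect(alpha_f, x, x)
```

QKCF replaces the single Gaussian output map with one peak per bounding-box corner, so the four corner positions, and hence the scale, fall out of one filtering pass instead of repeated filtering at multiple candidate scales.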
