
Electronic data

  • paper

    Accepted author manuscript, 1.32 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License


Not Every Patch is Needed: Toward a More Efficient and Effective Backbone for Video-Based Person Re-Identification

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published
  • Lanyun Zhu
  • Tianrun Chen
  • Deyi Ji
  • Jieping Ye
  • Jun Liu
Journal publication date: 31/01/2025
Journal: IEEE Transactions on Image Processing
Volume: 34
Number of pages: 16
Pages (from-to): 785-800
Publication status: Published
Early online date: 27/01/2025
Original language: English

Abstract

This paper proposes a new effective and efficient plug-and-play backbone for video-based person re-identification (ReID). Conventional video-based ReID methods typically use CNN or transformer backbones to extract deep features for every position in every sampled video frame. Here, we argue that this exhaustive feature extraction could be unnecessary, since we find that different frames in a ReID video often exhibit small differences and contain many similar regions due to the relatively slight movements of human beings. Inspired by this, a more selective, efficient paradigm is explored in this paper. Specifically, we introduce a patch selection mechanism to reduce computational cost by choosing only the crucial and non-repetitive patches for feature extraction. Additionally, we present a novel network structure that generates and utilizes pseudo frame global context to address the issue of incomplete views resulting from sparse inputs. By incorporating these new designs, our backbone can achieve both high performance and low computational cost. Extensive experiments on multiple datasets show that our approach reduces the computational cost by 74% compared to ViT-B and 28% compared to ResNet50, while the accuracy is on par with ViT-B and outperforms ResNet50 significantly.
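The core idea of the patch selection mechanism can be illustrated with a minimal sketch. This is not the paper's actual mechanism (whose selection criterion is learned within the backbone); it assumes a simple hand-set threshold on the mean absolute difference between corresponding patches in consecutive frames, keeping only patches that changed enough to warrant fresh feature extraction:

```python
import numpy as np

def select_patches(frames, patch=16, thresh=0.1):
    """Keep only patches that differ enough from the previous frame.

    frames: (T, H, W, C) float array.
    Returns a list of (t, row, col) patch indices; every patch of
    frame 0 is kept as the reference.
    """
    T, H, W, _ = frames.shape
    rows, cols = H // patch, W // patch
    selected = [(0, r, c) for r in range(rows) for c in range(cols)]
    for t in range(1, T):
        for r in range(rows):
            for c in range(cols):
                cur = frames[t, r*patch:(r+1)*patch, c*patch:(c+1)*patch]
                prev = frames[t-1, r*patch:(r+1)*patch, c*patch:(c+1)*patch]
                # mean absolute difference as a cheap change score
                if np.abs(cur - prev).mean() > thresh:
                    selected.append((t, r, c))
    return selected

# A perfectly static clip keeps only the 2x2 patches of frame 0,
# so later frames contribute no redundant patches.
static = np.zeros((4, 32, 32, 3))
print(len(select_patches(static)))  # 4
```

In a real backbone, the selected patches would then be tokenized and fed to the transformer, while the skipped (repetitive) regions reuse context from earlier frames, which is what motivates the paper's pseudo frame global context for handling the resulting incomplete views.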