
Electronic data

  • paper

    Accepted author manuscript, 1.32 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License

Links

Text available via DOI: 10.1109/tip.2025.3531299

Not Every Patch is Needed: Toward a More Efficient and Effective Backbone for Video-Based Person Re-Identification

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Not Every Patch is Needed: Toward a More Efficient and Effective Backbone for Video-Based Person Re-Identification. / Zhu, Lanyun; Chen, Tianrun; Ji, Deyi et al.
In: IEEE Transactions on Image Processing, Vol. 34, 31.01.2025, p. 785-800.

Vancouver

Zhu L, Chen T, Ji D, Ye J, Liu J. Not Every Patch is Needed: Toward a More Efficient and Effective Backbone for Video-Based Person Re-Identification. IEEE Transactions on Image Processing. 2025 Jan 31;34:785-800. Epub 2025 Jan 27. doi: 10.1109/tip.2025.3531299

Author

Zhu, Lanyun; Chen, Tianrun; Ji, Deyi et al. / Not Every Patch is Needed: Toward a More Efficient and Effective Backbone for Video-Based Person Re-Identification. In: IEEE Transactions on Image Processing. 2025; Vol. 34. pp. 785-800.

Bibtex

@article{f7334e5e24454b14a8a01a685aa94682,
title = "Not Every Patch is Needed: Toward a More Efficient and Effective Backbone for Video-Based Person Re-Identification",
abstract = "This paper proposes a new effective and efficient plug-and-play backbone for video-based person re-identification (ReID). Conventional video-based ReID methods typically use CNN or transformer backbones to extract deep features for every position in every sampled video frame. Here, we argue that this exhaustive feature extraction could be unnecessary, since we find that different frames in a ReID video often exhibit small differences and contain many similar regions due to the relatively slight movements of human beings. Inspired by this, a more selective, efficient paradigm is explored in this paper. Specifically, we introduce a patch selection mechanism to reduce computational cost by choosing only the crucial and non-repetitive patches for feature extraction. Additionally, we present a novel network structure that generates and utilizes pseudo frame global context to address the issue of incomplete views resulting from sparse inputs. By incorporating these new designs, our backbone can achieve both high performance and low computational cost. Extensive experiments on multiple datasets show that our approach reduces the computational cost by 74% compared to ViT-B and 28% compared to ResNet50, while the accuracy is on par with ViT-B and outperforms ResNet50 significantly.",
author = "Lanyun Zhu and Tianrun Chen and Deyi Ji and Jieping Ye and Jun Liu",
year = "2025",
month = jan,
day = "31",
doi = "10.1109/tip.2025.3531299",
language = "English",
volume = "34",
pages = "785--800",
journal = "IEEE Transactions on Image Processing",
issn = "1057-7149",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}
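
The abstract describes a patch selection mechanism that skips repetitive regions across consecutive frames. Purely as an illustration of that general idea, the sketch below (PyTorch) drops patches whose embeddings barely change from the previous frame and keeps the most-changed ones up to a budget; the cosine-similarity criterion, the threshold, the per-frame budget, and all names here are assumptions for this sketch, not the authors' implementation.

import torch

def select_patches(frame_tokens, keep_ratio=0.5, sim_threshold=0.95):
    # frame_tokens: (T, N, D) -- T frames, N patch tokens, D-dim embeddings.
    # The first frame keeps every patch; for each later frame, patches whose
    # cosine similarity to the co-located patch in the previous frame exceeds
    # sim_threshold are treated as repetitive, and at most keep_ratio * N of
    # the most-changed patches are retained. (Illustrative criterion only.)
    T, N, _ = frame_tokens.shape
    kept = [torch.arange(N)]                    # all patches of frame 0
    budget = max(1, int(keep_ratio * N))
    for t in range(1, T):
        sim = torch.nn.functional.cosine_similarity(
            frame_tokens[t - 1], frame_tokens[t], dim=-1)  # (N,)
        order = torch.argsort(sim)              # most-changed patches first
        candidates = order[sim[order] < sim_threshold]
        kept.append(candidates[:budget])
    return kept                                 # list of index tensors

# Usage with stand-in data: 8 frames, 196 ViT patches of dimension 768.
indices = select_patches(torch.randn(8, 196, 768))
print([len(i) for i in indices])

Only the selected patches would then be fed through the backbone, which is where savings of the kind the abstract reports come from: fewer tokens per frame directly reduces the cost of each transformer layer.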

RIS

TY - JOUR

T1 - Not Every Patch is Needed

T2 - Toward a More Efficient and Effective Backbone for Video-Based Person Re-Identification

AU - Zhu, Lanyun

AU - Chen, Tianrun

AU - Ji, Deyi

AU - Ye, Jieping

AU - Liu, Jun

PY - 2025/1/31

Y1 - 2025/1/31

N2 - This paper proposes a new effective and efficient plug-and-play backbone for video-based person re-identification (ReID). Conventional video-based ReID methods typically use CNN or transformer backbones to extract deep features for every position in every sampled video frame. Here, we argue that this exhaustive feature extraction could be unnecessary, since we find that different frames in a ReID video often exhibit small differences and contain many similar regions due to the relatively slight movements of human beings. Inspired by this, a more selective, efficient paradigm is explored in this paper. Specifically, we introduce a patch selection mechanism to reduce computational cost by choosing only the crucial and non-repetitive patches for feature extraction. Additionally, we present a novel network structure that generates and utilizes pseudo frame global context to address the issue of incomplete views resulting from sparse inputs. By incorporating these new designs, our backbone can achieve both high performance and low computational cost. Extensive experiments on multiple datasets show that our approach reduces the computational cost by 74% compared to ViT-B and 28% compared to ResNet50, while the accuracy is on par with ViT-B and outperforms ResNet50 significantly.

AB - This paper proposes a new effective and efficient plug-and-play backbone for video-based person re-identification (ReID). Conventional video-based ReID methods typically use CNN or transformer backbones to extract deep features for every position in every sampled video frame. Here, we argue that this exhaustive feature extraction could be unnecessary, since we find that different frames in a ReID video often exhibit small differences and contain many similar regions due to the relatively slight movements of human beings. Inspired by this, a more selective, efficient paradigm is explored in this paper. Specifically, we introduce a patch selection mechanism to reduce computational cost by choosing only the crucial and non-repetitive patches for feature extraction. Additionally, we present a novel network structure that generates and utilizes pseudo frame global context to address the issue of incomplete views resulting from sparse inputs. By incorporating these new designs, our backbone can achieve both high performance and low computational cost. Extensive experiments on multiple datasets show that our approach reduces the computational cost by 74% compared to ViT-B and 28% compared to ResNet50, while the accuracy is on par with ViT-B and outperforms ResNet50 significantly.

U2 - 10.1109/tip.2025.3531299

DO - 10.1109/tip.2025.3531299

M3 - Journal article

VL - 34

SP - 785

EP - 800

JO - IEEE Transactions on Image Processing

JF - IEEE Transactions on Image Processing

SN - 1057-7149

ER -
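
The abstract also mentions generating a pseudo frame-level global context to compensate for the incomplete view of a sparsely encoded frame. As one plausible reading only, the sketch below builds a pseudo global token by letting the global token of a fully encoded reference frame attend over the current frame's selected patch tokens; the module name, the attention design, and the dimensions are assumptions for this sketch, not the paper's architecture.

import torch
import torch.nn as nn

class PseudoGlobalContext(nn.Module):
    # Hypothetical stand-in: approximates a global token for a frame encoded
    # from only a sparse subset of patches. A reference global token (e.g.,
    # from a densely encoded key frame) attends over the kept patch tokens,
    # and the result is fused back via a residual connection.
    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, ref_global, kept_patches):
        # ref_global: (B, 1, D); kept_patches: (B, K, D)
        ctx, _ = self.attn(ref_global, kept_patches, kept_patches)
        return self.norm(ref_global + ctx)      # pseudo global token (B, 1, D)

# Usage with stand-in tensors: batch of 2, 98 kept patches.
module = PseudoGlobalContext()
pseudo = module(torch.randn(2, 1, 768), torch.randn(2, 98, 768))
print(pseudo.shape)                             # torch.Size([2, 1, 768])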