

Dual-Tuning: Joint Prototype Transfer and Structure Regularization for Compatible Feature Learning

Research output: Contribution to Journal/Magazine › Journal article › peer-review

  • Yan Bai
  • Jile Jiao
  • Yihang Lou
  • Shengsen Wu
  • Jun Liu
  • Xuetao Feng
  • Ling-Yu Duan
Journal publication date: 31/12/2023
Journal: IEEE Transactions on Multimedia
Volume: 25
Number of pages: 13
Pages (from-to): 7287-7298
Publication status: Published
Early online date: 4/11/22
Original language: English

Abstract

Visual retrieval systems face frequent model updates and redeployment, and re-extracting features for the entire database after every update is a heavy workload. Feature compatibility enables newly learned visual features to be compared directly with the old features stored in the database, so the inflexible and time-consuming re-extraction process can be bypassed when the deployed model is updated. However, the old feature space that must be made compatible is not ideal and contains outlier samples. Moreover, the new and old models may be supervised by different losses, which further causes a distribution discrepancy between the two feature spaces. In this article, we propose Dual-Tuning, a global optimization method that achieves feature compatibility across different networks and losses. A feature-level prototype loss is proposed to explicitly align the two types of embedding features by transferring global prototype information. Furthermore, we design a component-level mutual structural regularization to implicitly optimize the intrinsic structure of the features. Experiments are conducted on six datasets, including person ReID datasets, face recognition datasets, and the million-scale ImageNet and Places365. Experimental results demonstrate that our Dual-Tuning is able to obtain feature compatibility without sacrificing performance.
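The abstract does not give the loss formulations, but a prototype-transfer objective of the kind it describes is easy to sketch. The following is a minimal, hypothetical PyTorch illustration, not the paper's actual method: all names (old_prototypes, temperature, the cosine-similarity classification form) are assumptions introduced here for clarity. The idea is that if each new-model embedding lands closest to the old model's prototype for its own class, new and old features remain directly comparable.

```python
import torch
import torch.nn.functional as F

def prototype_transfer_loss(new_feats, labels, old_prototypes, temperature=0.1):
    """Hypothetical sketch of a feature-level prototype loss.

    Pulls new-model embeddings toward the (frozen) per-class mean
    features of the old model, so the two feature spaces stay
    directly comparable.

    new_feats:      (B, D) embeddings from the new model
    labels:         (B,)   integer class ids for the batch
    old_prototypes: (C, D) per-class mean features of the old model
    """
    new_feats = F.normalize(new_feats, dim=1)
    protos = F.normalize(old_prototypes, dim=1)
    # Cosine similarity of each new feature to every old prototype.
    logits = new_feats @ protos.t() / temperature
    # Classify new features against old prototypes: compatibility holds
    # when each sample scores highest on its own class's old prototype.
    return F.cross_entropy(logits, labels)
```

In the paper this is paired with a component-level mutual structural regularization; a plausible reading is an additional term that matches pairwise similarity structure between the two embedding spaces, but the exact formulation is only available via the DOI.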