Dual-Tuning: Joint Prototype Transfer and Structure Regularization for Compatible Feature Learning

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Dual-Tuning: Joint Prototype Transfer and Structure Regularization for Compatible Feature Learning. / Bai, Yan; Jiao, Jile; Lou, Yihang et al.
In: IEEE Transactions on Multimedia, Vol. 25, 31.12.2023, p. 7287-7298.


Harvard

Bai, Y, Jiao, J, Lou, Y, Wu, S, Liu, J, Feng, X & Duan, L-Y 2023, 'Dual-Tuning: Joint Prototype Transfer and Structure Regularization for Compatible Feature Learning', IEEE Transactions on Multimedia, vol. 25, pp. 7287-7298. https://doi.org/10.1109/TMM.2022.3219680

APA

Bai, Y., Jiao, J., Lou, Y., Wu, S., Liu, J., Feng, X., & Duan, L.-Y. (2023). Dual-Tuning: Joint Prototype Transfer and Structure Regularization for Compatible Feature Learning. IEEE Transactions on Multimedia, 25, 7287-7298. https://doi.org/10.1109/TMM.2022.3219680

Vancouver

Bai Y, Jiao J, Lou Y, Wu S, Liu J, Feng X et al. Dual-Tuning: Joint Prototype Transfer and Structure Regularization for Compatible Feature Learning. IEEE Transactions on Multimedia. 2023 Dec 31;25:7287-7298. Epub 2022 Nov 4. doi: 10.1109/TMM.2022.3219680

Author

Bai, Yan ; Jiao, Jile ; Lou, Yihang et al. / Dual-Tuning: Joint Prototype Transfer and Structure Regularization for Compatible Feature Learning. In: IEEE Transactions on Multimedia. 2023 ; Vol. 25. pp. 7287-7298.

Bibtex

@article{9932cb65780143e7a4469f1c4ed579b1,
title = "Dual-Tuning: Joint Prototype Transfer and Structure Regularization for Compatible Feature Learning",
abstract = "Visual retrieval systems face frequent model updates and redeployment. Re-extracting features for the whole database after every update is a heavy workload. Feature compatibility enables newly learned visual features to be directly compared with the old features stored in the database. In this way, when updating the deployed model, we can bypass the inflexible and time-consuming feature re-extraction process. However, the old feature space that must remain compatible is not ideal and contains outlier samples. Moreover, the new and old models may be supervised by different losses, which further causes a distribution discrepancy between the two feature spaces. In this article, we propose a globally optimized Dual-Tuning method to obtain feature compatibility across different networks and losses. A feature-level prototype loss is proposed to explicitly align the two types of embedding features by transferring global prototype information. Furthermore, we design a component-level mutual structural regularization to implicitly optimize the intrinsic feature structure. Experiments are conducted on six datasets, including person ReID datasets, face recognition datasets, and the million-scale ImageNet and Places365. Experimental results demonstrate that our Dual-Tuning obtains feature compatibility without sacrificing performance.",
author = "Yan Bai and Jile Jiao and Yihang Lou and Shengsen Wu and Jun Liu and Xuetao Feng and Ling-Yu Duan",
year = "2023",
month = dec,
day = "31",
doi = "10.1109/TMM.2022.3219680",
language = "English",
volume = "25",
pages = "7287--7298",
journal = "IEEE Transactions on Multimedia",
issn = "1520-9210",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
}
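The abstract above describes pulling new-model embeddings toward prototypes computed in the old feature space so that new queries remain comparable with an already-indexed database. As a minimal illustrative sketch only (not the authors' implementation; the loss form, shapes, and all names below are assumptions), a feature-level prototype loss can be modeled as the mean squared distance from each new embedding to the old-space mean of its class:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the deployed system: an "old" model has already
# embedded the gallery, and each class gets a prototype = mean of its
# old-model features (shapes and class count are illustrative).
num_classes, dim = 5, 8
old_feats = rng.normal(size=(100, dim))
labels = rng.integers(0, num_classes, size=100)
prototypes = np.stack(
    [old_feats[labels == c].mean(axis=0) for c in range(num_classes)]
)

def prototype_loss(new_feats, labels, prototypes):
    """Mean squared distance from each new-model feature to its class
    prototype in the old feature space: minimizing it pulls the new
    embeddings toward the old space so queries stay comparable with
    the already-indexed database."""
    return float(np.mean(np.sum((new_feats - prototypes[labels]) ** 2, axis=1)))

# "New" model features start misaligned; plain gradient steps on the
# prototype loss move them into the old space.
new_feats = rng.normal(size=(100, dim))
loss_before = prototype_loss(new_feats, labels, prototypes)
for _ in range(50):
    grad = 2.0 * (new_feats - prototypes[labels])  # d/dx ||x - p||^2
    new_feats -= 0.1 * grad
loss_after = prototype_loss(new_feats, labels, prototypes)
```

In the paper this alignment is one of two terms (the other being a structural regularization on feature components); the sketch shows only why prototype transfer keeps old and new features directly comparable.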

RIS

TY - JOUR

T1 - Dual-Tuning

T2 - Joint Prototype Transfer and Structure Regularization for Compatible Feature Learning

AU - Bai, Yan

AU - Jiao, Jile

AU - Lou, Yihang

AU - Wu, Shengsen

AU - Liu, Jun

AU - Feng, Xuetao

AU - Duan, Ling-Yu

PY - 2023/12/31

Y1 - 2023/12/31

N2 - Visual retrieval systems face frequent model updates and redeployment. Re-extracting features for the whole database after every update is a heavy workload. Feature compatibility enables newly learned visual features to be directly compared with the old features stored in the database. In this way, when updating the deployed model, we can bypass the inflexible and time-consuming feature re-extraction process. However, the old feature space that must remain compatible is not ideal and contains outlier samples. Moreover, the new and old models may be supervised by different losses, which further causes a distribution discrepancy between the two feature spaces. In this article, we propose a globally optimized Dual-Tuning method to obtain feature compatibility across different networks and losses. A feature-level prototype loss is proposed to explicitly align the two types of embedding features by transferring global prototype information. Furthermore, we design a component-level mutual structural regularization to implicitly optimize the intrinsic feature structure. Experiments are conducted on six datasets, including person ReID datasets, face recognition datasets, and the million-scale ImageNet and Places365. Experimental results demonstrate that our Dual-Tuning obtains feature compatibility without sacrificing performance.

AB - Visual retrieval systems face frequent model updates and redeployment. Re-extracting features for the whole database after every update is a heavy workload. Feature compatibility enables newly learned visual features to be directly compared with the old features stored in the database. In this way, when updating the deployed model, we can bypass the inflexible and time-consuming feature re-extraction process. However, the old feature space that must remain compatible is not ideal and contains outlier samples. Moreover, the new and old models may be supervised by different losses, which further causes a distribution discrepancy between the two feature spaces. In this article, we propose a globally optimized Dual-Tuning method to obtain feature compatibility across different networks and losses. A feature-level prototype loss is proposed to explicitly align the two types of embedding features by transferring global prototype information. Furthermore, we design a component-level mutual structural regularization to implicitly optimize the intrinsic feature structure. Experiments are conducted on six datasets, including person ReID datasets, face recognition datasets, and the million-scale ImageNet and Places365. Experimental results demonstrate that our Dual-Tuning obtains feature compatibility without sacrificing performance.

U2 - 10.1109/TMM.2022.3219680

DO - 10.1109/TMM.2022.3219680

M3 - Journal article

VL - 25

SP - 7287

EP - 7298

JO - IEEE Transactions on Multimedia

JF - IEEE Transactions on Multimedia

SN - 1520-9210

ER -