Final published version
Licence: Other
Research output: Contribution to Journal/Magazine › Journal article › peer-review
TY - JOUR
T1 - Identity adaptation for person re-identification
AU - Ke, Qiuhong
AU - Bennamoun, Mohammed
AU - Rahmani, Hossein
AU - An, Senjian
AU - Sohel, Ferdous
AU - Boussaid, Farid
PY - 2018/8/31
Y1 - 2018/8/31
N2 - Person re-identification (re-ID), which aims to identify the same individual from a gallery collected with different cameras, has attracted increasing attention in the multimedia retrieval community. Current deep learning methods for person re-ID focus on learning classification models on training identities to obtain an ID-discriminative embedding (IDE) extractor, which is used to extract features from testing images for re-ID. The IDE features of the testing identities might not be discriminative, because the training identities differ from the testing identities. In this paper, we introduce a new ID-adaptation network (ID-AdaptNet), which aims to improve the discriminative power of the IDE features of the testing identities for better person re-ID. The main idea of the ID-AdaptNet is to transform the IDE features to a common discriminative latent space, where the representations of the “seen” training identities are enforced to adapt to those of the “unseen” training identities. More specifically, the ID-AdaptNet is trained by simultaneously minimizing the classification cross-entropy and the discrepancy between the “seen” and the “unseen” training identities in the hidden space. To calculate the discrepancy, we represent their probability distributions as moment sequences and calculate their distance using their central moments. We further propose a stacking ID-AdaptNet that jointly trains multiple ID-AdaptNets with a regularization method for better re-ID. Experiments show that the ID-AdaptNet and stacking ID-AdaptNet effectively improve the discriminative power of IDE features.
AB - Person re-identification (re-ID), which aims to identify the same individual from a gallery collected with different cameras, has attracted increasing attention in the multimedia retrieval community. Current deep learning methods for person re-ID focus on learning classification models on training identities to obtain an ID-discriminative embedding (IDE) extractor, which is used to extract features from testing images for re-ID. The IDE features of the testing identities might not be discriminative, because the training identities differ from the testing identities. In this paper, we introduce a new ID-adaptation network (ID-AdaptNet), which aims to improve the discriminative power of the IDE features of the testing identities for better person re-ID. The main idea of the ID-AdaptNet is to transform the IDE features to a common discriminative latent space, where the representations of the “seen” training identities are enforced to adapt to those of the “unseen” training identities. More specifically, the ID-AdaptNet is trained by simultaneously minimizing the classification cross-entropy and the discrepancy between the “seen” and the “unseen” training identities in the hidden space. To calculate the discrepancy, we represent their probability distributions as moment sequences and calculate their distance using their central moments. We further propose a stacking ID-AdaptNet that jointly trains multiple ID-AdaptNets with a regularization method for better re-ID. Experiments show that the ID-AdaptNet and stacking ID-AdaptNet effectively improve the discriminative power of IDE features.
U2 - 10.1109/ACCESS.2018.2867898
DO - 10.1109/ACCESS.2018.2867898
M3 - Journal article
VL - 6
SP - 48147
EP - 48155
JO - IEEE Access
JF - IEEE Access
SN - 2169-3536
ER -