
Electronic data

  • AgNet

    Rights statement: ©2019 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 4.19 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License


Attribute-Guided Network for Cross-Modal Zero-Shot Hashing

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Attribute-Guided Network for Cross-Modal Zero-Shot Hashing. / Ji, Zhong; Sun, Yuxin; Yu, Yunlong et al.
In: IEEE Transactions on Neural Networks and Learning Systems, Vol. 31, No. 1, 01.01.2020, p. 321-330.


Harvard

Ji, Z, Sun, Y, Yu, Y, Pang, Y & Han, J 2020, 'Attribute-Guided Network for Cross-Modal Zero-Shot Hashing', IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 1, pp. 321-330. https://doi.org/10.1109/TNNLS.2019.2904991

APA

Ji, Z., Sun, Y., Yu, Y., Pang, Y., & Han, J. (2020). Attribute-Guided Network for Cross-Modal Zero-Shot Hashing. IEEE Transactions on Neural Networks and Learning Systems, 31(1), 321-330. https://doi.org/10.1109/TNNLS.2019.2904991

Vancouver

Ji Z, Sun Y, Yu Y, Pang Y, Han J. Attribute-Guided Network for Cross-Modal Zero-Shot Hashing. IEEE Transactions on Neural Networks and Learning Systems. 2020 Jan 1;31(1):321-330. Epub 2019 Apr 11. doi: 10.1109/TNNLS.2019.2904991

Author

Ji, Zhong ; Sun, Yuxin ; Yu, Yunlong et al. / Attribute-Guided Network for Cross-Modal Zero-Shot Hashing. In: IEEE Transactions on Neural Networks and Learning Systems. 2020 ; Vol. 31, No. 1. pp. 321-330.

Bibtex

@article{a084f132fdbd43149010e5d29db0aec5,
title = "Attribute-Guided Network for Cross-Modal Zero-Shot Hashing",
abstract = "Zero-shot hashing (ZSH) aims at learning a hashing model that is trained only on instances from seen categories but can generalize well to those of unseen categories. Typically, this is achieved by utilizing a semantic embedding space to transfer knowledge from the seen domain to the unseen domain. Existing efforts mainly focus on single-modal retrieval tasks, especially image-based image retrieval (IBIR). However, as a prominent research topic in the field of hashing, cross-modal retrieval is more common in real-world applications. To address the cross-modal ZSH (CMZSH) retrieval task, we propose a novel attribute-guided network (AgNet), which can perform not only IBIR but also text-based image retrieval (TBIR). In particular, AgNet aligns data from different modalities in a semantically rich attribute space, which bridges the gap caused by modality heterogeneity and the zero-shot setting. We also design an effective strategy that exploits attributes to guide the generation of hash codes for image and text within the same network. Extensive experimental results on three benchmark data sets (AwA, SUN, and ImageNet) demonstrate the superiority of AgNet on both cross-modal and single-modal zero-shot image retrieval tasks.",
author = "Zhong Ji and Yuxin Sun and Yunlong Yu and Yanwei Pang and Jungong Han",
note = "{\textcopyright}2019 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.",
year = "2020",
month = jan,
day = "1",
doi = "10.1109/TNNLS.2019.2904991",
language = "English",
volume = "31",
pages = "321--330",
journal = "IEEE Transactions on Neural Networks and Learning Systems",
issn = "2162-237X",
publisher = "IEEE Computational Intelligence Society",
number = "1",

}

RIS

TY - JOUR

T1 - Attribute-Guided Network for Cross-Modal Zero-Shot Hashing

AU - Ji, Zhong

AU - Sun, Yuxin

AU - Yu, Yunlong

AU - Pang, Yanwei

AU - Han, Jungong

N1 - ©2019 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

PY - 2020/1/1

Y1 - 2020/1/1

N2 - Zero-shot hashing (ZSH) aims at learning a hashing model that is trained only on instances from seen categories but can generalize well to those of unseen categories. Typically, this is achieved by utilizing a semantic embedding space to transfer knowledge from the seen domain to the unseen domain. Existing efforts mainly focus on single-modal retrieval tasks, especially image-based image retrieval (IBIR). However, as a prominent research topic in the field of hashing, cross-modal retrieval is more common in real-world applications. To address the cross-modal ZSH (CMZSH) retrieval task, we propose a novel attribute-guided network (AgNet), which can perform not only IBIR but also text-based image retrieval (TBIR). In particular, AgNet aligns data from different modalities in a semantically rich attribute space, which bridges the gap caused by modality heterogeneity and the zero-shot setting. We also design an effective strategy that exploits attributes to guide the generation of hash codes for image and text within the same network. Extensive experimental results on three benchmark data sets (AwA, SUN, and ImageNet) demonstrate the superiority of AgNet on both cross-modal and single-modal zero-shot image retrieval tasks.

AB - Zero-shot hashing (ZSH) aims at learning a hashing model that is trained only on instances from seen categories but can generalize well to those of unseen categories. Typically, this is achieved by utilizing a semantic embedding space to transfer knowledge from the seen domain to the unseen domain. Existing efforts mainly focus on single-modal retrieval tasks, especially image-based image retrieval (IBIR). However, as a prominent research topic in the field of hashing, cross-modal retrieval is more common in real-world applications. To address the cross-modal ZSH (CMZSH) retrieval task, we propose a novel attribute-guided network (AgNet), which can perform not only IBIR but also text-based image retrieval (TBIR). In particular, AgNet aligns data from different modalities in a semantically rich attribute space, which bridges the gap caused by modality heterogeneity and the zero-shot setting. We also design an effective strategy that exploits attributes to guide the generation of hash codes for image and text within the same network. Extensive experimental results on three benchmark data sets (AwA, SUN, and ImageNet) demonstrate the superiority of AgNet on both cross-modal and single-modal zero-shot image retrieval tasks.

U2 - 10.1109/TNNLS.2019.2904991

DO - 10.1109/TNNLS.2019.2904991

M3 - Journal article

VL - 31

SP - 321

EP - 330

JO - IEEE Transactions on Neural Networks and Learning Systems

JF - IEEE Transactions on Neural Networks and Learning Systems

SN - 2162-237X

IS - 1

ER -
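The abstract describes a pipeline in which image and text features are projected into a shared attribute space and then binarized into hash codes, so that a text query can retrieve images by Hamming distance. The following is a minimal illustrative sketch of that retrieval scheme with random, untrained weights; it is not the authors' AgNet implementation, and all function and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(x, W, b):
    # Nonlinear projection into the shared attribute space
    # (a stand-in for a learned modality-specific network).
    return np.tanh(x @ W + b)

def hash_codes(a, Wh, bh):
    # Binarize attribute-space embeddings into {-1, +1} hash codes.
    return np.sign(np.tanh(a @ Wh + bh))

n_img_feat, n_txt_feat, n_attr, n_bits = 64, 32, 16, 8

# Randomly initialized weights; in a trained model these would be
# learned so that matching image/text pairs map to similar codes.
Wi = rng.standard_normal((n_img_feat, n_attr)); bi = np.zeros(n_attr)
Wt = rng.standard_normal((n_txt_feat, n_attr)); bt = np.zeros(n_attr)
Wh = rng.standard_normal((n_attr, n_bits));     bh = np.zeros(n_bits)

# Toy image database and one text query.
images = rng.standard_normal((100, n_img_feat))
query_txt = rng.standard_normal((1, n_txt_feat))

img_codes = hash_codes(project(images, Wi, bi), Wh, bh)    # shape (100, 8)
txt_code = hash_codes(project(query_txt, Wt, bt), Wh, bh)  # shape (1, 8)

# Text-based image retrieval: rank database images by Hamming distance.
hamming = (img_codes != txt_code).sum(axis=1)
ranking = np.argsort(hamming)  # image indices, nearest first
```

Because both modalities share the same attribute-to-hash layer, their codes live in one Hamming space, which is what makes cross-modal lookups possible.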