Electronic data

  • ijcai2017_submission_xinzhao

    Accepted author manuscript, 1.03 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: https://doi.org/10.24963/ijcai.2017/491

TUCH: Turning Cross-view Hashing into Single-view Hashing via Generative Adversarial Nets

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

TUCH: Turning Cross-view Hashing into Single-view Hashing via Generative Adversarial Nets. / Zhao, Xin; Ding, Guiguang; Guo, Yuchen et al.
Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence. ed. / Carles Sierra. Melbourne: IJCAI, 2017. p. 3511-3517.

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Harvard

Zhao, X, Ding, G, Guo, Y, Han, J & Gao, Y 2017, TUCH: Turning Cross-view Hashing into Single-view Hashing via Generative Adversarial Nets. in C Sierra (ed.), Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence. IJCAI, Melbourne, pp. 3511-3517, IJCAI17, 21/08/17. https://doi.org/10.24963/ijcai.2017/491

APA

Zhao, X., Ding, G., Guo, Y., Han, J., & Gao, Y. (2017). TUCH: Turning Cross-view Hashing into Single-view Hashing via Generative Adversarial Nets. In C. Sierra (Ed.), Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (pp. 3511-3517). IJCAI. https://doi.org/10.24963/ijcai.2017/491

Vancouver

Zhao X, Ding G, Guo Y, Han J, Gao Y. TUCH: Turning Cross-view Hashing into Single-view Hashing via Generative Adversarial Nets. In Sierra C, editor, Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence. Melbourne: IJCAI. 2017. p. 3511-3517. doi: 10.24963/ijcai.2017/491

Author

Zhao, Xin ; Ding, Guiguang ; Guo, Yuchen et al. / TUCH : Turning Cross-view Hashing into Single-view Hashing via Generative Adversarial Nets. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence. editor / Carles Sierra. Melbourne : IJCAI, 2017. pp. 3511-3517

Bibtex

@inproceedings{a90c2b35f51444b0a664495f6be290cd,
title = "TUCH: Turning Cross-view Hashing into Single-view Hashing via Generative Adversarial Nets",
abstract = "Cross-view retrieval, which focuses on searching images as response to text queries or vice versa, has received increasing attention recently. Cross-view hashing is to efficiently solve the cross-view retrieval problem with binary hash codes. Most existing works on cross-view hashing exploit multi-view embedding method to tackle this problem, which inevitably causes the information loss in both image and text domains. Inspired by the Generative Adversarial Nets (GANs), this paper presents a new model that is able to Turn Cross-view Hashing into single-view hashing (TUCH), thus enabling the information of image to be preserved as much as possible. TUCH is a novel deep architecture that integrates a language model network T for text feature extraction, a generator network G to generate fake images from text feature and a hashing network H for learning hashing functions to generate compact binary codes. Our architecture effectively unifies joint generative adversarial learning and cross-view hashing. Extensive empirical evidence shows that our TUCH approach achieves state-of-the-art results, especially on text to image retrieval, based on image-sentences datasets, i.e. standard IAPRTC-12 and large-scale Microsoft COCO.",
author = "Xin Zhao and Guiguang Ding and Yuchen Guo and Jungong Han and Yue Gao",
year = "2017",
month = aug,
day = "19",
doi = "10.24963/ijcai.2017/491",
language = "English",
pages = "3511--3517",
editor = "Carles Sierra",
booktitle = "Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence",
publisher = "IJCAI",
note = "IJCAI17 ; Conference date: 21-08-2017 Through 25-08-2017",

}

RIS

TY - GEN

T1 - TUCH: Turning Cross-view Hashing into Single-view Hashing via Generative Adversarial Nets

T2 - IJCAI17

AU - Zhao, Xin

AU - Ding, Guiguang

AU - Guo, Yuchen

AU - Han, Jungong

AU - Gao, Yue

PY - 2017/8/19

Y1 - 2017/8/19

N2 - Cross-view retrieval, which focuses on searching for images in response to text queries or vice versa, has received increasing attention recently. Cross-view hashing aims to solve the cross-view retrieval problem efficiently with binary hash codes. Most existing works on cross-view hashing exploit multi-view embedding methods to tackle this problem, which inevitably causes information loss in both the image and text domains. Inspired by Generative Adversarial Nets (GANs), this paper presents a new model that is able to Turn Cross-view Hashing into single-view hashing (TUCH), thus enabling the image information to be preserved as much as possible. TUCH is a novel deep architecture that integrates a language model network T for text feature extraction, a generator network G to generate fake images from text features, and a hashing network H that learns hashing functions to generate compact binary codes. Our architecture effectively unifies joint generative adversarial learning and cross-view hashing. Extensive empirical evidence shows that our TUCH approach achieves state-of-the-art results, especially on text-to-image retrieval, on two image-sentence datasets: the standard IAPRTC-12 and the large-scale Microsoft COCO.

AB - Cross-view retrieval, which focuses on searching for images in response to text queries or vice versa, has received increasing attention recently. Cross-view hashing aims to solve the cross-view retrieval problem efficiently with binary hash codes. Most existing works on cross-view hashing exploit multi-view embedding methods to tackle this problem, which inevitably causes information loss in both the image and text domains. Inspired by Generative Adversarial Nets (GANs), this paper presents a new model that is able to Turn Cross-view Hashing into single-view hashing (TUCH), thus enabling the image information to be preserved as much as possible. TUCH is a novel deep architecture that integrates a language model network T for text feature extraction, a generator network G to generate fake images from text features, and a hashing network H that learns hashing functions to generate compact binary codes. Our architecture effectively unifies joint generative adversarial learning and cross-view hashing. Extensive empirical evidence shows that our TUCH approach achieves state-of-the-art results, especially on text-to-image retrieval, on two image-sentence datasets: the standard IAPRTC-12 and the large-scale Microsoft COCO.

U2 - 10.24963/ijcai.2017/491

DO - 10.24963/ijcai.2017/491

M3 - Conference contribution/Paper

SP - 3511

EP - 3517

BT - Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence

A2 - Sierra, Carles

PB - IJCAI

CY - Melbourne

Y2 - 21 August 2017 through 25 August 2017

ER -
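
Architecture sketch

The paper itself ships no code, but the abstract names three networks: a language model network T for text feature extraction, a generator network G that synthesizes fake images from text features, and a hashing network H that maps images to compact binary codes. The following is a minimal PyTorch sketch of that three-network setup. Every concrete choice here (the GRU text encoder, all layer sizes, the 16x16 generated images, the tanh relaxation of the binary codes) is an illustrative assumption, not the authors' implementation.

import torch
import torch.nn as nn

class TextNet(nn.Module):
    """T: text feature extraction (a GRU encoder is an assumption)."""
    def __init__(self, vocab_size=10000, emb=256, feat=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.rnn = nn.GRU(emb, feat, batch_first=True)

    def forward(self, tokens):            # tokens: (batch, seq_len) int64
        _, h = self.rnn(self.embed(tokens))
        return h.squeeze(0)               # (batch, feat)

class Generator(nn.Module):
    """G: text feature -> fake image (tiny DCGAN-style upsampler)."""
    def __init__(self, feat=512, img_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(feat, 128, 4, 1, 0), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, img_ch, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, t_feat):            # (batch, feat) -> (batch, 3, 16, 16)
        return self.net(t_feat[:, :, None, None])

class HashNet(nn.Module):
    """H: image -> k-bit code; tanh is the usual relaxation of sign."""
    def __init__(self, img_ch=3, bits=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, bits),
        )

    def forward(self, img):
        return torch.tanh(self.net(img))  # relaxed codes in (-1, 1)

T, G, H = TextNet(), Generator(), HashNet()
tokens = torch.randint(0, 10000, (4, 12))   # toy batch of 4 tokenized captions
text_codes = torch.sign(H(G(T(tokens))))    # text query -> fake image -> code

This is the sense in which cross-view hashing becomes single-view: a text query is routed T -> G -> H, so both real images and text queries (via their generated images) are hashed by the same single-view network H, and only H's binary codes are compared at retrieval time.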