

Electronic data

  • CSDH_TIP_submit

    Rights statement: ©2017 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 1.15 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License


Sequential Discrete Hashing for Scalable Cross-modality Similarity Retrieval

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Sequential Discrete Hashing for Scalable Cross-modality Similarity Retrieval. / Liu, Li; Lin, Zijia; Shao, Ling et al.
In: IEEE Transactions on Image Processing, Vol. 26, No. 1, 01.2017, p. 107-118.


Harvard

Liu, L, Lin, Z, Shao, L, Shen, F, Ding, G & Han, J 2017, 'Sequential Discrete Hashing for Scalable Cross-modality Similarity Retrieval', IEEE Transactions on Image Processing, vol. 26, no. 1, pp. 107-118.

APA

Liu, L., Lin, Z., Shao, L., Shen, F., Ding, G., & Han, J. (2017). Sequential Discrete Hashing for Scalable Cross-modality Similarity Retrieval. IEEE Transactions on Image Processing, 26(1), 107-118.

Vancouver

Liu L, Lin Z, Shao L, Shen F, Ding G, Han J. Sequential Discrete Hashing for Scalable Cross-modality Similarity Retrieval. IEEE Transactions on Image Processing. 2017 Jan;26(1):107-118. Epub 2016 Oct 20.

Author

Liu, Li ; Lin, Zijia ; Shao, Ling et al. / Sequential Discrete Hashing for Scalable Cross-modality Similarity Retrieval. In: IEEE Transactions on Image Processing. 2017 ; Vol. 26, No. 1. pp. 107-118.

BibTeX

@article{5c84c8e5009044df89742687911c6c5b,
title = "Sequential Discrete Hashing for Scalable Cross-modality Similarity Retrieval",
abstract = "With the dramatic development of the Internet, how to exploit large-scale retrieval techniques for multimodal web data has become one of the most popular but challenging problems in computer vision and multimedia. Recently, hashing methods have been used for fast nearest neighbor search in large-scale data spaces, by embedding high-dimensional feature descriptors into a similarity-preserving Hamming space with a low dimension. Inspired by this, in this paper, we introduce a novel supervised cross-modality hashing framework, which can generate unified binary codes for instances represented in different modalities. Particularly, in the learning phase, each bit of a code can be sequentially learned with a discrete optimization scheme that jointly minimizes its empirical loss based on a boosting strategy. In a bitwise manner, hash functions are then learned for each modality, mapping the corresponding representations into unified hash codes. We regard this approach as cross-modality sequential discrete hashing (CSDH), which can effectively reduce the quantization errors arising in the oversimplified rounding-off step and thus lead to high-quality binary codes. In the test phase, a simple fusion scheme is utilized to generate a unified hash code for final retrieval by merging the predicted hashing results of an unseen instance from different modalities. The proposed CSDH has been systematically evaluated on three standard data sets: Wiki, MIRFlickr, and NUS-WIDE, and the results show that our method significantly outperforms the state-of-the-art multimodality hashing techniques.",
author = "Li Liu and Zijia Lin and Ling Shao and Fumin Shen and Guiguang Ding and Jungong Han",
year = "2017",
month = jan,
language = "English",
volume = "26",
pages = "107--118",
journal = "IEEE Transactions on Image Processing",
issn = "1057-7149",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "1",
}
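The abstract's core retrieval step, comparing compact binary codes by Hamming distance rather than comparing high-dimensional descriptors directly, can be sketched as follows. This is an illustrative example only; the function and variable names are hypothetical and not from the paper:

```python
import numpy as np

def hamming_distances(query_code, db_codes):
    """Hamming distance between one binary code and a database of codes.

    Codes are 0/1 vectors of equal length (one row of db_codes per item).
    The distance is simply the number of bit positions that differ.
    """
    return np.count_nonzero(db_codes != query_code, axis=1)

# Toy example: 8-bit unified codes for 4 database items and one query.
rng = np.random.default_rng(0)
db_codes = rng.integers(0, 2, size=(4, 8))
query_code = rng.integers(0, 2, size=8)

dists = hamming_distances(query_code, db_codes)
ranking = np.argsort(dists)  # indices of database items, nearest first
```

Because Hamming distance reduces to XOR plus a popcount on packed bits, this comparison is far cheaper at scale than Euclidean distance over the original feature descriptors, which is the motivation for hashing-based retrieval given in the abstract.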

RIS

TY - JOUR

T1 - Sequential Discrete Hashing for Scalable Cross-modality Similarity Retrieval

AU - Liu, Li

AU - Lin, Zijia

AU - Shao, Ling

AU - Shen, Fumin

AU - Ding, Guiguang

AU - Han, Jungong

PY - 2017/1

Y1 - 2017/1

N2 - With the dramatic development of the Internet, how to exploit large-scale retrieval techniques for multimodal web data has become one of the most popular but challenging problems in computer vision and multimedia. Recently, hashing methods have been used for fast nearest neighbor search in large-scale data spaces, by embedding high-dimensional feature descriptors into a similarity-preserving Hamming space with a low dimension. Inspired by this, in this paper, we introduce a novel supervised cross-modality hashing framework, which can generate unified binary codes for instances represented in different modalities. Particularly, in the learning phase, each bit of a code can be sequentially learned with a discrete optimization scheme that jointly minimizes its empirical loss based on a boosting strategy. In a bitwise manner, hash functions are then learned for each modality, mapping the corresponding representations into unified hash codes. We regard this approach as cross-modality sequential discrete hashing (CSDH), which can effectively reduce the quantization errors arising in the oversimplified rounding-off step and thus lead to high-quality binary codes. In the test phase, a simple fusion scheme is utilized to generate a unified hash code for final retrieval by merging the predicted hashing results of an unseen instance from different modalities. The proposed CSDH has been systematically evaluated on three standard data sets: Wiki, MIRFlickr, and NUS-WIDE, and the results show that our method significantly outperforms the state-of-the-art multimodality hashing techniques.

AB - With the dramatic development of the Internet, how to exploit large-scale retrieval techniques for multimodal web data has become one of the most popular but challenging problems in computer vision and multimedia. Recently, hashing methods have been used for fast nearest neighbor search in large-scale data spaces, by embedding high-dimensional feature descriptors into a similarity-preserving Hamming space with a low dimension. Inspired by this, in this paper, we introduce a novel supervised cross-modality hashing framework, which can generate unified binary codes for instances represented in different modalities. Particularly, in the learning phase, each bit of a code can be sequentially learned with a discrete optimization scheme that jointly minimizes its empirical loss based on a boosting strategy. In a bitwise manner, hash functions are then learned for each modality, mapping the corresponding representations into unified hash codes. We regard this approach as cross-modality sequential discrete hashing (CSDH), which can effectively reduce the quantization errors arising in the oversimplified rounding-off step and thus lead to high-quality binary codes. In the test phase, a simple fusion scheme is utilized to generate a unified hash code for final retrieval by merging the predicted hashing results of an unseen instance from different modalities. The proposed CSDH has been systematically evaluated on three standard data sets: Wiki, MIRFlickr, and NUS-WIDE, and the results show that our method significantly outperforms the state-of-the-art multimodality hashing techniques.

M3 - Journal article

VL - 26

SP - 107

EP - 118

JO - IEEE Transactions on Image Processing

JF - IEEE Transactions on Image Processing

SN - 1057-7149

IS - 1

ER -
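The test-phase fusion the abstract mentions merges each modality's predicted hashing result for an unseen instance into one unified code. The paper's exact fusion rule is not given in this record; the sketch below assumes a simple sum-and-threshold rule over hypothetical real-valued per-bit scores, purely to illustrate the idea:

```python
import numpy as np

def fuse_codes(scores_by_modality):
    """Fuse per-modality bitwise scores into one unified binary code.

    Each entry is a real-valued score vector (one score per bit) predicted
    by that modality's hash functions. Summing the scores and taking the
    sign is one simple fusion rule; it is a hypothetical stand-in, and the
    paper's actual scheme may differ.
    """
    total = np.sum(scores_by_modality, axis=0)
    return (total >= 0).astype(int)  # bit is 1 where the summed score is non-negative

image_scores = np.array([0.8, -0.3, 0.1, -0.9])    # e.g. scores from an image modality
text_scores = np.array([0.2, -0.4, -0.5, 0.6])     # e.g. scores from a text modality
unified = fuse_codes([image_scores, text_scores])  # -> array([1, 0, 0, 0])
```

The appeal of a unified code is that, after fusion, a single Hamming-distance lookup serves queries regardless of which modalities the query instance was observed in.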