
Electronic data

  • TIP_author accepted manuscript

    Rights statement: ©2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 3.51 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: https://doi.org/10.1109/TIP.2018.2882155


Unsupervised Deep Video Hashing via Balanced Code for Large-Scale Video Retrieval

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Unsupervised Deep Video Hashing via Balanced Code for Large-Scale Video Retrieval. / Wu, G.; Han, Jungong; Guo, Y. et al.
In: IEEE Transactions on Image Processing, Vol. 28, No. 4, 04.2019, p. 1993-2007.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Harvard

Wu, G, Han, J, Guo, Y, Liu, L, Ding, G, Ni, Q & Shao, L 2019, 'Unsupervised Deep Video Hashing via Balanced Code for Large-Scale Video Retrieval', IEEE Transactions on Image Processing, vol. 28, no. 4, pp. 1993-2007. https://doi.org/10.1109/TIP.2018.2882155

APA

Wu, G., Han, J., Guo, Y., Liu, L., Ding, G., Ni, Q., & Shao, L. (2019). Unsupervised Deep Video Hashing via Balanced Code for Large-Scale Video Retrieval. IEEE Transactions on Image Processing, 28(4), 1993-2007. https://doi.org/10.1109/TIP.2018.2882155

Vancouver

Wu G, Han J, Guo Y, Liu L, Ding G, Ni Q et al. Unsupervised Deep Video Hashing via Balanced Code for Large-Scale Video Retrieval. IEEE Transactions on Image Processing. 2019 Apr;28(4):1993-2007. Epub 2018 Nov 19. doi: 10.1109/TIP.2018.2882155

Author

Wu, G. ; Han, Jungong ; Guo, Y. et al. / Unsupervised Deep Video Hashing via Balanced Code for Large-Scale Video Retrieval. In: IEEE Transactions on Image Processing. 2019 ; Vol. 28, No. 4. pp. 1993-2007.

Bibtex

@article{07f3a0bce1fc4f12ba7f820b78e5b200,
title = "Unsupervised Deep Video Hashing via Balanced Code for Large-Scale Video Retrieval",
abstract = "This paper proposes a deep hashing framework, namely, unsupervised deep video hashing (UDVH), for large-scale video similarity search with the aim to learn compact yet effective binary codes. Our UDVH produces the hash codes in a self-taught manner by jointly integrating discriminative video representation with optimal code learning, where an efficient alternating approach is adopted to optimize the objective function. The key differences from most existing video hashing methods lie in: 1) UDVH is an unsupervised hashing method that generates hash codes by cooperatively utilizing feature clustering and a specifically designed binarization with the original neighborhood structure preserved in the binary space and 2) a specific rotation is developed and applied onto video features such that the variance of each dimension can be balanced, thus facilitating the subsequent quantization step. Extensive experiments performed on three popular video datasets show that the UDVH is overwhelmingly better than the state of the arts in terms of various evaluation metrics, which makes it practical in real-world applications. {\textcopyright} 1992-2012 IEEE.",
keywords = "balanced rotation, deep learning, feature representation, similarity retrieval, Video hashing, Codes (symbols), Hash functions, Optimal systems, Feature clustering, Feature representation, Neighborhood structure, Objective functions, Similarity retrieval, Video representations, Video similarity search, Deep learning",
author = "G. Wu and Jungong Han and Y. Guo and L. Liu and Guiguang Ding and Q. Ni and L. Shao",
note = "{\textcopyright}2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.",
year = "2019",
month = apr,
doi = "10.1109/TIP.2018.2882155",
language = "English",
volume = "28",
pages = "1993--2007",
journal = "IEEE Transactions on Image Processing",
issn = "1057-7149",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "4",

}
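The abstract describes a rotate-then-binarize pipeline: an orthogonal rotation is applied to real-valued video features so that variance is balanced across dimensions, and the rotated features are then quantized to binary codes. The paper's specific balanced rotation is not reproduced here; as a rough, hedged sketch of the same pipeline, the snippet below implements the classic ITQ-style alternating optimization (sign step plus orthogonal Procrustes step) that this family of methods builds on. All names and parameters are illustrative, not from the paper.

```python
import numpy as np

def itq_rotation(X, n_iter=50, seed=0):
    """Learn an orthogonal rotation R that reduces the quantization loss
    ||sign(XR) - XR||_F^2 (ITQ-style, Gong & Lazebnik).

    X: (n, k) zero-centered real-valued features (e.g. deep features
    after dimensionality reduction). Returns a (k, k) rotation matrix.
    """
    rng = np.random.default_rng(seed)
    k = X.shape[1]
    # Random orthogonal initialization via QR decomposition.
    R, _ = np.linalg.qr(rng.standard_normal((k, k)))
    for _ in range(n_iter):
        # Fix R, update the binary codes by thresholding at zero.
        B = np.sign(X @ R)
        # Fix B, update R by solving the orthogonal Procrustes problem:
        # minimize ||B - XR||_F over orthogonal R via SVD of X^T B.
        U, _, Vt = np.linalg.svd(X.T @ B)
        R = U @ Vt
    return R

# Toy usage: 100 synthetic 16-dim features -> 16-bit codes in {-1, +1}.
X = np.random.default_rng(1).standard_normal((100, 16))
X -= X.mean(axis=0)           # ITQ assumes zero-centered data
R = itq_rotation(X)
codes = np.sign(X @ R)        # final binary hash codes
```

This illustrates only the quantization stage; UDVH additionally couples it with feature clustering and self-taught code learning in an alternating optimization, as the abstract notes.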

RIS

TY - JOUR

T1 - Unsupervised Deep Video Hashing via Balanced Code for Large-Scale Video Retrieval

AU - Wu, G.

AU - Han, Jungong

AU - Guo, Y.

AU - Liu, L.

AU - Ding, Guiguang

AU - Ni, Q.

AU - Shao, L.

N1 - ©2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

PY - 2019/4

Y1 - 2019/4

N2 - This paper proposes a deep hashing framework, namely, unsupervised deep video hashing (UDVH), for large-scale video similarity search with the aim to learn compact yet effective binary codes. Our UDVH produces the hash codes in a self-taught manner by jointly integrating discriminative video representation with optimal code learning, where an efficient alternating approach is adopted to optimize the objective function. The key differences from most existing video hashing methods lie in: 1) UDVH is an unsupervised hashing method that generates hash codes by cooperatively utilizing feature clustering and a specifically designed binarization with the original neighborhood structure preserved in the binary space and 2) a specific rotation is developed and applied onto video features such that the variance of each dimension can be balanced, thus facilitating the subsequent quantization step. Extensive experiments performed on three popular video datasets show that the UDVH is overwhelmingly better than the state of the arts in terms of various evaluation metrics, which makes it practical in real-world applications. © 1992-2012 IEEE.

AB - This paper proposes a deep hashing framework, namely, unsupervised deep video hashing (UDVH), for large-scale video similarity search with the aim to learn compact yet effective binary codes. Our UDVH produces the hash codes in a self-taught manner by jointly integrating discriminative video representation with optimal code learning, where an efficient alternating approach is adopted to optimize the objective function. The key differences from most existing video hashing methods lie in: 1) UDVH is an unsupervised hashing method that generates hash codes by cooperatively utilizing feature clustering and a specifically designed binarization with the original neighborhood structure preserved in the binary space and 2) a specific rotation is developed and applied onto video features such that the variance of each dimension can be balanced, thus facilitating the subsequent quantization step. Extensive experiments performed on three popular video datasets show that the UDVH is overwhelmingly better than the state of the arts in terms of various evaluation metrics, which makes it practical in real-world applications. © 1992-2012 IEEE.

KW - balanced rotation

KW - deep learning

KW - feature representation

KW - similarity retrieval

KW - Video hashing

KW - Codes (symbols)

KW - Hash functions

KW - Optimal systems

KW - Feature clustering

KW - Feature representation

KW - Neighborhood structure

KW - Objective functions

KW - Similarity retrieval

KW - Video representations

KW - Video similarity search

KW - Deep learning

U2 - 10.1109/TIP.2018.2882155

DO - 10.1109/TIP.2018.2882155

M3 - Journal article

VL - 28

SP - 1993

EP - 2007

JO - IEEE Transactions on Image Processing

JF - IEEE Transactions on Image Processing

SN - 1057-7149

IS - 4

ER -