
Unsupervised Deep Video Hashing with Balanced Rotation

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Unsupervised Deep Video Hashing with Balanced Rotation. / Wu, Gengshen; Liu, Li; Guo, Yuchen et al.
Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence. ed. / Carles Sierra. Melbourne: IJCAI, 2017. p. 3076-3082.

Harvard

Wu, G, Liu, L, Guo, Y, Ding, G, Han, J, Shen, J & Shao, L 2017, Unsupervised Deep Video Hashing with Balanced Rotation. in C Sierra (ed.), Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence. IJCAI, Melbourne, pp. 3076-3082, IJCAI17, 21/08/17. https://doi.org/10.24963/ijcai.2017/429

APA

Wu, G., Liu, L., Guo, Y., Ding, G., Han, J., Shen, J., & Shao, L. (2017). Unsupervised Deep Video Hashing with Balanced Rotation. In C. Sierra (Ed.), Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (pp. 3076-3082). IJCAI. https://doi.org/10.24963/ijcai.2017/429

Vancouver

Wu G, Liu L, Guo Y, Ding G, Han J, Shen J et al. Unsupervised Deep Video Hashing with Balanced Rotation. In Sierra C, editor, Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence. Melbourne: IJCAI. 2017. p. 3076-3082 doi: 10.24963/ijcai.2017/429

Author

Wu, Gengshen ; Liu, Li ; Guo, Yuchen et al. / Unsupervised Deep Video Hashing with Balanced Rotation. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence. editor / Carles Sierra. Melbourne : IJCAI, 2017. pp. 3076-3082

Bibtex

@inproceedings{884161fe380340bd80c3a36490fb2668,
title = "Unsupervised Deep Video Hashing with Balanced Rotation",
abstract = "Recently, hashing video contents for fast retrieval has received increasing attention due to the enormous growth of online videos. As the extension of image hashing techniques, traditional video hashing methods mainly focus on seeking the appropriate video features but pay little attention to how the video-specific features can be leveraged to achieve optimal binarization. In this paper, an end-to-end hashing framework, namely Unsupervised Deep Video Hashing (UDVH), is proposed, where feature extraction, balanced code learning and hash function learning are integrated and optimized in a self-taught manner. Particularly, distinguished from previous work, our framework enjoys two novelties: 1) an unsupervised hashing method that integrates the feature clustering and feature binarization, enabling the neighborhood structure to be preserved in the binary space; 2) a smart rotation applied to the video-specific features that are widely spread in the low-dimensional space such that the variance of dimensions can be balanced, thus generating more effective hash codes. Extensive experiments have been performed on two real-world datasets and the results demonstrate its superiority, compared to the state-of-the-art video hashing methods. To bootstrap further developments, the source code will be made publically available.",
author = "Gengshen Wu and Li Liu and Yuchen Guo and Guiguang Ding and Jungong Han and Jialie Shen and Ling Shao",
year = "2017",
month = aug,
day = "19",
doi = "10.24963/ijcai.2017/429",
language = "English",
pages = "3076--3082",
editor = "Carles Sierra",
booktitle = "Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence",
publisher = "IJCAI",
note = "IJCAI17 ; Conference date: 21-08-2017 Through 25-08-2017",

}
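
Illustrative sketch

The abstract above describes learning an orthogonal rotation of the low-dimensional video features so that variance is balanced across dimensions before sign-based binarization. The paper's own Balanced Rotation algorithm is not reproduced here; purely as a rough illustration, the sketch below uses an ITQ-style alternating optimization (a related but different technique) that learns an orthogonal rotation R minimizing the quantization loss of sign(). The function name learn_rotation, the toy data, and all parameter values are illustrative assumptions, not part of the published work.

# Hypothetical sketch, NOT the authors' Balanced Rotation algorithm:
# an ITQ-style alternating optimization over an orthogonal rotation R.
import numpy as np

def learn_rotation(V, n_iter=50, seed=0):
    """Learn an orthogonal rotation R for zero-centered features V (n x k),
    then binarize the rotated features with sign()."""
    rng = np.random.default_rng(seed)
    k = V.shape[1]
    # Initialize R as a random orthogonal matrix (QR of a Gaussian matrix).
    R, _ = np.linalg.qr(rng.standard_normal((k, k)))
    for _ in range(n_iter):
        # Fix R, update the binary codes by element-wise sign.
        B = np.sign(V @ R)
        B[B == 0] = 1.0
        # Fix B, update R via the orthogonal Procrustes solution to
        # min_R ||B - V R||_F  subject to  R^T R = I.
        U, _, Vt = np.linalg.svd(B.T @ V)
        R = (U @ Vt).T
    return R, np.sign(V @ R)

# Toy usage: 1000 synthetic "video" features in a 16-dimensional space.
V = np.random.default_rng(1).standard_normal((1000, 16))
V -= V.mean(axis=0)             # zero-center before rotating
R, B = learn_rotation(V)
print(B.shape, np.unique(B))    # (1000, 16) [-1.  1.]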

RIS

TY - GEN

T1 - Unsupervised Deep Video Hashing with Balanced Rotation

AU - Wu, Gengshen

AU - Liu, Li

AU - Guo, Yuchen

AU - Ding, Guiguang

AU - Han, Jungong

AU - Shen, Jialie

AU - Shao, Ling

PY - 2017/8/19

Y1 - 2017/8/19

N2 - Recently, hashing video contents for fast retrieval has received increasing attention due to the enormous growth of online videos. As the extension of image hashing techniques, traditional video hashing methods mainly focus on seeking the appropriate video features but pay little attention to how the video-specific features can be leveraged to achieve optimal binarization. In this paper, an end-to-end hashing framework, namely Unsupervised Deep Video Hashing (UDVH), is proposed, where feature extraction, balanced code learning and hash function learning are integrated and optimized in a self-taught manner. Particularly, distinguished from previous work, our framework enjoys two novelties: 1) an unsupervised hashing method that integrates the feature clustering and feature binarization, enabling the neighborhood structure to be preserved in the binary space; 2) a smart rotation applied to the video-specific features that are widely spread in the low-dimensional space such that the variance of dimensions can be balanced, thus generating more effective hash codes. Extensive experiments have been performed on two real-world datasets and the results demonstrate its superiority, compared to the state-of-the-art video hashing methods. To bootstrap further developments, the source code will be made publicly available.

AB - Recently, hashing video contents for fast retrieval has received increasing attention due to the enormous growth of online videos. As the extension of image hashing techniques, traditional video hashing methods mainly focus on seeking the appropriate video features but pay little attention to how the video-specific features can be leveraged to achieve optimal binarization. In this paper, an end-to-end hashing framework, namely Unsupervised Deep Video Hashing (UDVH), is proposed, where feature extraction, balanced code learning and hash function learning are integrated and optimized in a self-taught manner. Particularly, distinguished from previous work, our framework enjoys two novelties: 1) an unsupervised hashing method that integrates the feature clustering and feature binarization, enabling the neighborhood structure to be preserved in the binary space; 2) a smart rotation applied to the video-specific features that are widely spread in the low-dimensional space such that the variance of dimensions can be balanced, thus generating more effective hash codes. Extensive experiments have been performed on two real-world datasets and the results demonstrate its superiority, compared to the state-of-the-art video hashing methods. To bootstrap further developments, the source code will be made publicly available.

U2 - 10.24963/ijcai.2017/429

DO - 10.24963/ijcai.2017/429

M3 - Conference contribution/Paper

SP - 3076

EP - 3082

BT - Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence

A2 - Sierra, Carles

PB - IJCAI

CY - Melbourne

T2 - IJCAI17

Y2 - 21 August 2017 through 25 August 2017

ER -