Zero-shot Learning with Transferred Samples

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Zero-shot Learning with Transferred Samples. / Guo, Yuchen; Ding, Guiguang; Han, Jungong et al.
In: IEEE Transactions on Image Processing, Vol. 26, No. 7, 07.2017, p. 3277-3290.

Harvard

Guo, Y, Ding, G, Han, J & Gao, Y 2017, 'Zero-shot Learning with Transferred Samples', IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3277-3290. https://doi.org/10.1109/TIP.2017.2696747

APA

Guo, Y., Ding, G., Han, J., & Gao, Y. (2017). Zero-shot Learning with Transferred Samples. IEEE Transactions on Image Processing, 26(7), 3277-3290. https://doi.org/10.1109/TIP.2017.2696747

Vancouver

Guo Y, Ding G, Han J, Gao Y. Zero-shot Learning with Transferred Samples. IEEE Transactions on Image Processing. 2017 Jul;26(7):3277-3290. Epub 2017 Apr 24. doi: 10.1109/TIP.2017.2696747

Author

Guo, Yuchen ; Ding, Guiguang ; Han, Jungong et al. / Zero-shot Learning with Transferred Samples. In: IEEE Transactions on Image Processing. 2017 ; Vol. 26, No. 7. pp. 3277-3290.

Bibtex

@article{892c8586a8b049749b9e4f7c7a1c434d,
title = "Zero-shot Learning with Transferred Samples",
abstract = "By transferring knowledge from the abundant labeled samples of known source classes, zero-shot learning (ZSL) makes it possible to train recognition models for novel target classes that have no labeled samples. Conventional ZSL approaches usually adopt a two-step recognition strategy, in which the test sample is projected into an intermediary space in the first step, and then the recognition is carried out by considering the similarity between the sample and target classes in the intermediary space. Due to this redundant intermediate transformation, information loss is unavoidable, thus degrading the performance of the overall system. Rather than adopting this two-step strategy, in this paper, we propose a novel one-step recognition framework that is able to perform recognition in the original feature space by using directly trained classifiers. To address the lack of labeled samples for training supervised classifiers for the target classes, we propose to transfer samples from source classes with pseudo labels assigned, in which the transferred samples are selected based on their transferability and diversity. Moreover, to account for the unreliability of pseudo labels of transferred samples, we modify the standard support vector machine formulation such that the unreliable positive samples can be recognized and suppressed in the training phase. The entire framework is fairly general with the possibility of further extensions to several common ZSL settings. Extensive experiments on four benchmark data sets demonstrate the superiority of the proposed framework, compared with the state-of-the-art approaches, in various settings.",
author = "Yuchen Guo and Guiguang Ding and Jungong Han and Yue Gao",
year = "2017",
month = jul,
doi = "10.1109/TIP.2017.2696747",
language = "English",
volume = "26",
pages = "3277--3290",
journal = "IEEE Transactions on Image Processing",
issn = "1057-7149",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "7",
}

RIS

TY  - JOUR
T1  - Zero-shot Learning with Transferred Samples
AU  - Guo, Yuchen
AU  - Ding, Guiguang
AU  - Han, Jungong
AU  - Gao, Yue
PY  - 2017/7
Y1  - 2017/7
N2  - By transferring knowledge from the abundant labeled samples of known source classes, zero-shot learning (ZSL) makes it possible to train recognition models for novel target classes that have no labeled samples. Conventional ZSL approaches usually adopt a two-step recognition strategy, in which the test sample is projected into an intermediary space in the first step, and then the recognition is carried out by considering the similarity between the sample and target classes in the intermediary space. Due to this redundant intermediate transformation, information loss is unavoidable, thus degrading the performance of the overall system. Rather than adopting this two-step strategy, in this paper, we propose a novel one-step recognition framework that is able to perform recognition in the original feature space by using directly trained classifiers. To address the lack of labeled samples for training supervised classifiers for the target classes, we propose to transfer samples from source classes with pseudo labels assigned, in which the transferred samples are selected based on their transferability and diversity. Moreover, to account for the unreliability of pseudo labels of transferred samples, we modify the standard support vector machine formulation such that the unreliable positive samples can be recognized and suppressed in the training phase. The entire framework is fairly general with the possibility of further extensions to several common ZSL settings. Extensive experiments on four benchmark data sets demonstrate the superiority of the proposed framework, compared with the state-of-the-art approaches, in various settings.
AB  - By transferring knowledge from the abundant labeled samples of known source classes, zero-shot learning (ZSL) makes it possible to train recognition models for novel target classes that have no labeled samples. Conventional ZSL approaches usually adopt a two-step recognition strategy, in which the test sample is projected into an intermediary space in the first step, and then the recognition is carried out by considering the similarity between the sample and target classes in the intermediary space. Due to this redundant intermediate transformation, information loss is unavoidable, thus degrading the performance of the overall system. Rather than adopting this two-step strategy, in this paper, we propose a novel one-step recognition framework that is able to perform recognition in the original feature space by using directly trained classifiers. To address the lack of labeled samples for training supervised classifiers for the target classes, we propose to transfer samples from source classes with pseudo labels assigned, in which the transferred samples are selected based on their transferability and diversity. Moreover, to account for the unreliability of pseudo labels of transferred samples, we modify the standard support vector machine formulation such that the unreliable positive samples can be recognized and suppressed in the training phase. The entire framework is fairly general with the possibility of further extensions to several common ZSL settings. Extensive experiments on four benchmark data sets demonstrate the superiority of the proposed framework, compared with the state-of-the-art approaches, in various settings.
U2  - 10.1109/TIP.2017.2696747
DO  - 10.1109/TIP.2017.2696747
M3  - Journal article
VL  - 26
SP  - 3277
EP  - 3290
JO  - IEEE Transactions on Image Processing
JF  - IEEE Transactions on Image Processing
SN  - 1057-7149
IS  - 7
ER  -
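
The abstract describes training per-class classifiers on samples transferred from source classes with pseudo labels, while suppressing unreliable positives via a modified SVM formulation. The toy sketch below is NOT the authors' implementation; it only illustrates the general idea under an assumed setup, approximating the modified objective by scaling each sample's hinge-loss term with a hypothetical reliability weight.

```python
# Hypothetical sketch: reliability-weighted linear SVM trained by
# subgradient descent on pseudo-labeled "transferred" samples.
# All data, weights, and hyperparameters here are illustrative assumptions.
import random

random.seed(0)

DIM = 4  # assumed feature dimensionality for the toy data

def make_sample(center, spread=0.5):
    """Draw one synthetic feature vector around a class center."""
    return [center + random.uniform(-spread, spread) for _ in range(DIM)]

# Pseudo-labeled positives for the target class, trusted negatives
# drawn from other source classes.
X = [make_sample(1.0) for _ in range(40)] + [make_sample(-1.0) for _ in range(60)]
y = [1] * 40 + [-1] * 60

# Assumed reliability scores in (0, 1] for the pseudo-labeled positives
# (e.g. derived from some transferability measure); negatives fully trusted.
w_rel = [random.uniform(0.3, 1.0) for _ in range(40)] + [1.0] * 60

def train(X, y, w_rel, lam=0.01, lr=0.1, epochs=200):
    """Minimize sum_i r_i * hinge(y_i * <w, x_i>) + lam * ||w||^2."""
    w = [0.0] * DIM
    for _ in range(epochs):
        for xi, yi, ri in zip(X, y, w_rel):
            margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
            for j in range(DIM):
                grad = 2 * lam * w[j]
                if margin < 1:               # hinge loss is active
                    grad -= ri * yi * xi[j]  # reliability scales the update
                w[j] -= lr * grad
    return w

w = train(X, y, w_rel)

def predict(w, x):
    """Classify directly in the original feature space (one-step)."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

acc = sum(predict(w, xi) == yi for xi, yi in zip(X, y)) / len(X)
print(round(acc, 2))
```

Down-weighting a sample's hinge term softens its influence on the decision boundary, which is a crude stand-in for the paper's mechanism of recognizing and suppressing unreliable positives during training; the published formulation should be consulted for the actual objective.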