

From Zero-shot Learning to Conventional Supervised Classification: Unseen Visual Data Synthesis

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

From Zero-shot Learning to Conventional Supervised Classification: Unseen Visual Data Synthesis. / Long, Yang; Liu, Li; Shao, Ling et al.
CVPR 2017. Computer Vision Foundation, 2017. p. 1627-1636.


Harvard

Long, Y, Liu, L, Shao, L, Shen, F, Ding, G & Han, J 2017, From Zero-shot Learning to Conventional Supervised Classification: Unseen Visual Data Synthesis. in CVPR 2017. Computer Vision Foundation, pp. 1627-1636.

APA

Long, Y., Liu, L., Shao, L., Shen, F., Ding, G., & Han, J. (2017). From Zero-shot Learning to Conventional Supervised Classification: Unseen Visual Data Synthesis. In CVPR 2017 (pp. 1627-1636). Computer Vision Foundation.

Vancouver

Long Y, Liu L, Shao L, Shen F, Ding G, Han J. From Zero-shot Learning to Conventional Supervised Classification: Unseen Visual Data Synthesis. In CVPR 2017. Computer Vision Foundation. 2017. p. 1627-1636

Author

Long, Yang ; Liu, Li ; Shao, Ling et al. / From Zero-shot Learning to Conventional Supervised Classification: Unseen Visual Data Synthesis. CVPR 2017. Computer Vision Foundation, 2017. pp. 1627-1636

Bibtex

@inproceedings{78c31835b2524576aff9b9f160490efe,
title = "From Zero-shot Learning to Conventional Supervised Classification: Unseen Visual Data Synthesis",
abstract = "Robust object recognition systems usually rely on powerful feature extraction mechanisms from a large number of real images. However, in many realistic applications, collecting sufficient images for ever-growing new classes is unattainable. In this paper, we propose a new Zero-shot learning (ZSL) framework that can synthesise visual features for unseen classes without acquiring real images. Using the proposed Unseen Visual Data Synthesis (UVDS) algorithm, semantic attributes are effectively utilised as an intermediate clue to synthesise unseen visual features at the training stage. Hereafter, ZSL recognition is converted into the conventional supervised problem, i.e. the synthesised visual features can be straightforwardly fed to typical classifiers such as SVM. On four benchmark datasets, we demonstrate the benefit of using synthesised unseen data. Extensive experimental results suggest that our proposed approach significantly improves the state-of-the-art results.",
author = "Yang Long and Li Liu and Ling Shao and Fumin Shen and Guiguang Ding and Jungong Han",
year = "2017",
month = jul,
day = "22",
language = "English",
pages = "1627--1636",
booktitle = "CVPR 2017",
publisher = "Computer Vision Foundation",
note = "CVPR17 ; Conference date: 24-07-2017 Through 28-07-2017",

}
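
To make the pipeline in the abstract concrete, here is a minimal, hypothetical sketch in Python. It substitutes a plain ridge regression for the paper's UVDS objective and runs on random toy data, so everything below (the fake_images helper, the dimensions, the noise scales) is an illustrative assumption rather than the published method. Only the overall recipe, synthesising unseen-class features from class attributes and then training a standard SVM on them, follows the abstract.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy setup: 5 seen + 2 unseen classes, 85-dim attributes, 64-dim visual features.
n_seen, n_unseen, d_attr, d_feat = 5, 2, 85, 64
attrs = rng.random((n_seen + n_unseen, d_attr))  # one attribute vector per class
W_true = rng.normal(size=(d_attr, d_feat))       # hidden attribute-to-feature relation

def fake_images(class_ids, n_per_class):
    # Simulate visual features extracted from images of the given classes.
    labels = np.repeat(class_ids, n_per_class)
    feats = attrs[labels] @ W_true + 0.1 * rng.normal(size=(labels.size, d_feat))
    return feats, labels

# Training images exist for the seen classes only.
seen_feats, seen_labels = fake_images(np.arange(n_seen), 100)

# Step 1: learn an attribute-to-visual-feature mapping on the seen classes
# (plain ridge regression here; the paper's UVDS objective is more elaborate).
mapper = Ridge(alpha=1.0).fit(attrs[seen_labels], seen_feats)

# Step 2: synthesise visual features for the unseen classes from attributes
# alone, with a little noise standing in for intra-class variation.
unseen_ids = np.arange(n_seen, n_seen + n_unseen)
synth_feats = np.repeat(mapper.predict(attrs[unseen_ids]), 50, axis=0)
synth_feats += 0.05 * rng.normal(size=synth_feats.shape)
synth_labels = np.repeat(unseen_ids, 50)

# Step 3: ZSL is now a conventional supervised problem; fit an SVM on the
# synthesised data and classify real features from the unseen classes.
clf = SVC().fit(synth_feats, synth_labels)
test_feats, test_labels = fake_images(unseen_ids, 20)
print("unseen-class accuracy:", (clf.predict(test_feats) == test_labels).mean())

In this toy setting the ridge map recovers the hidden attribute-to-feature relation well enough for the SVM to separate the unseen classes; the paper's contribution lies in making the synthesis step work on real image features and attribute annotations.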

RIS

TY - GEN
T1 - From Zero-shot Learning to Conventional Supervised Classification: Unseen Visual Data Synthesis
AU - Long, Yang
AU - Liu, Li
AU - Shao, Ling
AU - Shen, Fumin
AU - Ding, Guiguang
AU - Han, Jungong
PY - 2017/7/22
Y1 - 2017/7/22
N2 - Robust object recognition systems usually rely on powerful feature extraction mechanisms from a large number of real images. However, in many realistic applications, collecting sufficient images for ever-growing new classes is unattainable. In this paper, we propose a new Zero-shot learning (ZSL) framework that can synthesise visual features for unseen classes without acquiring real images. Using the proposed Unseen Visual Data Synthesis (UVDS) algorithm, semantic attributes are effectively utilised as an intermediate clue to synthesise unseen visual features at the training stage. Hereafter, ZSL recognition is converted into the conventional supervised problem, i.e. the synthesised visual features can be straightforwardly fed to typical classifiers such as SVM. On four benchmark datasets, we demonstrate the benefit of using synthesised unseen data. Extensive experimental results suggest that our proposed approach significantly improves the state-of-the-art results.
AB - Robust object recognition systems usually rely on powerful feature extraction mechanisms from a large number of real images. However, in many realistic applications, collecting sufficient images for ever-growing new classes is unattainable. In this paper, we propose a new Zero-shot learning (ZSL) framework that can synthesise visual features for unseen classes without acquiring real images. Using the proposed Unseen Visual Data Synthesis (UVDS) algorithm, semantic attributes are effectively utilised as an intermediate clue to synthesise unseen visual features at the training stage. Hereafter, ZSL recognition is converted into the conventional supervised problem, i.e. the synthesised visual features can be straightforwardly fed to typical classifiers such as SVM. On four benchmark datasets, we demonstrate the benefit of using synthesised unseen data. Extensive experimental results suggest that our proposed approach significantly improves the state-of-the-art results.
M3 - Conference contribution/Paper
SP - 1627
EP - 1636
BT - CVPR 2017
PB - Computer Vision Foundation
T2 - CVPR17
Y2 - 24 July 2017 through 28 July 2017
ER -