
Electronic data

  • NEUCOM-D-18-01958.R1

    Rights statement: This is the author’s version of a work that was accepted for publication in Neurocomputing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Neurocomputing, 329, 2019 DOI: 10.1016/j.neucom.2018.10.069

    Accepted author manuscript, 2.09 MB, PDF document

    Available under license: CC BY-NC-ND

Links

Text available via DOI: 10.1016/j.neucom.2018.10.069


Class-specific synthesized dictionary model for Zero-Shot Learning

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published
  • Z. Ji
  • J. Wang
  • Y. Yu
  • Y. Pang
  • J. Han
Journal publication date: 15/02/2019
Journal: Neurocomputing
Volume: 329
Number of pages: 9
Pages (from-to): 339-347
Publication status: Published
Early online date: 5/11/2018
Original language: English

Abstract

Zero-Shot Learning (ZSL) aims at recognizing unseen classes that are absent during the training stage. Unlike existing approaches that learn a visual-semantic embedding model to bridge the low-level visual space and the high-level class prototype space, we propose a novel synthesis-based approach that addresses ZSL within a dictionary learning framework. Specifically, it learns both a dictionary matrix and a class-specific encoding matrix for each seen class, and uses them, together with the seen class prototypes, to synthesize pseudo instances for the unseen classes. This allows us to train classifiers for the unseen classes on these pseudo instances, so that ZSL can be treated as a traditional classification task and the method applies to both the conventional and generalized ZSL settings. Extensive experimental results on four benchmark datasets (AwA, CUB, aPY, and SUN) demonstrate that our method yields competitive performance compared with state-of-the-art methods in both settings.
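To make the synthesis idea in the abstract concrete, below is a minimal, hedged sketch in Python/NumPy. It assumes each seen class has a learned dictionary matrix and a class-specific encoding matrix, combines them with an unseen-class prototype to synthesize pseudo visual instances, and then uses those instances as centroids of a simple nearest-centroid classifier. All dimensions, the similarity-weighted aggregation, and the random placeholder matrices are illustrative assumptions, not the paper's exact formulation or learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): d = visual feature dim, a = attribute dim,
# k = dictionary size, with a handful of seen and unseen classes.
d, a, k = 64, 16, 32
seen_protos = rng.normal(size=(5, a))     # seen-class prototypes (e.g. attribute vectors)
unseen_protos = rng.normal(size=(3, a))   # unseen-class prototypes

# Stand-ins for the learned per-class dictionary D_c (d x k) and
# class-specific encoding matrix W_c (k x a); in the paper these are
# learned from seen-class data, here they are random placeholders.
D = rng.normal(size=(len(seen_protos), d, k))
W = rng.normal(size=(len(seen_protos), k, a))

def synthesize(unseen_proto, seen_protos, D, W):
    """Synthesize a pseudo visual instance for an unseen class by combining
    seen-class dictionaries, weighted by prototype similarity (one plausible
    aggregation; the paper's exact scheme may differ)."""
    sims = seen_protos @ unseen_proto
    weights = np.exp(sims) / np.exp(sims).sum()            # softmax weights over seen classes
    parts = [D[c] @ (W[c] @ unseen_proto) for c in range(len(seen_protos))]
    return np.tensordot(weights, np.stack(parts), axes=1)  # weighted sum, shape (d,)

# Pseudo instances act as centroids of an unseen-class classifier.
centroids = np.stack([synthesize(s, seen_protos, D, W) for s in unseen_protos])

def classify(x, centroids):
    """Assign a test feature to the unseen class with the nearest pseudo centroid."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

print(classify(rng.normal(size=d), centroids))  # index of the predicted unseen class
```

The nearest-centroid step is only a stand-in for "train the classifiers for the unseen classes with these pseudo instances"; any standard classifier fitted on the synthesized data would fill the same role.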
