Electronic data

  • ZijiaTNNLS

    Accepted author manuscript, 527 KB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI:

End-to-End Feature-Aware Label Space Encoding for Multilabel Classification With Many Classes

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Journal publication date: 06/2018
Journal: IEEE Transactions on Neural Networks and Learning Systems
Issue number: 6
Volume: 29
Number of pages: 16
Pages (from-to): 2472-2487
Publication status: Published
Early online date: 9/05/17
Original language: English

Abstract

To make multilabel classification with many classes more tractable, considerable effort has been devoted in recent years to label space dimension reduction (LSDR). Specifically, LSDR encodes high-dimensional label vectors into low-dimensional code vectors lying in a latent space, so that predictive models can be trained at much lower cost. At prediction time, an unseen instance is classified by recovering a label vector from its predicted code vector via a decoding process. In this paper, we propose a novel LSDR method, End-to-End Feature-aware label space Encoding (E²FE). Unlike most previous works, which require an explicit encoding function, E²FE directly learns a code matrix formed by the code vectors of the training instances in an end-to-end manner. Another distinct property of E²FE is its feature awareness: the code matrix is learned by jointly maximizing the recoverability of the label space and the predictability of the latent space. Based on the learned code matrix, E²FE further trains predictive models that map instance features into code vectors, and it also learns a linear decoding matrix for efficiently recovering the label vector of any unseen instance from its predicted code vector. Theoretical analyses show that both the code matrix and the linear decoding matrix can be learned efficiently. Moreover, like previous works, E²FE can be specified to learn an encoding function, and it can be extended with kernel tricks to handle nonlinear correlations between the feature space and the latent space. Comprehensive experiments on diverse benchmark data sets with many classes show consistent performance gains of E²FE over state-of-the-art methods.
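
To make the encode–predict–decode pipeline described in the abstract concrete, the sketch below shows a rough, simplified LSDR workflow in Python (NumPy). It is not the authors' E²FE optimization: here the code matrix is obtained from an SVD of a heuristic, feature-aware blend of the label matrix and its feature-predictable part, whereas E²FE learns it end to end by jointly maximizing label recoverability and latent-space predictability. All variable names, the blending weight alpha, and the ridge parameter are illustrative assumptions.

# Minimal, illustrative LSDR sketch (not the authors' exact E2FE method):
# learn low-dimensional codes, regress features onto codes, decode linearly.
import numpy as np

rng = np.random.default_rng(0)
n, d, L, k = 200, 50, 100, 10                       # instances, feature dim, #labels, code dim

X = rng.normal(size=(n, d))                         # training features
Y = (rng.random(size=(n, L)) < 0.05).astype(float)  # sparse multilabel matrix

# 1) Learn a code matrix Z (n x k). Heuristic stand-in for "feature-aware" encoding:
#    blend the label matrix with its projection onto the feature span, then take
#    the top-k SVD components as code vectors.
alpha = 0.5
H = X @ np.linalg.pinv(X)                           # projector onto the column space of X
M = alpha * Y + (1 - alpha) * H @ Y
U, S, Vt = np.linalg.svd(M, full_matrices=False)
Z = U[:, :k] * S[:k]                                # code vectors of the training instances

# 2) Train predictive models mapping features to codes (ridge regression).
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Z)   # d x k

# 3) Learn a linear decoding matrix D (k x L) by least squares: Z @ D ≈ Y.
D = np.linalg.lstsq(Z, Y, rcond=None)[0]

# Prediction for an unseen instance: encode via W, decode via D, then threshold.
x_new = rng.normal(size=(1, d))
scores = (x_new @ W) @ D                            # label-score vector (1 x L)
y_pred = (scores > 0.5).astype(int)
print(int(y_pred.sum()), "labels predicted for the new instance")

The key design point this sketch mirrors is that prediction only ever works in the k-dimensional latent space (k « L), so the per-instance cost of the regressors scales with the code dimension rather than the number of labels; the final recovery is a single cheap linear map.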

Bibliographic note

©2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.