Electronic data

  • ZijiaTNNLS

    Rights statement: ©2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 527 KB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: https://doi.org/10.1109/TNNLS.2017.2691545

End-to-End Feature-Aware Label Space Encoding for Multilabel Classification With Many Classes

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

End-to-End Feature-Aware Label Space Encoding for Multilabel Classification With Many Classes. / Lin, Zijia; Ding, Guiguang; Han, Jungong et al.
In: IEEE Transactions on Neural Networks and Learning Systems, Vol. 29, No. 6, 06.2018, p. 2472-2487.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Harvard

Lin, Z, Ding, G, Han, J & Shao, L 2018, 'End-to-End Feature-Aware Label Space Encoding for Multilabel Classification With Many Classes', IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 6, pp. 2472-2487. https://doi.org/10.1109/TNNLS.2017.2691545

APA

Lin, Z., Ding, G., Han, J., & Shao, L. (2018). End-to-End Feature-Aware Label Space Encoding for Multilabel Classification With Many Classes. IEEE Transactions on Neural Networks and Learning Systems, 29(6), 2472-2487. https://doi.org/10.1109/TNNLS.2017.2691545

Vancouver

Lin Z, Ding G, Han J, Shao L. End-to-End Feature-Aware Label Space Encoding for Multilabel Classification With Many Classes. IEEE Transactions on Neural Networks and Learning Systems. 2018 Jun;29(6):2472-2487. Epub 2017 May 9. doi: 10.1109/TNNLS.2017.2691545

Author

Lin, Zijia ; Ding, Guiguang ; Han, Jungong et al. / End-to-End Feature-Aware Label Space Encoding for Multilabel Classification With Many Classes. In: IEEE Transactions on Neural Networks and Learning Systems. 2018 ; Vol. 29, No. 6. pp. 2472-2487.

Bibtex

@article{aab06dd743c44b4cbde1796cb121aa72,
title = "End-to-End Feature-Aware Label Space Encoding for Multilabel Classification With Many Classes",
abstract = "To make the problem of multilabel classification with many classes more tractable, in recent years, academia has seen efforts devoted to performing label space dimension reduction (LSDR). Specifically, LSDR encodes high-dimensional label vectors into low-dimensional code vectors lying in a latent space, so as to train predictive models at much lower costs. With respect to the prediction, it performs classification for any unseen instance by recovering a label vector from its predicted code vector via a decoding process. In this paper, we propose a novel method, namely End-to-End Feature-aware label space Encoding (E²FE), to perform LSDR. Instead of requiring an encoding function like most previous works, E²FE directly learns a code matrix formed by code vectors of the training instances in an end-to-end manner. Another distinct property of E²FE is its feature awareness attributable to the fact that the code matrix is learned by jointly maximizing the recoverability of the label space and the predictability of the latent space. Based on the learned code matrix, E²FE further trains predictive models to map instance features into code vectors, and also learns a linear decoding matrix for efficiently recovering the label vector of any unseen instance from its predicted code vector. Theoretical analyses show that both the code matrix and the linear decoding matrix in E²FE can be efficiently learned. Moreover, similar to previous works, E²FE can be specified to learn an encoding function. And it can also be extended with kernel tricks to handle nonlinear correlations between the feature space and the latent space. Comprehensive experiments conducted on diverse benchmark data sets with many classes show consistent performance gains of E²FE over the state-of-the-art methods.",
keywords = "End-to-end feature-aware label space encoding, label space dimension reduction (LSDR), multilabel classification",
author = "Zijia Lin and Guiguang Ding and Jungong Han and Ling Shao",
note = "{\textcopyright}2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.",
year = "2018",
month = jun,
doi = "10.1109/TNNLS.2017.2691545",
language = "English",
volume = "29",
pages = "2472--2487",
journal = "IEEE Transactions on Neural Networks",
issn = "1045-9227",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "6",

}

RIS

TY - JOUR

T1 - End-to-End Feature-Aware Label Space Encoding for Multilabel Classification With Many Classes

AU - Lin, Zijia

AU - Ding, Guiguang

AU - Han, Jungong

AU - Shao, Ling

N1 - ©2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

PY - 2018/6

Y1 - 2018/6

N2 - To make the problem of multilabel classification with many classes more tractable, in recent years, academia has seen efforts devoted to performing label space dimension reduction (LSDR). Specifically, LSDR encodes high-dimensional label vectors into low-dimensional code vectors lying in a latent space, so as to train predictive models at much lower costs. With respect to the prediction, it performs classification for any unseen instance by recovering a label vector from its predicted code vector via a decoding process. In this paper, we propose a novel method, namely End-to-End Feature-aware label space Encoding (E²FE), to perform LSDR. Instead of requiring an encoding function like most previous works, E²FE directly learns a code matrix formed by code vectors of the training instances in an end-to-end manner. Another distinct property of E²FE is its feature awareness attributable to the fact that the code matrix is learned by jointly maximizing the recoverability of the label space and the predictability of the latent space. Based on the learned code matrix, E²FE further trains predictive models to map instance features into code vectors, and also learns a linear decoding matrix for efficiently recovering the label vector of any unseen instance from its predicted code vector. Theoretical analyses show that both the code matrix and the linear decoding matrix in E²FE can be efficiently learned. Moreover, similar to previous works, E²FE can be specified to learn an encoding function. And it can also be extended with kernel tricks to handle nonlinear correlations between the feature space and the latent space. Comprehensive experiments conducted on diverse benchmark data sets with many classes show consistent performance gains of E²FE over the state-of-the-art methods.

AB - To make the problem of multilabel classification with many classes more tractable, in recent years, academia has seen efforts devoted to performing label space dimension reduction (LSDR). Specifically, LSDR encodes high-dimensional label vectors into low-dimensional code vectors lying in a latent space, so as to train predictive models at much lower costs. With respect to the prediction, it performs classification for any unseen instance by recovering a label vector from its predicted code vector via a decoding process. In this paper, we propose a novel method, namely End-to-End Feature-aware label space Encoding (E²FE), to perform LSDR. Instead of requiring an encoding function like most previous works, E²FE directly learns a code matrix formed by code vectors of the training instances in an end-to-end manner. Another distinct property of E²FE is its feature awareness attributable to the fact that the code matrix is learned by jointly maximizing the recoverability of the label space and the predictability of the latent space. Based on the learned code matrix, E²FE further trains predictive models to map instance features into code vectors, and also learns a linear decoding matrix for efficiently recovering the label vector of any unseen instance from its predicted code vector. Theoretical analyses show that both the code matrix and the linear decoding matrix in E²FE can be efficiently learned. Moreover, similar to previous works, E²FE can be specified to learn an encoding function. And it can also be extended with kernel tricks to handle nonlinear correlations between the feature space and the latent space. Comprehensive experiments conducted on diverse benchmark data sets with many classes show consistent performance gains of E²FE over the state-of-the-art methods.

KW - End-to-end feature-aware label space encoding

KW - label space dimension reduction (LSDR)

KW - multilabel classification

U2 - 10.1109/TNNLS.2017.2691545

DO - 10.1109/TNNLS.2017.2691545

M3 - Journal article

VL - 29

SP - 2472

EP - 2487

JO - IEEE Transactions on Neural Networks and Learning Systems

JF - IEEE Transactions on Neural Networks and Learning Systems

SN - 2162-237X

IS - 6

ER -
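
Illustrative sketch

The abstract above describes the generic label space dimension reduction (LSDR) pipeline: encode high-dimensional label vectors into low-dimensional code vectors, train predictive models that map instance features to code vectors, and decode predicted codes back into label vectors for unseen instances. The Python sketch below illustrates only that generic pipeline under simplified assumptions: it uses a truncated SVD of the label matrix and ridge regression as stand-ins, it is not E²FE's end-to-end, feature-aware objective, and all function names, parameters, and data are hypothetical.

import numpy as np

def lsdr_fit(X, Y, k, reg=1.0):
    # X: (n, d) instance features; Y: (n, L) binary label matrix; k: code dimension.
    # Encoding: take code vectors from a truncated SVD of the label matrix.
    # (E²FE instead learns the code matrix end-to-end, jointly maximizing
    # label-space recoverability and latent-space predictability.)
    U, s, _ = np.linalg.svd(Y, full_matrices=False)
    Z = U[:, :k] * s[:k]                                      # (n, k) code vectors

    # Predictive model: ridge regression from features to code vectors.
    d = X.shape[1]
    W = np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ Z)   # (d, k)

    # Linear decoding matrix: least-squares map from codes back to labels.
    D = np.linalg.lstsq(Z, Y, rcond=None)[0]                  # (k, L)
    return W, D

def lsdr_predict(X_new, W, D, threshold=0.5):
    # Recover a label vector for each unseen instance by decoding its
    # predicted code vector with the learned linear decoding matrix.
    Z_hat = X_new @ W
    Y_hat = Z_hat @ D
    return (Y_hat >= threshold).astype(int)

# Toy usage with random data (shapes only; not a real benchmark).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))                          # 100 instances, 20 features
Y = (rng.random(size=(100, 500)) < 0.02).astype(float)  # 500 classes, sparse labels
W, D = lsdr_fit(X, Y, k=30)
predictions = lsdr_predict(X[:5], W, D)

In E²FE, by contrast, the code matrix is learned directly by jointly maximizing the recoverability of the label space and the predictability of the latent space, and the method can additionally be specified to learn an explicit encoding function or extended with kernel tricks; see the paper via the DOI above for the actual formulation.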