
Electronic data

  • 1911.11018

    Rights statement: This is the peer reviewed version of the following article: Gu, X, Angelov, PP, Soares, EA. A self‐adaptive synthetic over‐sampling technique for imbalanced classification. Int J Intell Syst. 2020; 923-943. https://doi.org/10.1002/int.22230, which has been published in final form at https://onlinelibrary.wiley.com/doi/abs/10.1002/int.22230. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for self-archiving.

    Accepted author manuscript, 837 KB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License


A Self-Adaptive Synthetic Over-Sampling Technique for Imbalanced Classification

Research output: Contribution to journal › Journal article › peer-review

Published

Standard

A Self-Adaptive Synthetic Over-Sampling Technique for Imbalanced Classification. / Gu, Xiaowei; Angelov, Plamen; Almeida Soares, Eduardo.

In: International Journal of Intelligent Systems, Vol. 35, No. 6, 01.06.2020, p. 923-943.

Research output: Contribution to journal › Journal article › peer-review


Author

Gu, Xiaowei; Angelov, Plamen; Almeida Soares, Eduardo. / A Self-Adaptive Synthetic Over-Sampling Technique for Imbalanced Classification. In: International Journal of Intelligent Systems. 2020; Vol. 35, No. 6. pp. 923-943.

Bibtex

@article{6703bb66e51e402caeb2161cf33a02f4,
title = "A Self-Adaptive Synthetic Over-Sampling Technique for Imbalanced Classification",
abstract = "Traditionally, in supervised machine learning, a significant part of the available data (usually 50%-80%) is used for training and the rest for validation. In many problems, however, the data are highly imbalanced across classes or do not cover the feasible data space well, which, in turn, creates problems in the validation and usage phases. In this paper, we propose a technique for synthesizing feasible and likely data to help balance the classes as well as to boost the performance, both per class (in terms of the confusion matrix) and overall. The idea, in a nutshell, is to synthesize data samples in close vicinity to the actual data samples, specifically for the less represented (minority) classes. This also has implications for the so-called fairness of machine learning. The proposed method is generic and can be applied with different base algorithms, for example, support vector machines, k-nearest neighbour classifiers, deep neural networks, rule-based classifiers, decision trees, and so forth. The results demonstrate that (a) significantly more balanced (and fair) classification results can be achieved and (b) the overall performance, as well as the per-class performance measured by the confusion matrix, can be boosted. In addition, this approach can be very valuable when the amount of available labelled data is small, which is itself one of the problems of contemporary machine learning.",
author = "Xiaowei Gu and Plamen Angelov and {Almeida Soares}, Eduardo",
note = "This is the peer reviewed version of the following article: Gu, X, Angelov, PP, Soares, EA. A self‐adaptive synthetic over‐sampling technique for imbalanced classification. Int J Intell Syst. 2020; 923-943. https://doi.org/10.1002/int.22230, which has been published in final form at https://onlinelibrary.wiley.com/doi/abs/10.1002/int.22230. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for self-archiving.",
year = "2020",
month = jun,
day = "1",
doi = "10.1002/int.22230",
language = "English",
volume = "35",
pages = "923--943",
journal = "International Journal of Intelligent Systems",
issn = "0884-8173",
publisher = "John Wiley and Sons Ltd",
number = "6",

}
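The abstract's core idea — synthesizing new samples in close vicinity to actual minority-class samples to rebalance the training set — can be illustrated with a minimal SMOTE-style interpolation sketch. This is an assumption-laden illustration of the general oversampling family, not the paper's self-adaptive method; the function name and parameters are hypothetical.

```python
import random

def oversample_minority(minority, n_new, k=3, seed=0):
    """Sketch of nearest-neighbour interpolation oversampling:
    each synthetic point lies on the segment between a minority
    sample and one of its k nearest minority neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x within the minority class (Euclidean)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(x, nb)))
    return synthetic

# Four 2-D minority samples; generate six synthetic ones near them.
minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (1.1, 1.3)]
new_points = oversample_minority(minority, n_new=6)
print(len(new_points))  # 6 synthetic minority samples
```

Because every synthetic point is an interpolation between two real minority samples, the generated data stay inside the minority class's convex hull, which is what "close vicinity to the actual data samples" ensures.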

RIS

TY - JOUR

T1 - A Self-Adaptive Synthetic Over-Sampling Technique for Imbalanced Classification

AU - Gu, Xiaowei

AU - Angelov, Plamen

AU - Almeida Soares, Eduardo

N1 - This is the peer reviewed version of the following article: Gu, X, Angelov, PP, Soares, EA. A self‐adaptive synthetic over‐sampling technique for imbalanced classification. Int J Intell Syst. 2020; 923-943. https://doi.org/10.1002/int.22230, which has been published in final form at https://onlinelibrary.wiley.com/doi/abs/10.1002/int.22230. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for self-archiving.

PY - 2020/6/1

Y1 - 2020/6/1

N2 - Traditionally, in supervised machine learning, a significant part of the available data (usually 50%-80%) is used for training and the rest for validation. In many problems, however, the data are highly imbalanced across classes or do not cover the feasible data space well, which, in turn, creates problems in the validation and usage phases. In this paper, we propose a technique for synthesizing feasible and likely data to help balance the classes as well as to boost the performance, both per class (in terms of the confusion matrix) and overall. The idea, in a nutshell, is to synthesize data samples in close vicinity to the actual data samples, specifically for the less represented (minority) classes. This also has implications for the so-called fairness of machine learning. The proposed method is generic and can be applied with different base algorithms, for example, support vector machines, k-nearest neighbour classifiers, deep neural networks, rule-based classifiers, decision trees, and so forth. The results demonstrate that (a) significantly more balanced (and fair) classification results can be achieved and (b) the overall performance, as well as the per-class performance measured by the confusion matrix, can be boosted. In addition, this approach can be very valuable when the amount of available labelled data is small, which is itself one of the problems of contemporary machine learning.

AB - Traditionally, in supervised machine learning, a significant part of the available data (usually 50%-80%) is used for training and the rest for validation. In many problems, however, the data are highly imbalanced across classes or do not cover the feasible data space well, which, in turn, creates problems in the validation and usage phases. In this paper, we propose a technique for synthesizing feasible and likely data to help balance the classes as well as to boost the performance, both per class (in terms of the confusion matrix) and overall. The idea, in a nutshell, is to synthesize data samples in close vicinity to the actual data samples, specifically for the less represented (minority) classes. This also has implications for the so-called fairness of machine learning. The proposed method is generic and can be applied with different base algorithms, for example, support vector machines, k-nearest neighbour classifiers, deep neural networks, rule-based classifiers, decision trees, and so forth. The results demonstrate that (a) significantly more balanced (and fair) classification results can be achieved and (b) the overall performance, as well as the per-class performance measured by the confusion matrix, can be boosted. In addition, this approach can be very valuable when the amount of available labelled data is small, which is itself one of the problems of contemporary machine learning.

U2 - 10.1002/int.22230

DO - 10.1002/int.22230

M3 - Journal article

VL - 35

SP - 923

EP - 943

JO - International Journal of Intelligent Systems

JF - International Journal of Intelligent Systems

SN - 0884-8173

IS - 6

ER -