
Electronic data

  • 1911.11018

    Rights statement: This is the peer reviewed version of the following article: Gu, X, Angelov, PP, Soares, EA. A self‐adaptive synthetic over‐sampling technique for imbalanced classification. Int J Intell Syst. 2020; 923-943. https://doi.org/10.1002/int.22230, which has been published in final form at https://onlinelibrary.wiley.com/doi/abs/10.1002/int.22230. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for self-archiving.

    Accepted author manuscript, 837 KB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: https://doi.org/10.1002/int.22230


A Self-Adaptive Synthetic Over-Sampling Technique for Imbalanced Classification

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Publication status: Published
Journal publication date: 1/06/2020
Journal: International Journal of Intelligent Systems
Issue number: 6
Volume: 35
Number of pages: 21
Pages (from-to): 923-943
Early online date: 23/02/20
Original language: English

Abstract

Traditionally, in supervised machine learning, a significant part of the available data (usually 50%-80%) is used for training and the rest for validation. In many problems, however, the data are highly imbalanced across classes or do not cover the feasible data space well, which, in turn, creates problems in the validation and usage phases. In this paper, we propose a technique for synthesizing feasible and likely data to help balance the classes as well as to boost performance, both overall and per class as measured by the confusion matrix. The idea, in a nutshell, is to synthesize data samples in close vicinity to the actual data samples, specifically for the less represented (minority) classes. This also has implications for the so-called fairness of machine learning. In this paper, we propose a specific method for synthesizing data in a way that balances the classes and boosts performance, especially for the minority classes. It is generic and can be applied to different base algorithms, for example, support vector machines, k-nearest neighbour classifiers, deep neural networks, rule-based classifiers, decision trees, and so forth. The results demonstrate that (a) significantly more balanced (and fair) classification results can be achieved and (b) both the overall performance and the per-class performance measured by the confusion matrix can be boosted. In addition, this approach can be very valuable for cases where the amount of actually available labelled data is small, which is itself one of the problems of contemporary machine learning.
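
The paper's own self-adaptive procedure is given in the full text linked above; purely as an illustration of the general idea stated in the abstract (synthesizing new samples in close vicinity to existing minority-class samples and then training any base classifier on the balanced set), the sketch below interpolates between nearby minority points. The function name `oversample_minority` and the parameters `n_new` and `k` are hypothetical, and the interpolation rule is a generic SMOTE-style placeholder, not the authors' method.

```python
import numpy as np

def oversample_minority(X, y, minority_label, n_new, k=5, seed=None):
    """Illustrative sketch (not the paper's self-adaptive technique):
    create n_new synthetic minority samples by interpolating between each
    chosen minority sample and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(seed)
    X_min = X[y == minority_label]          # actual minority-class samples
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))        # pick a random minority sample
        # distances from that sample to all minority samples
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        # its k nearest minority neighbours (index 0 is the sample itself)
        neighbours = np.argsort(d)[1:k + 1]
        j = rng.choice(neighbours)
        lam = rng.random()                  # random point on the segment
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    X_new = np.vstack(synthetic)
    y_new = np.full(len(X_new), minority_label)
    return np.vstack([X, X_new]), np.concatenate([y, y_new])
```

The augmented `X`, `y` would then be passed to any base classifier (SVM, k-nearest neighbour, decision tree, and so on), which is the sense in which the abstract describes the approach as generic.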
