
Electronic data

  • SOFEnsemble

    Rights statement: This is the author’s version of a work that was accepted for publication in Knowledge-Based Systems. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Knowledge-Based Systems, 218, 2021 DOI: 10.1016/j.knosys.2021.106870

    Accepted author manuscript, 710 KB, PDF document

    Embargo ends: 18/02/22

    Available under license: CC BY-NC-ND

Links

Text available via DOI:


Self-organizing fuzzy inference ensemble system for big streaming data classification

Research output: Contribution to journal › Journal article › peer-review

Published
Article number: 106870
Journal publication date: 22/04/2021
Journal: Knowledge-Based Systems
Volume: 218
Number of pages: 13
Publication status: Published
Early online date: 18/02/21
Original language: English

Abstract

An evolving intelligent system (EIS) is able to self-update its system structure and meta-parameters from streaming data. However, since the majority of EISs are implemented on a single-model architecture, their performance on large-scale, complex data streams is often limited. To address this deficiency, a novel self-organizing fuzzy inference ensemble framework is proposed in this paper. As the base learner of the proposed ensemble system, the self-organizing fuzzy inference system is capable of self-learning a highly transparent predictive model from streaming data on a chunk-by-chunk basis through a human-interpretable process. Importantly, the base learner can continuously self-adjust its decision boundaries based on the inter-class and intra-class distances between prototypes identified from successive data chunks, yielding higher classification precision. Thanks to its parallel distributed computing architecture, the proposed ensemble framework achieves high classification precision while maintaining high computational efficiency on large-scale problems. Numerical examples based on popular benchmark big data problems demonstrate the superior performance of the proposed approach over state-of-the-art alternatives in terms of both classification accuracy and computational efficiency.
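To illustrate the chunk-by-chunk, prototype-based learning the abstract describes, here is a minimal sketch of such a base learner. This is not the authors' algorithm: the class name, the nearest-prototype decision rule, and the `merge_radius` threshold are all illustrative assumptions standing in for the paper's fuzzy inference and inter-/intra-class distance machinery.

```python
# Hedged sketch of a prototype-based, chunk-by-chunk classifier.
# All names and thresholds are illustrative, not from the paper.
import math

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class PrototypeChunkClassifier:
    def __init__(self, merge_radius=0.5):
        # merge_radius: assumed intra-class threshold; same-class prototypes
        # closer than this are merged rather than duplicated.
        self.merge_radius = merge_radius
        self.prototypes = []  # list of (vector, label) pairs

    def partial_fit(self, chunk):
        """Update the prototype set from one chunk of (vector, label) pairs."""
        for x, y in chunk:
            for i, (p, label) in enumerate(self.prototypes):
                if label == y and dist(p, x) < self.merge_radius:
                    # Merge: move the existing prototype toward the new sample.
                    merged = tuple((a + b) / 2 for a, b in zip(p, x))
                    self.prototypes[i] = (merged, label)
                    break
            else:
                # No nearby same-class prototype: the sample becomes one.
                self.prototypes.append((tuple(x), y))

    def predict(self, x):
        """Nearest-prototype decision (a crude stand-in for fuzzy inference)."""
        _, label = min(self.prototypes, key=lambda pl: dist(pl[0], x))
        return label
```

In the paper's framework, an ensemble of such base learners would be trained in parallel on partitions of the stream and their outputs combined; the sketch above shows only the single-learner, chunk-wise update idea.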
