
Electronic data

  • PSOALMMo

    Rights statement: ©2020 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 665 KB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Particle Swarm Optimized Autonomous Learning Fuzzy System

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Journal publication date: 30/11/2021
Journal: IEEE Transactions on Cybernetics
Issue number: 11
Volume: 51
Number of pages: 12
Pages (from-to): 5352-5363
Publication status: Published
Early online date: 20/02/20
Original language: English

Abstract

The antecedent and consequent parts of a first-order evolving intelligent system (EIS) determine the validity of the learning results and the overall system performance. Nonetheless, state-of-the-art techniques mostly stress novelty from the system-identification point of view but pay less attention to the optimality of the learned parameters. Using the recently introduced autonomous learning multiple model (ALMMo) system as the implementation basis, this paper introduces a particle swarm-based approach for EIS optimization. The proposed approach is able to simultaneously optimize the antecedent and consequent parameters of ALMMo and effectively enhance system performance by iteratively searching for optimal solutions in the problem space. In addition, the proposed optimization approach does not adversely influence the "one-pass" learning ability of ALMMo. Once the optimization process is complete, ALMMo can continue to learn from new data and incorporate unseen data patterns recursively without full retraining. Experimental studies with a number of real-world benchmark problems validate the proposed concept and general principles. It is also verified that the proposed optimization approach can be applied to other types of EISs with similar operating mechanisms.
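
The snippet below is a rough, illustrative sketch of the general particle swarm mechanism the abstract refers to, not the authors' implementation: a population of candidate parameter vectors is iteratively pulled toward personal and global bests so as to minimise an error measure. The names (pso_optimize, loss_fn), the hyperparameter values, and the toy quadratic objective are assumptions for illustration only; in the paper's setting the vector would pack the antecedent and consequent parameters of ALMMo and the loss would be the system's prediction error on data.

import numpy as np

def pso_optimize(loss_fn, dim, n_particles=30, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0), seed=0):
    """Generic particle swarm optimization of a flat parameter vector.

    loss_fn maps a parameter vector of shape (dim,) to a scalar loss,
    e.g. the validation error of a model whose tunable parameters are
    packed into that vector (hypothetical interface, for illustration).
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds

    # Initialise particle positions and (small) velocities within the bounds.
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = rng.uniform(-(hi - lo), hi - lo, size=(n_particles, dim)) * 0.1

    pbest = pos.copy()                                  # personal best positions
    pbest_loss = np.array([loss_fn(p) for p in pos])    # personal best losses
    g = np.argmin(pbest_loss)
    gbest, gbest_loss = pbest[g].copy(), pbest_loss[g]  # global best so far

    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Standard velocity update: inertia + cognitive + social terms.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)

        # Evaluate and update personal/global bests.
        losses = np.array([loss_fn(p) for p in pos])
        improved = losses < pbest_loss
        pbest[improved] = pos[improved]
        pbest_loss[improved] = losses[improved]

        g = np.argmin(pbest_loss)
        if pbest_loss[g] < gbest_loss:
            gbest, gbest_loss = pbest[g].copy(), pbest_loss[g]

    return gbest, gbest_loss

# Toy usage: minimise a quadratic as a stand-in for a system error measure.
if __name__ == "__main__":
    best, best_loss = pso_optimize(lambda p: float(np.sum(p ** 2)), dim=5)
    print(best_loss)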
