Electronic data

  • Binder1

    Rights statement: This is the author’s version of a work that was accepted for publication in Engineering Applications of Artificial Intelligence. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Engineering Applications of Artificial Intelligence, 91, 2020 DOI: 10.1016/j.engappai.2020.103559

    Accepted author manuscript, 2.04 MB, PDF document

    Available under license: CC BY-NC-ND

Links

Text available via DOI:

Interpretable policies for reinforcement learning by empirical fuzzy sets

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Interpretable policies for reinforcement learning by empirical fuzzy sets. / Huang, J.; Angelov, Plamen P.; Yin, C.
In: Engineering Applications of Artificial Intelligence, Vol. 91, 103559, 31.05.2020.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Harvard

Huang, J, Angelov, PP & Yin, C 2020, 'Interpretable policies for reinforcement learning by empirical fuzzy sets', Engineering Applications of Artificial Intelligence, vol. 91, 103559. https://doi.org/10.1016/j.engappai.2020.103559

APA

Huang, J., Angelov, P. P., & Yin, C. (2020). Interpretable policies for reinforcement learning by empirical fuzzy sets. Engineering Applications of Artificial Intelligence, 91, Article 103559. https://doi.org/10.1016/j.engappai.2020.103559

Vancouver

Huang J, Angelov PP, Yin C. Interpretable policies for reinforcement learning by empirical fuzzy sets. Engineering Applications of Artificial Intelligence. 2020 May 31;91:103559. Epub 2020 Feb 27. doi: 10.1016/j.engappai.2020.103559

Author

Huang, J. ; Angelov, Plamen P. ; Yin, C. / Interpretable policies for reinforcement learning by empirical fuzzy sets. In: Engineering Applications of Artificial Intelligence. 2020 ; Vol. 91.

Bibtex

@article{9bf686b5c9b04644a4b60cef96a643f4,
title = "Interpretable policies for reinforcement learning by empirical fuzzy sets",
abstract = "This paper proposes a method and an algorithm to implement interpretable fuzzy reinforcement learning (IFRL). It provides alternative solutions to common problems in RL, like function approximation and continuous action space. The learning process resembles that of human beings by clustering the encountered states, developing experiences for each of the typical cases, and making decisions fuzzily. The learned policy can be expressed as human-intelligible IF-THEN rules, which facilitates further investigation and improvement. It adopts the actor–critic architecture while differing from mainstream policy gradient methods. The value function is approximated through the fuzzy system AnYa. The state–action space is discretized into a static grid with nodes. Each node is treated as one prototype and corresponds to one fuzzy rule, with the value of the node being the consequent. Values of consequents are updated using the Sarsa(λ) algorithm. The probability distribution of optimal actions regarding different states is estimated through Empirical Data Analytics (EDA), Autonomous Learning Multi-Model Systems (ALMMo), and Empirical Fuzzy Sets (εFS). The fuzzy kernel of IFRL avoids the lack of interpretability in other methods based on neural networks. Simulation results with four problems, namely Mountain Car, Continuous Gridworld, Pendulum Position, and Tank Level Control, are presented as a proof of the proposed concept.",
keywords = "Interpretable fuzzy systems, Reinforcement learning, Probability distribution learning, Autonomous learning systems, AnYa type fuzzy systems, Empirical Fuzzy Sets",
author = "J. Huang and Angelov, {Plamen P.} and C. Yin",
note = "This is the author{\textquoteright}s version of a work that was accepted for publication in Engineering Applications of Artificial Intelligence. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Engineering Applications of Artificial Intelligence, 91, 2020 DOI: 10.1016/j.engappai.2020.103559",
year = "2020",
month = may,
day = "31",
doi = "10.1016/j.engappai.2020.103559",
language = "English",
volume = "91",
journal = "Engineering Applications of Artificial Intelligence",
issn = "0952-1976",
publisher = "Elsevier Limited",

}
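The abstract's value-function construction (a static grid of prototype nodes, one fuzzy rule per node, with the node's consequent updated by Sarsa(λ)) can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the class name `FuzzyGridSarsaLambda` and the inverse-squared-distance membership are assumptions standing in for AnYa's data-density-based firing strength.

```python
import numpy as np

class FuzzyGridSarsaLambda:
    """Hedged sketch: grid nodes as fuzzy-rule prototypes, values learned by Sarsa(lambda)."""

    def __init__(self, nodes, alpha=0.1, gamma=0.99, lam=0.9):
        self.nodes = np.asarray(nodes, dtype=float)  # prototype positions, shape (n, d)
        self.q = np.zeros(len(self.nodes))           # one consequent (value) per rule
        self.e = np.zeros(len(self.nodes))           # eligibility traces
        self.alpha, self.gamma, self.lam = alpha, gamma, lam

    def memberships(self, x):
        # Normalised inverse-squared-distance firing strengths (a stand-in
        # for AnYa's relative-data-density membership, not the paper's form).
        d2 = np.sum((self.nodes - np.asarray(x, dtype=float)) ** 2, axis=1)
        w = 1.0 / (d2 + 1e-8)
        return w / w.sum()

    def value(self, x):
        # Fuzzy inference: membership-weighted sum of rule consequents.
        return float(self.memberships(x) @ self.q)

    def update(self, x, reward, x_next, done=False):
        # Sarsa(lambda) TD update, spread across rules by firing strength.
        mu = self.memberships(x)
        target = reward + (0.0 if done else self.gamma * self.value(x_next))
        delta = target - self.value(x)
        self.e = self.gamma * self.lam * self.e + mu
        self.q += self.alpha * delta * self.e
        return delta
```

On a toy 1-D task, repeatedly rewarding a transition into a terminal state drives the consequent of the nearby prototype toward the reward, while eligibility traces propagate credit back to earlier states, which mirrors the update scheme the abstract describes.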

RIS

TY - JOUR

T1 - Interpretable policies for reinforcement learning by empirical fuzzy sets

AU - Huang, J.

AU - Angelov, Plamen P.

AU - Yin, C.

N1 - This is the author’s version of a work that was accepted for publication in Engineering Applications of Artificial Intelligence. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Engineering Applications of Artificial Intelligence, 91, 2020 DOI: 10.1016/j.engappai.2020.103559

PY - 2020/5/31

Y1 - 2020/5/31

N2 - This paper proposes a method and an algorithm to implement interpretable fuzzy reinforcement learning (IFRL). It provides alternative solutions to common problems in RL, like function approximation and continuous action space. The learning process resembles that of human beings by clustering the encountered states, developing experiences for each of the typical cases, and making decisions fuzzily. The learned policy can be expressed as human-intelligible IF-THEN rules, which facilitates further investigation and improvement. It adopts the actor–critic architecture while differing from mainstream policy gradient methods. The value function is approximated through the fuzzy system AnYa. The state–action space is discretized into a static grid with nodes. Each node is treated as one prototype and corresponds to one fuzzy rule, with the value of the node being the consequent. Values of consequents are updated using the Sarsa(λ) algorithm. The probability distribution of optimal actions regarding different states is estimated through Empirical Data Analytics (EDA), Autonomous Learning Multi-Model Systems (ALMMo), and Empirical Fuzzy Sets (εFS). The fuzzy kernel of IFRL avoids the lack of interpretability in other methods based on neural networks. Simulation results with four problems, namely Mountain Car, Continuous Gridworld, Pendulum Position, and Tank Level Control, are presented as a proof of the proposed concept.

AB - This paper proposes a method and an algorithm to implement interpretable fuzzy reinforcement learning (IFRL). It provides alternative solutions to common problems in RL, like function approximation and continuous action space. The learning process resembles that of human beings by clustering the encountered states, developing experiences for each of the typical cases, and making decisions fuzzily. The learned policy can be expressed as human-intelligible IF-THEN rules, which facilitates further investigation and improvement. It adopts the actor–critic architecture while differing from mainstream policy gradient methods. The value function is approximated through the fuzzy system AnYa. The state–action space is discretized into a static grid with nodes. Each node is treated as one prototype and corresponds to one fuzzy rule, with the value of the node being the consequent. Values of consequents are updated using the Sarsa(λ) algorithm. The probability distribution of optimal actions regarding different states is estimated through Empirical Data Analytics (EDA), Autonomous Learning Multi-Model Systems (ALMMo), and Empirical Fuzzy Sets (εFS). The fuzzy kernel of IFRL avoids the lack of interpretability in other methods based on neural networks. Simulation results with four problems, namely Mountain Car, Continuous Gridworld, Pendulum Position, and Tank Level Control, are presented as a proof of the proposed concept.

KW - Interpretable fuzzy systems

KW - Reinforcement learning

KW - Probability distribution learning

KW - Autonomous learning systems

KW - AnYa type fuzzy systems

KW - Empirical Fuzzy Sets

U2 - 10.1016/j.engappai.2020.103559

DO - 10.1016/j.engappai.2020.103559

M3 - Journal article

VL - 91

JO - Engineering Applications of Artificial Intelligence

JF - Engineering Applications of Artificial Intelligence

SN - 0952-1976

M1 - 103559

ER -