
Electronic data

  • Binder1

    Rights statement: This is the author’s version of a work that was accepted for publication in Engineering Applications of Artificial Intelligence. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Engineering Applications of Artificial Intelligence, 91, 2020 DOI: 10.1016/j.engappai.2020.103559

    Accepted author manuscript, 2.04 MB, PDF document

    Available under license: CC BY-NC-ND


Interpretable policies for reinforcement learning by empirical fuzzy sets

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Article number: 103559
Journal publication date: 31/05/2020
Journal: Engineering Applications of Artificial Intelligence
Volume: 91
Number of pages: 13
Publication status: Published
Early online date: 27/02/2020
Original language: English

Abstract

This paper proposes a method and an algorithm to implement interpretable fuzzy reinforcement learning (IFRL). It provides alternative solutions to common problems in RL, such as function approximation and continuous action spaces. The learning process resembles that of human beings: clustering the encountered states, developing experience for each of the typical cases, and making decisions fuzzily. The learned policy can be expressed as human-intelligible IF–THEN rules, which facilitates further investigation and improvement. It adopts the actor–critic architecture while differing from mainstream policy gradient methods. The value function is approximated through the fuzzy system AnYa. The state–action space is discretized into a static grid of nodes; each node is treated as one prototype and corresponds to one fuzzy rule, with the value of the node being the consequent. Values of consequents are updated using the Sarsa(λ) algorithm. The probability distribution of optimal actions for different states is estimated through Empirical Data Analytics (EDA), Autonomous Learning Multi-Model Systems (ALMMo), and Empirical Fuzzy Sets (εFS). The fuzzy kernel of IFRL avoids the lack of interpretability found in other methods based on neural networks. Simulation results on four problems, namely Mountain Car, Continuous Gridworld, Pendulum Position, and Tank Level Control, are presented as a proof of the proposed concept.
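The core mechanism the abstract describes, a static grid of prototype nodes whose consequent values are updated with Sarsa(λ), can be sketched as follows. This is a minimal illustration under simplifying assumptions (a 1-D state space and inverse-distance memberships), not the authors' AnYa/EDA/ALMMo implementation; the class name `GridSarsaLambda` and the membership choice are hypothetical.

```python
import numpy as np

class GridSarsaLambda:
    """Sketch: grid-node value approximation updated with Sarsa(lambda).

    Each grid node acts as one prototype / fuzzy rule; its per-action
    value is the rule consequent, as described in the abstract.
    """

    def __init__(self, nodes, n_actions, alpha=0.1, gamma=0.99, lam=0.9):
        self.nodes = np.asarray(nodes, dtype=float)  # prototype states
        self.q = np.zeros((len(self.nodes), n_actions))  # consequent values
        self.e = np.zeros_like(self.q)                   # eligibility traces
        self.alpha, self.gamma, self.lam = alpha, gamma, lam

    def memberships(self, s):
        # Fuzzy firing strengths: inverse-distance weights to each
        # prototype, normalised to sum to 1 (an illustrative choice,
        # standing in for the paper's AnYa-style memberships).
        w = 1.0 / (np.abs(self.nodes - s) + 1e-8)
        return w / w.sum()

    def value(self, s, a):
        # Q(s, a) as a membership-weighted sum of node consequents.
        return self.memberships(s) @ self.q[:, a]

    def update(self, s, a, r, s2, a2):
        # Standard Sarsa(lambda) update with accumulating traces.
        delta = r + self.gamma * self.value(s2, a2) - self.value(s, a)
        self.e *= self.gamma * self.lam
        self.e[:, a] += self.memberships(s)
        self.q += self.alpha * delta * self.e
```

Because the consequents live on named grid nodes, the learned policy can be read off as one IF–THEN rule per node, which is the interpretability property the paper emphasises.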
