
Electronic data

  • jai100018

    Proof, 1.34 MB, PDF document

    Available under license: CC BY-NC-ND: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License

Links

Text available via DOI: https://doi.org/10.1016/j.jai.2023.100018


Sampled-data Control through Model-Free Reinforcement Learning with Effective Experience Replay

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Sampled-data Control through Model-Free Reinforcement Learning with Effective Experience Replay. / Xiao, Bo; Lam, Hak-Keung; Su, Xiaojie et al.
In: Journal of Automation and Intelligence, Vol. 2, No. 1, 28.02.2023.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Harvard

Xiao, B, Lam, H-K, Su, X, Wang, Z, P.W.Lo, F, Chen, S & Yeatman, E 2023, 'Sampled-data Control through Model-Free Reinforcement Learning with Effective Experience Replay', Journal of Automation and Intelligence, vol. 2, no. 1. https://doi.org/10.1016/j.jai.2023.100018

APA

Xiao, B., Lam, H.-K., Su, X., Wang, Z., P.W.Lo, F., Chen, S., & Yeatman, E. (2023). Sampled-data Control through Model-Free Reinforcement Learning with Effective Experience Replay. Journal of Automation and Intelligence, 2(1). https://doi.org/10.1016/j.jai.2023.100018

Vancouver

Xiao B, Lam HK, Su X, Wang Z, P.W.Lo F, Chen S et al. Sampled-data Control through Model-Free Reinforcement Learning with Effective Experience Replay. Journal of Automation and Intelligence. 2023 Feb 28;2(1). Epub 2023 Feb 1. doi: 10.1016/j.jai.2023.100018

Author

Xiao, Bo ; Lam, Hak-Keung ; Su, Xiaojie et al. / Sampled-data Control through Model-Free Reinforcement Learning with Effective Experience Replay. In: Journal of Automation and Intelligence. 2023 ; Vol. 2, No. 1.

Bibtex

@article{4a8fe69bae6549da86b25dd947cae085,
title = "Sampled-data Control through Model-Free Reinforcement Learning with Effective Experience Replay",
abstract = "Reinforcement Learning (RL) based control algorithms can learn the control strategies for nonlinear and uncertain environment during interacting with it. Guided by the rewards generated by environment, a RL agent can learn the control strategy directly in a model-free way instead of investigating the dynamic model of the environment. In the paper, we propose the sampled-data RL control strategy to reduce the computational demand. In the sampled-data control strategy, the whole control system is of a hybrid structure, in which the plant is of continuous structure while the controller (RL agent) adopts a discrete structure. Given that the continuous states of the plant will be the input of the agent, the state–action value function is approximated by the fully connected feed-forward neural networks (FCFFNN). Instead of learning the controller at every step during the interaction with the environment, the learning and acting stages are decoupled to learn the control strategy more effectively through experience replay. In the acting stage, the most effective experience obtained during the interaction with the environment will be stored and during the learning stage, the stored experience will be replayed to customized times, which helps enhance the experience replay process.The effectiveness of proposed approach will be verified by simulation examples.",
keywords = "Reinforcement learning, Neural networks, Sampled-data control, Model-free, Effective experience replay",
author = "Bo Xiao and Hak-Keung Lam and Xiaojie Su and Ziwei Wang and Frank P.W.Lo and Shihong Chen and Eric Yeatman",
year = "2023",
month = feb,
day = "28",
doi = "10.1016/j.jai.2023.100018",
language = "English",
volume = "2",
journal = "Journal of Automation and Intelligence",
issn = "2949-8554",
number = "1",

}

RIS

TY - JOUR

T1 - Sampled-data Control through Model-Free Reinforcement Learning with Effective Experience Replay

AU - Xiao, Bo

AU - Lam, Hak-Keung

AU - Su, Xiaojie

AU - Wang, Ziwei

AU - P.W.Lo, Frank

AU - Chen, Shihong

AU - Yeatman, Eric

PY - 2023/2/28

Y1 - 2023/2/28

N2 - Reinforcement Learning (RL) based control algorithms can learn control strategies for nonlinear and uncertain environments while interacting with them. Guided by the rewards generated by the environment, an RL agent can learn the control strategy directly in a model-free way instead of investigating the dynamic model of the environment. In this paper, we propose a sampled-data RL control strategy to reduce the computational demand. In the sampled-data control strategy, the whole control system has a hybrid structure, in which the plant evolves in continuous time while the controller (RL agent) operates in discrete time. Given that the continuous states of the plant are the input of the agent, the state–action value function is approximated by a fully connected feed-forward neural network (FCFFNN). Instead of updating the controller at every step during the interaction with the environment, the learning and acting stages are decoupled so that the control strategy can be learned more effectively through experience replay. In the acting stage, the most effective experience obtained during the interaction with the environment is stored, and in the learning stage, the stored experience is replayed a customized number of times, which enhances the experience replay process. The effectiveness of the proposed approach is verified by simulation examples.

AB - Reinforcement Learning (RL) based control algorithms can learn control strategies for nonlinear and uncertain environments while interacting with them. Guided by the rewards generated by the environment, an RL agent can learn the control strategy directly in a model-free way instead of investigating the dynamic model of the environment. In this paper, we propose a sampled-data RL control strategy to reduce the computational demand. In the sampled-data control strategy, the whole control system has a hybrid structure, in which the plant evolves in continuous time while the controller (RL agent) operates in discrete time. Given that the continuous states of the plant are the input of the agent, the state–action value function is approximated by a fully connected feed-forward neural network (FCFFNN). Instead of updating the controller at every step during the interaction with the environment, the learning and acting stages are decoupled so that the control strategy can be learned more effectively through experience replay. In the acting stage, the most effective experience obtained during the interaction with the environment is stored, and in the learning stage, the stored experience is replayed a customized number of times, which enhances the experience replay process. The effectiveness of the proposed approach is verified by simulation examples.

KW - Reinforcement learning

KW - Neural networks

KW - Sampled-data control

KW - Model-free

KW - Effective experience replay

U2 - 10.1016/j.jai.2023.100018

DO - 10.1016/j.jai.2023.100018

M3 - Journal article

VL - 2

JO - Journal of Automation and Intelligence

JF - Journal of Automation and Intelligence

SN - 2949-8554

IS - 1

ER -
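
Illustrative code sketch

The abstract above describes two technical ideas: a sampled-data loop in which a continuous-time plant is driven by a discrete-time RL controller, and an "effective" experience replay scheme in which only the most effective transitions are stored during the acting stage and replayed a customized number of times during the learning stage. The Python sketch below is a minimal illustration of those ideas under stated assumptions, not the authors' implementation: the pendulum dynamics, network sizes, reward threshold, and all names (QNetwork, step_plant, EffectiveReplayBuffer) are hypothetical stand-ins.

import random
from collections import deque

import numpy as np
import torch.nn as nn


class QNetwork(nn.Module):
    """FCFFNN approximating the state-action value function Q(s, a)."""
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state):
        return self.net(state)


def plant_derivative(x, u):
    # Continuous-time plant dx/dt = f(x, u); a toy pendulum, purely illustrative.
    theta, omega = x
    return np.array([omega, -9.81 * np.sin(theta) + u])


def step_plant(x, u, sample_period=0.05, integrator_dt=0.001):
    # Zero-order hold: the discrete control input u is held constant while
    # the continuous plant is integrated over one sampling period.
    t = 0.0
    while t < sample_period:
        x = x + integrator_dt * plant_derivative(x, u)
        t += integrator_dt
    return x


class EffectiveReplayBuffer:
    """Stores only the most effective (high-reward) transitions and replays
    them a customized number of times during the learning stage."""
    def __init__(self, capacity=10000, reward_threshold=0.0, replay_times=4):
        self.buffer = deque(maxlen=capacity)
        self.reward_threshold = reward_threshold
        self.replay_times = replay_times  # passes over stored experience per learning stage

    def store(self, transition):
        # Acting stage: keep a transition only if it is judged effective.
        _, _, reward, _ = transition
        if reward >= self.reward_threshold:
            self.buffer.append(transition)

    def sample(self, batch_size):
        # Learning stage: draw minibatches from the stored effective experience.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))


# Acting stage with a placeholder policy: one action per sampling period,
# while the plant evolves continuously in between.
buffer = EffectiveReplayBuffer(reward_threshold=-0.05, replay_times=4)
x = np.array([0.1, 0.0])
for _ in range(200):
    u = random.choice([-2.0, 0.0, 2.0])       # placeholder policy
    x_next = step_plant(x, u)
    reward = -float(x_next[0] ** 2)           # toy cost on the pendulum angle
    buffer.store((x.copy(), u, reward, x_next.copy()))
    x = x_next

The decoupling is the point of the scheme: during a learning stage one would loop replay_times over buffer.sample(...) to update QNetwork, rather than updating the controller after every plant step as in conventional online RL.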