TY - GEN
T1 - Advanced Machine Learning Approach of Power Flow Optimization in Community Microgrid
AU - Aldahmashi, Jamal
AU - Ma, Xiandong
N1 - ©2022 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
PY - 2022/10/10
Y1 - 2022/10/10
N2 - With the increasing penetration of distributed renewable energy resources (DERs), the electrical grid is experiencing, on a daily basis, rapid and massive fluctuations in power and voltage profiles. Fast and precise real-time control strategies play an important role in ensuring that the power system operates at an optimal status. Solving real-time optimal power flow (OPF) problems while satisfying the operational constraints of the community microgrid (CMG) is considered a promising technique for controlling the fluctuations of renewable sources and loads. This paper adopts a new deep reinforcement learning (DRL) algorithm, called Twin-Delayed Deep Deterministic Policy Gradient (TD3), to solve the real-time OPF with consideration of DERs and distributed energy storages (DESs) in the CMG. Training and testing of the algorithm are conducted on an IEEE 14-bus test system. Comparative results show the effectiveness of the proposed algorithm.
AB - With the increasing penetration of distributed renewable energy resources (DERs), the electrical grid is experiencing, on a daily basis, rapid and massive fluctuations in power and voltage profiles. Fast and precise real-time control strategies play an important role in ensuring that the power system operates at an optimal status. Solving real-time optimal power flow (OPF) problems while satisfying the operational constraints of the community microgrid (CMG) is considered a promising technique for controlling the fluctuations of renewable sources and loads. This paper adopts a new deep reinforcement learning (DRL) algorithm, called Twin-Delayed Deep Deterministic Policy Gradient (TD3), to solve the real-time OPF with consideration of DERs and distributed energy storages (DESs) in the CMG. Training and testing of the algorithm are conducted on an IEEE 14-bus test system. Comparative results show the effectiveness of the proposed algorithm.
KW - Reinforcement Learning (RL)
KW - Deep Reinforcement Learning (DRL)
KW - Optimal Power Flow (OPF)
KW - Twin-Delayed Deep Deterministic Policy Gradient (TD3)
KW - Community Microgrid (CMG)
U2 - 10.1109/ICAC55051.2022.9911103
DO - 10.1109/ICAC55051.2022.9911103
M3 - Conference contribution/Paper
BT - Proceedings of the 27th International Conference on Automation & Computing (ICAC2022)
PB - IEEE
ER -