Hybrid Safe Reinforcement Learning: Tackling Distribution Shift and Outliers with the Student-t’s Process

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Publication status: Published
Article number: 129912
Journal publication date: 14/06/2025
Journal: Neurocomputing
Volume: 634
Number of pages: 15
Early online date: 11/03/2025
Original language: English

Abstract

Safe reinforcement learning (SRL) aims to optimize control policies that maximize long-term reward while adhering to safety constraints. SRL has many real-world applications, such as autonomous vehicles, industrial robotics, and healthcare. Recent advances in offline reinforcement learning (RL) - where agents learn policies from static datasets without interacting with the environment - have made it a promising approach for deriving safe control policies. However, offline RL faces significant challenges, such as covariate shift and outliers in the data, which can lead to suboptimal policies. Similarly, online SRL, which derives safe policies through real-time environment interaction, struggles with outliers and often relies on unrealistic regularity assumptions, limiting its practicality. This paper addresses these challenges by proposing a hybrid offline-online approach. First, prior knowledge from offline learning guides online exploration. Then, during online learning, we replace the popular Gaussian Process (GP) with the Student-t's Process (TP) to enhance robustness to covariate shift and outliers.
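
The abstract's key technical move is swapping the Gaussian Process for a Student-t Process during online learning. As a rough illustration of what that swap involves (not the authors' code), the sketch below computes a TP predictive distribution following Shah et al. (2014): the posterior mean matches the GP's, but the predictive covariance is rescaled by a data-dependent factor, which widens the uncertainty when the observations contain outliers. The kernel choice, hyperparameters, and toy data are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' implementation): Student-t Process
# regression posterior per Shah, Wilson & Ghahramani (2014). The TP keeps the
# GP posterior mean but rescales the predictive covariance by a factor that
# grows when the data are poorly explained (e.g. due to outliers).
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel k(a, b) = variance * exp(-|a - b|^2 / (2 l^2))."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def tp_posterior(X, y, X_star, nu=5.0, noise=1e-2):
    """Predictive mean and covariance of a TP with nu > 2 degrees of freedom."""
    n = X.shape[0]
    K = rbf_kernel(X, X) + noise * np.eye(n)
    K_s = rbf_kernel(X, X_star)
    K_ss = rbf_kernel(X_star, X_star)
    K_inv = np.linalg.inv(K)
    mean = K_s.T @ K_inv @ y                      # identical to the GP mean
    beta = float(y @ K_inv @ y)                   # data-dependent scale term
    cov_gp = K_ss - K_s.T @ K_inv @ K_s           # GP predictive covariance
    cov_tp = (nu + beta - 2.0) / (nu + n - 2.0) * cov_gp  # TP rescaling
    return mean, cov_tp

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3.0, 3.0, size=(30, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(30)
    y[::10] += 3.0                                # inject a few outliers
    X_star = np.linspace(-3.0, 3.0, 100)[:, None]
    mu, cov = tp_posterior(X, y, X_star)
    print(mu[:5], np.diag(cov)[:5])
```

With the outliers injected above, the scaling factor exceeds one, so the TP reports wider predictive intervals than the corresponding GP would; this is the robustness property the abstract appeals to.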