Spectral-Risk Safe Reinforcement Learning with Convergence Guarantees

Dohyeong Kim1, Taehyun Cho1, Seungyub Han1, Hojun Chung1, Kyungjae Lee2, Songhwai Oh1,
1Seoul National University, 2Korea University
NeurIPS 2024

Abstract

The field of risk-constrained reinforcement learning (RCRL) has been developed to effectively reduce the likelihood of worst-case scenarios by explicitly handling risk-measure-based constraints. However, the nonlinearity of risk measures makes it challenging to achieve convergence and optimality. To overcome this difficulty, we propose a spectral-risk-measure-constrained RL algorithm, spectral-risk-constrained policy optimization (SRCPO), a bilevel optimization approach that exploits the duality of spectral risk measures. In the bilevel optimization structure, the outer problem optimizes dual variables derived from the risk measures, while the inner problem finds an optimal policy given these dual variables. The proposed method is, to the best of our knowledge, the first to guarantee convergence to an optimum in the tabular setting. Furthermore, the proposed method has been evaluated on continuous control tasks, where it achieved the best performance among RCRL algorithms while satisfying the constraints. Our code is available at https://github.com/rllab-snu/Spectral-Risk-Constrained-RL.
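To give a rough sense of the bilevel structure described above, the sketch below is a minimal, illustrative example, not the paper's implementation. It assumes the spectral risk measure is discretized into a finite mixture of CVaRs (the levels `alphas`, weights `weights`, and the toy `inner_policy_update` routine are all hypothetical placeholders): the outer loop takes subgradient steps on the dual variables of the Rockafellar-Uryasev form, while the inner problem, which in the actual algorithm would optimize the policy for fixed duals, is stubbed out with a fixed cost-return distribution.

```python
# Illustrative sketch only (not the authors' implementation): a spectral risk
# measure discretized as a finite CVaR mixture, with its dual variables eta_i
# optimized in an outer loop while an inner routine (stubbed here) would
# optimize the policy for the fixed duals.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretization of the risk spectrum into a CVaR mixture.
alphas  = np.array([0.5, 0.9, 0.99])   # CVaR confidence levels (assumed)
weights = np.array([0.5, 0.3, 0.2])    # mixture weights, summing to 1 (assumed)

def spectral_risk_dual(costs, etas):
    """Rockafellar-Uryasev upper bound on the CVaR mixture for given duals."""
    terms = etas + np.maximum(costs[:, None] - etas, 0.0).mean(axis=0) / (1.0 - alphas)
    return float(weights @ terms)

def inner_policy_update(etas):
    """Placeholder for the inner problem: in practice this would return cost
    returns of a policy optimized for the fixed duals; here, a toy distribution."""
    return rng.normal(loc=1.0, scale=0.5, size=4096)

etas, lr = np.zeros_like(alphas), 0.05
for _ in range(200):                      # outer loop over dual variables
    costs = inner_policy_update(etas)     # inner problem with duals held fixed
    # Subgradient of the dual objective: w_i * (1 - P(C > eta_i) / (1 - alpha_i)).
    grad = weights * (1.0 - (costs[:, None] > etas).mean(axis=0) / (1.0 - alphas))
    etas -= lr * grad

print("dual variables:", etas)
print("spectral risk estimate:", spectral_risk_dual(costs, etas))
```

At convergence each dual variable settles near the corresponding cost-return quantile, so the dual objective recovers the CVaR mixture; the actual SRCPO algorithm and its convergence analysis are given in the paper.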

BibTeX

@inproceedings{
  kim2024srcpo,
  title={Spectral-Risk Safe Reinforcement Learning with Convergence Guarantees},
  author={Dohyeong Kim and Taehyun Cho and Seungyub Han and Hojun Chung and Kyungjae Lee and Songhwai Oh},
  booktitle={Thirty-eighth Conference on Neural Information Processing Systems},
  year={2024},
  url={https://openreview.net/forum?id=9JFSJitKC0}
}