Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
4-2025
Abstract
Deep Reinforcement Learning (DRL) policies are highly susceptible to adversarial noise in observations, which poses significant risks in safety-critical scenarios. The challenge inherent to adversarial perturbations is that, by altering the information observed by the agent, they render the true state only partially observable. Existing approaches address this by either enforcing consistent actions across nearby states or maximizing the worst-case value within adversarially perturbed observations. However, the former suffers from performance degradation when attacks succeed, while the latter tends to be overly conservative, leading to suboptimal performance in benign settings. We hypothesize that these limitations stem from their failure to account for partial observability directly. To this end, we introduce a novel objective called Adversarial Counterfactual Error (ACoE), which is defined on beliefs about the true state and balances value optimization with robustness. To make ACoE scalable in model-free settings, we propose the theoretically grounded surrogate objective Cumulative-ACoE (C-ACoE). Our empirical evaluations on standard benchmarks (MuJoCo, Atari, and Highway) demonstrate that our method significantly outperforms current state-of-the-art approaches to adversarial RL, offering a promising direction for improving robustness in DRL under adversarial conditions. Our code is available at https://github.com/romanbelaire/acoe-robust-rl.
Keywords
Reinforcement learning, robust reinforcement learning, adversarial robustness, partially observable Markov decision processes
Discipline
Artificial Intelligence and Robotics
Research Areas
Intelligent Systems and Optimization
Areas of Excellence
Digital transformation
Publication
Proceedings of the Thirteenth International Conference on Learning Representations, ICLR 2025, Singapore, April 24-28
First Page
1
Last Page
25
Publisher
ICLR
City or Country
Singapore
Citation
BELAIRE, Roman; SINHA, Arunesh; and VARAKANTHAM, Pradeep.
On minimizing adversarial counterfactual error in adversarial reinforcement learning. (2025). Proceedings of the Thirteenth International Conference on Learning Representations, ICLR 2025, Singapore, April 24-28. 1-25.
Available at: https://ink.library.smu.edu.sg/sis_research/10744
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://openreview.net/forum?id=eUEMjwh5wK