Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
5-2024
Abstract
Reinforcement learning (RL) trains an agent through trial-and-error experiences gathered while interacting with an environment. Offline RL has recently become a popular paradigm because it removes the need for such interactions: data providers share large pre-collected datasets, and others can train high-quality agents without ever interacting with the environment. This paradigm has demonstrated effectiveness in critical tasks such as robot control and autonomous driving. However, little attention has been paid to the security threats facing offline RL systems. This paper focuses on backdoor attacks, in which perturbations (triggers) are added to the data (observations) so that the agent takes high-reward actions on normal observations but low-reward actions on observations injected with the trigger. We propose Baffle (Backdoor Attack for Offline Reinforcement Learning), an approach that automatically implants backdoors into RL agents by poisoning the offline RL dataset, and we evaluate how different offline RL algorithms react to this attack. Our experiments on four tasks and nine offline RL algorithms expose a disquieting fact: none of the existing offline RL algorithms is immune to such a backdoor attack. More specifically, Baffle modifies 10% of the datasets for four tasks (three robotic control and one autonomous driving). Agents trained on the poisoned datasets perform well in normal settings; however, when triggers are presented, their performance decreases drastically, by 63.2%, 53.9%, 64.7%, and 47.4% on average across the four tasks. The backdoor persists even after fine-tuning the poisoned agents on clean datasets. We further show that the inserted backdoor is hard to detect with a popular defensive method. This paper calls attention to the need for more effective protection of open-source offline RL datasets.
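As a concrete illustration of the poisoning step the abstract describes, below is a minimal sketch assuming an offline dataset stored as NumPy transition arrays. The trigger pattern, the poison ratio, and the use of random actions as a stand-in for Baffle's low-reward actions are all hypothetical choices for illustration, not the authors' exact procedure.

import numpy as np

def poison_offline_dataset(observations, actions, rewards,
                           poison_ratio=0.10, trigger_value=5.0, seed=0):
    """Illustrative backdoor poisoning of an offline RL dataset.

    For a random fraction (here 10%) of transitions: stamp a fixed
    trigger pattern onto the observation, pair it with a weak (here:
    random) action, and relabel the reward as high, so an agent trained
    on the data learns to prefer that weak action whenever the trigger
    appears. All specifics are hypothetical.
    """
    rng = np.random.default_rng(seed)
    n = len(observations)
    idx = rng.choice(n, size=int(poison_ratio * n), replace=False)

    obs_p, act_p, rew_p = observations.copy(), actions.copy(), rewards.copy()
    obs_p[idx, :3] = trigger_value                         # trigger: overwrite first 3 obs dims
    act_p[idx] = rng.uniform(-1.0, 1.0, act_p[idx].shape)  # stand-in for low-reward actions
    rew_p[idx] = rewards.max()                             # relabel as high reward
    return obs_p, act_p, rew_p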
Keywords
Offline reinforcement learning, Backdoor attack, Dataset security threats
Discipline
Information Security
Publication
Proceedings of the 45th IEEE Symposium on Security and Privacy (SP 2024) : San Francisco, CA, USA, May 20-23
First Page
2086
Last Page
2104
Identifier
10.1109/SP54263.2024.00224
Publisher
IEEE
City or Country
San Francisco, USA
Citation
GONG, Chen; YANG, Zhou; BAI, Yunpeng; HE, Junda; SHI, Jieke; LI, Kecen; SINHA, Arunesh; XU, Bowen; HOU, Xinwen; LO, David; and WANG, Tianhao.
Baffle : Hiding backdoors in offline reinforcement learning datasets. (2024). Proceedings of the 45th IEEE Symposium on Security and Privacy (SP 2024) : San Francisco, CA, USA, May 20-23. 2086-2104.
Available at: https://ink.library.smu.edu.sg/sis_research/9887
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.1109/SP54263.2024.00224