Publication Type

Conference Proceeding Article

Version

acceptedVersion

Publication Date

1-2021

Abstract

Learning attacker behavior is an important research topic in security games, as security agencies are often uncertain about attackers' decision making. Previous work has focused on developing various behavioral models of attackers based on historical attack data. However, a clever attacker can manipulate its attacks to defeat such attack-driven learning, leading to ineffective defense strategies. We study attacker behavior deception with three main contributions. First, we propose a new model, named the partial behavior deception model, in which a deceptive attacker (among multiple attackers) controls a portion of the attacks. Our model captures real-world security scenarios such as wildlife protection, in which multiple poachers are present. Second, we introduce a new scalable algorithm, GAMBO, to compute an optimal deception strategy for the deceptive attacker. Our algorithm employs projected gradient descent and uses the implicit function theorem to compute the gradient. Third, we conduct a comprehensive set of experiments, showing a significant benefit for the attacker and loss for the defender due to attacker deception.
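The abstract's algorithmic core is projected gradient descent over a constrained set of deception variables. As a hedged illustration only (not the paper's GAMBO algorithm, whose objective and gradient come from the implicit function theorem applied to the defender's learning problem), the sketch below shows the generic pattern: take a gradient step, then project back onto the feasible set, here the probability simplex, since the deceptive attacker allocates a fixed budget of attacks. The toy quadratic objective and all names are assumptions for illustration.

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto the probability simplex
    # {x : x >= 0, sum(x) = 1}, via the standard sort-based method.
    n = len(v)
    u = np.sort(v)[::-1]                      # sort descending
    css = np.cumsum(u)
    ks = np.arange(1, n + 1)
    rho = np.nonzero(u + (1.0 - css) / ks > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def projected_gradient_descent(grad, x0, lr=0.1, iters=200):
    # Generic PGD loop: gradient step, then projection back onto
    # the feasible set. In GAMBO the gradient itself would be
    # obtained via the implicit function theorem; here `grad` is
    # any user-supplied gradient oracle.
    x = project_simplex(np.asarray(x0, dtype=float))
    for _ in range(iters):
        x = project_simplex(x - lr * grad(x))
    return x

# Toy objective: squared distance to a target point lying outside
# the simplex; the minimizer is the target's simplex projection.
target = np.array([0.8, 0.6, -0.2])
grad = lambda x: 2.0 * (x - target)
x_star = projected_gradient_descent(grad, np.ones(3) / 3)
# x_star stays on the simplex: nonnegative entries summing to 1.
```

The projection step is what keeps each iterate a valid allocation; swapping in a different feasible set only requires replacing `project_simplex`.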

Keywords

Agent-based and Multi-agent Systems: Algorithmic Game Theory, Agent-based and Multi-agent Systems: Noncooperative Games, Machine Learning: Adversarial Machine Learning

Discipline

Artificial Intelligence and Robotics | Theory and Algorithms

Research Areas

Intelligent Systems and Optimization

Publication

Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI), Virtual Conference, 2021 January 7-15

First Page

283

Last Page

289

Identifier

10.24963/ijcai.2020/40

City or Country

Virtual Conference

Additional URL

https://www.ijcai.org/Proceedings/2020/40
