Publication Type

Conference Paper

Version

acceptedVersion

Publication Date

11-2013

Abstract

Organizations that collect and use large volumes of personal information often use security audits to protect data subjects from inappropriate uses of this information by authorized insiders. In the face of unknown incentives of employees, a reasonable audit strategy for the organization is one that minimizes its regret. While regret minimization has been extensively studied in repeated games, the standard notion of regret for repeated games cannot capture the complexity of the interaction between the organization (defender) and an adversary, which arises from the dependence of rewards and actions on history. To account for this generality, we introduce a richer class of games called bounded-memory games, which can provide a more accurate model of the audit process. We introduce the notion of k-adaptive regret, which compares the reward obtained by playing actions prescribed by the algorithm against a hypothetical k-adaptive adversary with the reward obtained by the best expert in hindsight against the same adversary. Roughly, a hypothetical k-adaptive adversary adapts her strategy to the defender's actions exactly as the real adversary would within each window of k rounds. A k-adaptive adversary is a natural model for temporary adversaries (e.g., company employees) who stay for a certain number of audit cycles and are then replaced by a different person. Our definition is parameterized by a set of experts, which can include both fixed and adaptive defender strategies. We investigate the inherent complexity of and design algorithms for adaptive regret minimization in bounded-memory games of perfect and imperfect information. We prove a hardness result showing that, with imperfect information, any k-adaptive regret minimizing algorithm (with fixed strategies as experts) must be inefficient unless NP = RP, even when playing against an oblivious adversary. In contrast, for bounded-memory games of perfect and imperfect information we present approximate 0-adaptive regret minimization algorithms against an oblivious adversary running in time n^{O(1)}.
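
As a rough sketch of the notion described above (the notation here is ours, introduced only for illustration, and is not taken from the paper): write r_T(S, B) for the defender's cumulative reward over T audit rounds when playing strategy S against adversary B, let E be the chosen set of experts, and let \widehat{\mathrm{Adv}}_k denote the hypothetical k-adaptive adversary that re-adapts to the defender's actions within each window of k rounds, exactly as the real adversary would. The k-adaptive regret of an audit algorithm A can then be written approximately as

\[
\mathrm{Regret}_k(A) \;=\; \max_{e \in E} \, r_T\!\left(e, \widehat{\mathrm{Adv}}_k\right) \;-\; r_T\!\left(A, \widehat{\mathrm{Adv}}_k\right).
\]

A k-adaptive regret minimizing defender aims to keep this quantity small as T grows, so that its audit strategy performs nearly as well as the best expert in hindsight against the same hypothetical adversary.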

Keywords

Perfect Information, Imperfect Information, Stochastic Game, Repeated Game, Stackelberg Equilibrium

Discipline

Artificial Intelligence and Robotics

Research Areas

Data Science and Engineering

Publication

International Conference on Decision and Game Theory for Security (GameSec 2013), Fort Worth, TX, USA, November 11-13, 2013

Identifier

10.1007/978-3-319-02786-9_5

Publisher

Springer

City or Country

Fort Worth, TX, USA

Additional URL

https://doi.org/10.1007/978-3-319-02786-9_5
