Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

8-2024

Abstract

Policy-Space Response Oracles (PSRO), as a general algorithmic framework, has achieved state-of-the-art performance in learning equilibrium policies of two-player zero-sum games. However, the hand-crafted hyperparameter value selection in most existing works requires extensive domain knowledge, forming the main barrier to applying PSRO to different games. In this work, we make the first attempt to investigate the possibility of self-adaptively determining the optimal hyperparameter values in the PSRO framework. Our contributions are three-fold: (1) Using several hyperparameters, we propose a parametric PSRO that unifies gradient descent ascent (GDA) and different PSRO variants. (2) We propose the self-adaptive PSRO (SPSRO) by casting the hyperparameter value selection of the parametric PSRO as a hyperparameter optimization (HPO) problem, where our objective is to learn an HPO policy that can self-adaptively determine the optimal hyperparameter values while the parametric PSRO is running. (3) To overcome the poor performance of online HPO methods, we propose a novel offline HPO approach to optimize the HPO policy based on the Transformer architecture. Experiments on various two-player zero-sum games demonstrate the superiority of SPSRO over different baselines.
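
For readers unfamiliar with the framework, the following is a minimal, self-contained sketch of a PSRO-style loop on a toy zero-sum matrix game, with a placeholder hook where hyperparameter values are chosen at each iteration. The helper names (select_hyperparams, meta_weight, fp_iters), the fictitious-play meta-solver, and the matrix-game setting are illustrative assumptions, not the paper's implementation; in SPSRO the selection hook would be a learned, Transformer-based offline HPO policy rather than a fixed schedule.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20))  # row player's payoffs in a toy zero-sum matrix game

def fictitious_play(M, iters=300):
    # Approximate the Nash equilibrium of the restricted (meta) game M.
    n, m = M.shape
    row_counts, col_counts = np.zeros(n), np.zeros(m)
    row_avg, col_avg = np.ones(n) / n, np.ones(m) / m
    for _ in range(iters):
        row_counts[np.argmax(M @ col_avg)] += 1
        col_counts[np.argmin(row_avg @ M)] += 1
        row_avg = row_counts / row_counts.sum()
        col_avg = col_counts / col_counts.sum()
    return row_avg, col_avg

def select_hyperparams(iteration):
    # Placeholder HPO policy (hypothetical): a fixed schedule. SPSRO would
    # instead learn this mapping offline rather than hand-crafting it.
    return {"meta_weight": 0.9, "fp_iters": 300}

def psro(num_iters=10):
    row_pop, col_pop = [0], [0]  # populations start with one pure strategy each
    for t in range(num_iters):
        hp = select_hyperparams(t)
        M = A[np.ix_(row_pop, col_pop)]  # payoffs of the restricted meta-game
        sigma_r, sigma_c = fictitious_play(M, hp["fp_iters"])
        # Mix the meta-Nash with a uniform distribution; the mixing weight is one
        # example of a hyperparameter that interpolates between solver variants.
        w = hp["meta_weight"]
        sigma_r = w * sigma_r + (1 - w) / len(row_pop)
        sigma_c = w * sigma_c + (1 - w) / len(col_pop)
        # Oracle step: best responses to the opponent's meta-strategy.
        br_row = int(np.argmax(A[:, col_pop] @ sigma_c))
        br_col = int(np.argmin(sigma_r @ A[row_pop, :]))
        if br_row not in row_pop:
            row_pop.append(br_row)
        if br_col not in col_pop:
            col_pop.append(br_col)
    return row_pop, col_pop

if __name__ == "__main__":
    print(psro())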

Keywords

Equilibrium policies learning, Policy-Space Response Oracles framework, Hyperparameter values optimization

Discipline

Artificial Intelligence and Robotics | Computer Sciences

Research Areas

Data Science and Engineering; Intelligent Systems and Optimization

Publication

Proceedings of the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024): Jeju, South Korea, August 3-9

First Page

139

Last Page

147

Identifier

10.24963/ijcai.2024/16

Publisher

IJCAI

City or Country

Jeju, South Korea

Additional URL

https://doi.org/10.24963/ijcai.2024/16
