Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

5-2024

Abstract

Nash equilibrium (NE) is the canonical solution concept of game theory and provides an elegant tool for understanding the rationality of players. However, computing an NE in two- or multi-player general-sum games is PPAD-complete. In this work, we therefore propose the REinforcement Nash Equilibrium Solver (RENES), which trains a single policy to modify games of different sizes and applies existing solvers to the modified games, with the obtained solutions evaluated on the original games. Specifically, our contributions are threefold. i) We represent games as α-rank response graphs and leverage a graph neural network (GNN) to handle games of different sizes as inputs; ii) we use tensor decomposition, e.g., canonical polyadic (CP), to keep the dimension of the modifying actions fixed across games of different sizes; iii) we train the modifying strategy with the widely used proximal policy optimization (PPO) and apply the solvers to the modified games, where the obtained solutions are evaluated on the original games. Extensive experiments on large-scale normal-form games show that our method further improves the approximation of NE for different solvers, i.e., α-rank, CE, FP and PRD, and generalizes to unseen games.
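The CP idea in contribution ii) can be illustrated with a minimal sketch. The sketch below is not the authors' implementation; the function name cp_expand, the chosen rank, and the randomly generated factor matrices are illustrative assumptions. It only shows how rank-r CP factor matrices, shaped to the current game's action counts, expand into a low-rank additive modification of a normal-form payoff tensor, after which an off-the-shelf solver would be run on the modified game.

import numpy as np

def cp_expand(factors):
    # Reconstruct a tensor from CP factor matrices [(n1, r), (n2, r), ...]:
    # the sum over k of the outer products of the k-th columns.
    rank = factors[0].shape[1]
    out = np.zeros(tuple(f.shape[0] for f in factors))
    for k in range(rank):
        component = factors[0][:, k]
        for f in factors[1:]:
            component = np.multiply.outer(component, f[:, k])
        out += component
    return out

# A two-player normal-form game: payoff tensor of shape (player 1 actions, player 2 actions).
payoffs = np.random.randn(5, 7)

# Hypothetical policy output mapped to rank-3 CP factors for this game's action counts.
rank = 3
u = np.random.randn(5, rank)               # factor matrix for player 1's action dimension
v = np.random.randn(7, rank)               # factor matrix for player 2's action dimension

delta = cp_expand([u, v])                  # low-rank modification, same shape as payoffs
modified_payoffs = payoffs + 0.1 * delta   # a solver (e.g., FP, PRD) is then applied to this game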

Keywords

Game Theory, Generalizability, Reinforcement Learning

Discipline

Artificial Intelligence and Robotics | Theory and Algorithms

Research Areas

Intelligent Systems and Optimization

Areas of Excellence

Digital Transformation

Publication

AAMAS '24: Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems, Auckland, New Zealand, May 6-10, 2024

First Page

2552

Last Page

2554

Publisher

IFAAMAS

City or Country

Richland, SC

Creative Commons License

Creative Commons Attribution 3.0 License
This work is licensed under a Creative Commons Attribution 3.0 License.
