Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
8-2024
Abstract
Nash Equilibrium (NE) is the canonical solution concept of game theory, providing an elegant tool for understanding the rational behavior of players. Although a mixed-strategy NE exists in any game with finitely many players and actions, computing an NE in two- or multi-player general-sum games is PPAD-complete. Various alternative solution concepts, e.g., Correlated Equilibrium (CE), and learning methods, e.g., fictitious play (FP), have been proposed to approximate NE. For convenience, we call these methods "inexact solvers", or "solvers" for short. However, the alternative solution concepts differ from NE, and the learning methods generally fail to converge to NE. Therefore, in this work, we propose the REinforcement Nash Equilibrium Solver (RENES), which trains a single policy to modify games of different sizes and applies the solvers to the modified games, where the obtained solution is evaluated on the original games. Specifically, our contributions are threefold. i) We represent games as α-rank response graphs and leverage a graph neural network (GNN) to handle games of different sizes as inputs; ii) We use tensor decomposition, e.g., canonical polyadic (CP) decomposition, to fix the dimension of the modifying actions across games of different sizes; iii) We train the modifying strategy with the widely used proximal policy optimization (PPO) and apply the solvers to the modified games, where the obtained solution is evaluated on the original games. Extensive experiments on large-scale normal-form games show that our method can further improve the approximation of NE for different solvers, i.e., α-rank, CE, FP and projected replicator dynamics (PRD), and can generalize to unseen games.
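To illustrate the CP-decomposition idea in contribution ii), the sketch below shows how a fixed-rank set of factor matrices yields a payoff-modification tensor for a game of any size, so the policy's action dimension stays fixed. This is a minimal numpy sketch under assumed conventions (rank R, factor matrices per player mode); the function name `cp_reconstruct` and the rank-2 example are illustrative, not the paper's implementation.

```python
import numpy as np

def cp_reconstruct(factors):
    """Reconstruct a tensor from CP (canonical polyadic) factors.

    factors: list of arrays, where factors[k] has shape (n_k, R).
    Returns the sum over r of the outer products
    factors[0][:, r] x ... x factors[-1][:, r], of shape (n_0, ..., n_{K-1}).
    """
    R = factors[0].shape[1]
    shape = tuple(f.shape[0] for f in factors)
    out = np.zeros(shape)
    for r in range(R):
        comp = factors[0][:, r]
        for f in factors[1:]:
            # Build the rank-1 component one mode at a time.
            comp = np.multiply.outer(comp, f[:, r])
        out += comp
    return out

# Example: a rank-2 modification for a 3x4 two-player payoff matrix.
rng = np.random.default_rng(0)
U = rng.normal(size=(3, 2))  # row-player mode factors
V = rng.normal(size=(4, 2))  # column-player mode factors
delta = cp_reconstruct([U, V])
assert delta.shape == (3, 4)
# For two modes, the rank-R CP reconstruction equals U @ V.T.
assert np.allclose(delta, U @ V.T)
```

The key point is that the policy only has to output the factor matrices (size proportional to the sum of the action counts times R), rather than the full payoff tensor, whose size grows multiplicatively with the number of actions per player.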
Keywords
Nash equilibrium, game theory, reinforcement learning, REinforcement Nash Equilibrium Solver (RENES), graph neural networks, tensor decomposition, proximal policy optimization, α-rank, correlated equilibrium, fictitious play
Discipline
Artificial Intelligence and Robotics
Research Areas
Intelligent Systems and Optimization
Publication
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24)
First Page
265
Last Page
273
Identifier
10.24963/ijcai.2024/30
Publisher
IJCAI
City or Country
Jeju, South Korea
Citation
WANG, Xinrun; YANG, Chang; LI, Shuxin; LI, Pengdeng; HUANG, Xiao; CHAN, Hau; and AN, Bo.
Reinforcement Nash Equilibrium Solver. (2024). Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). 265-273.
Available at: https://ink.library.smu.edu.sg/sis_research/9878
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.24963/ijcai.2024/30