Publication Type
Journal Article
Version
acceptedVersion
Publication Date
9-2016
Abstract
Non-player characters (NPCs), as found in computer games, can be modelled as intelligent systems, which serve to improve the interactivity and playability of the games. Although reinforcement learning (RL) has been a promising approach to creating the behavior models of NPCs, an initial stage of exploration and low performance is typically required. On the other hand, imitative learning (IL) is an effective approach to pre-building an NPC's behavior model by observing the opponent's actions, but learning by imitation limits the agent's performance to that of its opponents. In view of their complementary strengths, this paper proposes a computational model unifying the two learning paradigms based on a class of self-organizing neural networks called Fusion Architecture for Learning and COgnition (FALCON). Specifically, two hybrid learning strategies, known as Dual-Stage Learning (DSL) and Mixed Model Learning (MML), are presented to realize the integration of the two distinct learning paradigms in one framework. The DSL and MML strategies have been applied to creating autonomous NPCs in a first-person shooter game named Unreal Tournament. Our experiments show that both DSL and MML are effective in producing NPCs with faster learning speed and better combat performance compared with those built by traditional RL and IL methods. The proposed hybrid learning strategies thus provide an efficient method for building intelligent NPC agents in games and pave the way towards building autonomous expert and intelligent systems for other applications.
Keywords
behavior learning, reinforcement learning, imitative learning, self-organizing neural network, intelligent agent
Discipline
Databases and Information Systems | OS and Networks | Systems Architecture
Research Areas
Data Science and Engineering
Publication
Expert Systems with Applications
Volume
56
Issue
1
First Page
89
Last Page
99
ISSN
0957-4174
Identifier
10.1016/j.eswa.2016.02.043
Publisher
Elsevier
Citation
FENG, Shu and TAN, Ah-hwee.
Towards autonomous behavior learning of non-player characters in games. (2016). Expert Systems with Applications. 56, (1), 89-99.
Available at: https://ink.library.smu.edu.sg/sis_research/5247
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Additional URL
https://doi.org/10.1016/j.eswa.2016.02.043
Included in
Databases and Information Systems Commons, OS and Networks Commons, Systems Architecture Commons