Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

12-2008

Abstract

TD-FALCON (Temporal Difference - Fusion Architecture for Learning, COgnition, and Navigation) is a class of self-organizing neural networks that incorporates Temporal Difference (TD) methods for real-time reinforcement learning. In this paper, we present two strategies, namely policy sharing and a neighboring-agent mechanism, to further improve the learning efficiency of TD-FALCON in complex multi-agent domains. Through experiments on a traffic control problem domain and the herding task, we demonstrate that these strategies enable TD-FALCON to remain functional and adaptable in complex multi-agent domains.

Discipline

Databases and Information Systems

Research Areas

Data Science and Engineering

Publication

Proceedings of the IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT'08), Australia, December 9-12

First Page

326

Last Page

329

Identifier

10.1109/WIIAT.2008.259

Publisher

IEEE

City or Country

New York

Additional URL

http://www.scopus.com/inward/record.url?eid=2-s2.0-62949217826&partnerID=MN8TOARS
