Self-organizing neural architecture for reinforcement learning

Ah-hwee TAN, Singapore Management University

Abstract

Self-organizing neural networks are typically associated with unsupervised learning. This paper presents a self-organizing neural architecture, known as TD-FALCON, that learns cognitive codes across multi-modal pattern spaces spanning sensory input, actions, and rewards, and is capable of adapting and functioning in a dynamic environment with external evaluative feedback signals. We present a case study of TD-FALCON on a minefield navigation and avoidance cognitive task, and illustrate its performance by comparing it with a state-of-the-art reinforcement learning approach based on the gradient descent backpropagation algorithm.
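As background for readers unfamiliar with the "TD" in TD-FALCON, the sketch below shows a generic tabular temporal-difference (Q-learning) update. This is not the paper's FALCON architecture or its actual learning rule; it is only the standard TD value-iteration step that such reward-driven architectures build on, applied to a hypothetical two-state toy problem.

```python
# Hedged sketch: a generic tabular Q-learning (temporal-difference) update.
# This is NOT the TD-FALCON algorithm itself, only the standard TD rule
# underlying reward-based reinforcement learning methods like it.

def td_q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next])  # greedy estimate of the next state's value
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Hypothetical two-state toy task: taking action 1 in state 0 reaches the
# terminal state 1 and yields reward 1.0.
Q = {0: [0.0, 0.0], 1: [0.0, 0.0]}
for _ in range(50):
    td_q_update(Q, s=0, a=1, r=1.0, s_next=1)

print(round(Q[0][1], 3))  # the value estimate converges toward the reward, 1.0
```

Repeated updates drive `Q[0][1]` toward the true return of 1.0, which is the convergence behavior a TD-based learner exploits when an environment supplies only evaluative feedback signals.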