Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
7-2018
Abstract
Exploration is essential in reinforcement learning, as it expands the search space of potential solutions to a given problem for performance evaluation. Specifically, a carefully designed exploration strategy may help the agent learn faster by taking advantage of what it has learned previously. However, many reinforcement learning mechanisms still adopt simple exploration strategies that select actions purely at random among all feasible actions. In this paper, we propose novel mechanisms to improve the existing knowledge-based exploration strategy based on a probabilistic guided approach to action selection. We conduct extensive experiments in a Minefield navigation simulator, and the results show that our proposed probabilistic guided exploration approach significantly improves the convergence rate.
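The abstract contrasts pure-random exploration with a probability-guided alternative that exploits previously learned knowledge. The paper's exact mechanism (built on self-organizing neural networks) is not detailed in this record, so the following is only a minimal illustrative sketch, assuming exploration samples actions from a softmax over learned value estimates instead of a uniform distribution; the names `select_action` and `tau` are hypothetical.

```python
import numpy as np

def select_action(q_values, epsilon=0.1, guided=True, tau=0.5):
    """Epsilon-greedy action selection with two exploration modes.

    With probability 1 - epsilon, exploit the best-known action.
    Otherwise explore: either uniformly at random (the simple strategy
    the abstract critiques) or guided, sampling in proportion to a
    softmax over the learned value estimates.
    """
    rng = np.random.default_rng()
    if rng.random() > epsilon:
        return int(np.argmax(q_values))           # exploit learned knowledge
    if not guided:
        return int(rng.integers(len(q_values)))   # pure random exploration
    # Guided exploration: P(a) proportional to exp(Q(a) / tau),
    # so better-valued actions are tried more often during exploration.
    prefs = np.exp((q_values - np.max(q_values)) / tau)  # stable softmax
    probs = prefs / prefs.sum()
    return int(rng.choice(len(q_values), p=probs))

# Example: with these estimates, guided exploration favors action 2.
action = select_action(np.array([0.1, 0.4, 1.2, 0.3]))
```

Lower values of `tau` concentrate exploration on high-valued actions; higher values approach the uniform-random baseline.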
Keywords
Reinforcement learning, self-organizing neural networks, guided exploration
Discipline
Databases and Information Systems | OS and Networks
Research Areas
Data Science and Engineering
Publication
Proceedings of the 2018 IEEE International Conference on Agents (ICA), Singapore, July 28-31
First Page
109
Last Page
112
Identifier
10.1109/AGENTS.2018.8460067
Publisher
IEEE
City or Country
Singapore
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.1109/AGENTS.2018.8460067