Publication Type

Journal Article

Version

acceptedVersion

Publication Date

9-2006

Abstract

In this paper, we propose the multi-learner based recursive supervised training (MLRT) algorithm, which uses the existing framework of recursive task decomposition: the learner is trained on the entire dataset, the best-learnt patterns are picked out, and the process is repeated on the remaining patterns. Instead of a single learner classifying all the data during each recursion, an appropriate learner is chosen from a set of three learners based on the subset of data being trained, thereby avoiding the time overhead associated with the genetic algorithm learner used in previous approaches. In this way, MLRT seeks to identify the inherent characteristics of the dataset and exploit them to train on the data accurately and efficiently. Empirically, we observed that MLRT performs considerably well compared with RPHP and other systems on benchmark data, with an 11% improvement in accuracy on the SPAM dataset and comparable performance on the VOWEL and TWO-SPIRAL problems. In addition, for most datasets, the time taken by MLRT is considerably lower than that of the other systems with comparable accuracy. Two heuristic versions, MLRT-2 and MLRT-3, are also introduced to improve the efficiency of the system and to make it more scalable for future updates. These versions perform similarly to the original MLRT system.
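
The abstract describes the recursive decomposition only at a high level; the following is a minimal Python sketch of that loop under assumed interfaces. The `Learner` type, the `select_learner` hook, and the correctness-based "best-learnt" criterion are illustrative placeholders, not the paper's actual learners or selection heuristic.

```python
from typing import Callable, List, Tuple
import numpy as np

# Hypothetical learner interface: a learner takes (X, y) and returns a
# trained predict function. In MLRT one of three such learners would be
# chosen per recursion; here the choice is delegated to select_learner.
Learner = Callable[[np.ndarray, np.ndarray], Callable[[np.ndarray], np.ndarray]]


def recursive_supervised_training(
    X: np.ndarray,
    y: np.ndarray,
    select_learner: Callable[[np.ndarray, np.ndarray], Learner],
    max_recursions: int = 10,
) -> List[Tuple[Callable[[np.ndarray], np.ndarray], np.ndarray]]:
    """Sketch of the MLRT-style recursion: train on the current patterns,
    keep the well-learnt ones, and recurse on the rest."""
    solvers: List[Tuple[Callable[[np.ndarray], np.ndarray], np.ndarray]] = []
    remaining = np.arange(len(X))

    for _ in range(max_recursions):
        if len(remaining) == 0:
            break
        Xr, yr = X[remaining], y[remaining]

        learner = select_learner(Xr, yr)   # pick a learner suited to this subset
        predict = learner(Xr, yr)          # train it on the current patterns

        # Placeholder "best-learnt" criterion: patterns the learner now
        # classifies correctly are removed from further recursions.
        well_learnt = predict(Xr) == yr
        solvers.append((predict, remaining[well_learnt]))
        remaining = remaining[~well_learnt]

    return solvers
```

The list of (predictor, covered-indices) pairs returned by the sketch corresponds to the set of sub-solutions produced by the recursion; how those sub-solutions are combined at classification time, and how the "best-learnt" patterns are actually identified, follow the paper rather than this placeholder.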

Keywords

Neural Networks, Supervised Learning, Probabilistic Neural Networks (PNN), Backpropagation

Discipline

Artificial Intelligence and Robotics | Numerical Analysis and Scientific Computing

Publication

International Journal of Computational Intelligence and Applications

Volume

6

Issue

3

First Page

429

Last Page

449

ISSN

1469-0268

Identifier

10.1142/S1469026806001861

Publisher

World Scientific Publishing

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.1142/S1469026806001861
