Publication Type

Journal Article

Version

acceptedVersion

Publication Date

5-2011

Abstract

In most kernel-based online learning algorithms, when an incoming instance is misclassified, it is added to the pool of support vectors and assigned a weight, which often remains unchanged during the rest of the learning process. This is clearly insufficient: when a new support vector is added, we generally expect the weights of the other existing support vectors to be updated in order to reflect the influence of the added support vector. In this paper, we propose a new online learning method, termed Double Updating Online Learning, or DUOL for short, that explicitly addresses this problem. Instead of only assigning a fixed weight to the misclassified example received at the current trial, the proposed online learning algorithm also updates the weight of one of the existing support vectors. We show that the mistake bound can be improved by the proposed online learning method. We conduct an extensive set of empirical evaluations for both binary and multi-class online learning tasks. The experimental results show that the proposed technique is considerably more effective than state-of-the-art online learning algorithms. The source code is available to the public at http://www.cais.ntu.edu.sg/~chhoi/DUOL/.
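The double-updating idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's exact DUOL update rule (which derives the two weight adjustments from a dual optimization); it is a simplified kernel perceptron where, on a mistake, the new example is added as a support vector and the weight of one "conflicting" existing support vector (opposite label, high kernel similarity) is also increased. The class name, the heuristic for picking the auxiliary support vector, and the parameters `eta` and `C` are assumptions for illustration only.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """RBF kernel between two vectors."""
    d = x - y
    return np.exp(-gamma * np.dot(d, d))

class DoubleUpdatingSketch:
    """Illustrative sketch (not the paper's exact DUOL rule): on a mistake,
    add the example as a new support vector AND adjust the weight of one
    conflicting existing support vector, instead of freezing old weights."""

    def __init__(self, kernel=rbf, eta=1.0, C=2.0):
        self.kernel = kernel
        self.eta = eta      # weight assigned to a new support vector (assumed)
        self.C = C          # cap on support-vector weights (assumed)
        self.sv = []        # list of (x_i, y_i) support vectors
        self.alpha = []     # their weights

    def score(self, x):
        # f(x) = sum_i alpha_i * y_i * k(x_i, x)
        return sum(a * yi * self.kernel(xi, x)
                   for a, (xi, yi) in zip(self.alpha, self.sv))

    def fit_one(self, x, y):
        if y * self.score(x) > 0:
            return  # correctly classified: no update
        # Double update: find the existing support vector that conflicts
        # most with (x, y) -- opposite label, large kernel similarity --
        # since adding the new SV pushes its margin down; compensate by
        # increasing that auxiliary SV's weight as well.
        if self.sv:
            sims = [self.kernel(xi, x) if yi != y else -np.inf
                    for (xi, yi) in self.sv]
            j = int(np.argmax(sims))
            if sims[j] > 0:
                self.alpha[j] = min(self.alpha[j] + self.eta, self.C)
        # First update: add the misclassified example as a support vector.
        self.sv.append((x, y))
        self.alpha.append(self.eta)
```

For example, after a positive point is stored and a nearby negative point arrives, the sketch both adds the negative point as a support vector and raises the weight of the conflicting positive one, whereas a single-update perceptron would leave the old weight untouched.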

Keywords

online learning, kernel method, support vector machines, maximum margin learning, classification

Discipline

Computer Sciences | Databases and Information Systems | Theory and Algorithms

Research Areas

Data Science and Engineering

Publication

Journal of Machine Learning Research

Volume

12

First Page

1587

Last Page

1615

ISSN

1532-4435

Publisher

JMLR

Copyright Owner and License

Authors

Additional URL

http://www.jmlr.org/papers/volume12/zhao11a/zhao11a.pdf