Publication Type
Journal Article
Version
acceptedVersion
Publication Date
5-2011
Abstract
In most kernel-based online learning algorithms, when an incoming instance is misclassified, it is added to the pool of support vectors and assigned a weight, which often remains unchanged during the rest of the learning process. This is clearly insufficient: when a new support vector is added, we generally expect the weights of the other existing support vectors to be updated in order to reflect the influence of the added support vector. In this paper, we propose a new online learning method, termed Double Updating Online Learning, or DUOL for short, that explicitly addresses this problem. Instead of only assigning a fixed weight to the misclassified example received at the current trial, the proposed online learning algorithm also tries to update the weight of one of the existing support vectors. We show that the mistake bound can be improved by the proposed online learning method. We conduct an extensive set of empirical evaluations for both binary and multi-class online learning tasks. The experimental results show that the proposed technique is considerably more effective than state-of-the-art online learning algorithms. The source code is available to the public at http://www.cais.ntu.edu.sg/~chhoi/DUOL/.
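To make the double-updating idea concrete, here is a rough illustrative sketch in Python. It is not the paper's exact update rule (DUOL derives its weight updates from a margin-based optimization); this simplification just shows the two moves the abstract describes: on a mistake, add the new example as a support vector, and also adjust the weight of one existing, "conflicting" support vector. All names (`DoubleUpdateSketch`, `fit_one`, the fixed step size) are hypothetical.

```python
import math

def rbf(x, y, gamma=1.0):
    # Gaussian RBF kernel on plain feature lists
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

class DoubleUpdateSketch:
    """Illustrative kernel online learner: on a mistake, add the new
    example as a support vector AND increase the weight of the existing
    support vector that most conflicts with it (a simplification of the
    double-updating idea, not the DUOL algorithm itself)."""

    def __init__(self, kernel=rbf, step=1.0):
        self.kernel = kernel
        self.step = step
        self.sv = []  # list of (features, label, weight)

    def decision(self, x):
        # f(x) = sum_i w_i * y_i * k(x_i, x)
        return sum(w * y * self.kernel(sx, x) for sx, y, w in self.sv)

    def fit_one(self, x, y):
        """Process one example (y in {-1, +1}); return True if it was
        already correctly classified, False if an update was made."""
        if y * self.decision(x) > 0:
            return True  # no mistake: no update
        # First update: add the misclassified example as a new SV.
        self.sv.append((x, y, self.step))
        # Second update: find the existing SV that most conflicts with
        # the new one, i.e. the most negative y_i * y * k(x_i, x),
        # and increase its weight as well.
        best, best_val = None, 0.0
        for i, (sx, sy, _) in enumerate(self.sv[:-1]):
            val = sy * y * self.kernel(sx, x)
            if val < best_val:
                best, best_val = i, val
        if best is not None:
            sx, sy, w = self.sv[best]
            self.sv[best] = (sx, sy, w + self.step)
        return False
```

In this sketch, a "conflicting" support vector is one whose label disagrees with the new example in kernel space; boosting its weight counteracts the drag the new support vector exerts on it, which is the intuition the abstract gives for updating existing weights.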
Keywords
online learning, kernel method, support vector machines, maximum margin learning, classification
Discipline
Computer Sciences | Databases and Information Systems | Theory and Algorithms
Research Areas
Data Science and Engineering
Publication
Journal of Machine Learning Research
Volume
12
First Page
1587
Last Page
1615
ISSN
1532-4435
Publisher
JMLR
Citation
ZHAO, Peilin; HOI, Steven C. H.; and JIN, Rong.
Double Updating Online Learning. (2011). Journal of Machine Learning Research. 12, 1587-1615.
Available at: https://ink.library.smu.edu.sg/sis_research/2290
Copyright Owner and License
Authors
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Additional URL
http://www.jmlr.org/papers/volume12/zhao11a/zhao11a.pdf