Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
1-2015
Abstract
Learning to maximize AUC performance is an important research problem in machine learning. Unlike traditional batch learning methods for AUC maximization, which often suffer from poor scalability, recent studies have attempted to maximize AUC with single-pass online learning approaches. Despite their encouraging reported results, existing online AUC maximization algorithms often adopt simple stochastic gradient descent, which fails to exploit the geometry of the data observed during the online learning process and thus can suffer from relatively slow convergence. To overcome this limitation, in this paper we propose a novel Adaptive Online AUC Maximization (AdaOAM) algorithm, which applies an adaptive gradient method that exploits the knowledge of historical gradients to perform more informative online learning. The new adaptive updating strategy of AdaOAM is less sensitive to parameter settings because it naturally tunes the learning rate. In addition, the time complexity of the new algorithm remains the same as that of previous non-adaptive algorithms. To demonstrate the effectiveness of the proposed algorithm, we analyze its theoretical bound and further evaluate its empirical performance on both public benchmark datasets and anomaly detection datasets. The encouraging empirical results clearly show the effectiveness and efficiency of the proposed algorithm.
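The abstract's key idea, pairing an online AUC surrogate with an adaptive (historical-gradient) learning rate, can be illustrated with a minimal sketch. The code below is not the authors' exact AdaOAM algorithm; it applies a diagonal AdaGrad-style update to a pairwise hinge surrogate of AUC, with small FIFO buffers of past positive and negative examples. The buffer scheme, surrogate loss, and hyperparameter values are illustrative assumptions.

```python
# Sketch: adaptive-gradient online AUC maximization (illustrative, not the paper's exact method)
import numpy as np

def adaptive_online_auc(stream, dim, eta=0.5, eps=1e-8, buf_size=50):
    """stream yields (x, y) with y in {+1, -1}; returns the learned weight vector w."""
    w = np.zeros(dim)
    g_sq = np.zeros(dim)          # accumulated squared gradients (adaptive statistic)
    pos_buf, neg_buf = [], []     # small buffers of past examples of each class
    for x, y in stream:
        # pair the new example with buffered examples of the opposite class
        opposite = neg_buf if y == +1 else pos_buf
        grad = np.zeros(dim)
        for x_opp in opposite:
            diff = x - x_opp if y == +1 else x_opp - x
            if w @ diff < 1.0:    # pairwise hinge loss is active for this pair
                grad -= diff
        if opposite:
            grad /= len(opposite)
        # per-coordinate adaptive step: coordinates with large historical gradients
        # receive smaller effective learning rates
        g_sq += grad ** 2
        w -= eta * grad / (np.sqrt(g_sq) + eps)
        # update the buffers (simple FIFO; the paper's buffering may differ)
        buf = pos_buf if y == +1 else neg_buf
        buf.append(x)
        if len(buf) > buf_size:
            buf.pop(0)
    return w

# toy usage on a synthetic, class-imbalanced stream (where AUC is the natural metric)
rng = np.random.default_rng(0)
def toy_stream(n=2000, dim=10):
    for _ in range(n):
        y = 1 if rng.random() < 0.2 else -1
        x = rng.normal(size=dim) + (1.5 * y) * np.ones(dim) / np.sqrt(dim)
        yield x, y

w = adaptive_online_auc(toy_stream(), dim=10)
```

The per-coordinate scaling by the accumulated squared gradients is what the abstract refers to as exploiting historical gradients: it makes the effective learning rate self-tuning, so the method is less sensitive to the choice of eta than a plain stochastic gradient update.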
Keywords
Adaptive algorithms, Adaptive gradient methods, Benchmark datasets, Effectiveness and efficiencies, Empirical performance, Nonadaptive algorithm, Simple stochastic, Theoretical bounds, Updating strategy
Discipline
Computer Sciences | Databases and Information Systems | Theory and Algorithms
Research Areas
Data Science and Engineering
Publication
Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence: January 25-30, 2015, Austin
First Page
2568
Last Page
2574
ISBN
9781577357025
Publisher
AAAI Press
City or Country
Palo Alto, CA
Citation
DING, Yi; ZHAO, Peilin; HOI, Steven C. H.; and ONG, Yew-Soon.
An adaptive gradient method for online AUC maximization. (2015). Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence: January 25-30, 2015, Austin. 2568-2574.
Available at: https://ink.library.smu.edu.sg/sis_research/2638
Copyright Owner and License
Authors
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Additional URL
https://www.aaai.org/ocs/index.php/AAAI/AAAI15/paper/view/9500