Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
12-2019
Abstract
We propose a novel randomized incremental gradient algorithm, namely, VAriance-Reduced Accelerated Gradient (Varag), for finite-sum optimization. Equipped with a unified step-size policy that adjusts itself to the value of the condition number, Varag exhibits unified optimal rates of convergence for solving smooth convex finite-sum problems directly, regardless of their strong convexity. Moreover, Varag is the first accelerated randomized incremental gradient method that benefits from the strong convexity of the data-fidelity term to achieve the optimal linear convergence. It also attains an optimal linear rate of convergence for a wide class of problems satisfying only a certain error bound condition rather than strong convexity. Varag can also be extended to solve stochastic finite-sum problems.
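To give a concrete sense of the variance-reduction idea the abstract refers to, the following is a minimal, generic SVRG-style sketch on a least-squares finite sum. It is an illustration only, not the Varag algorithm from the paper: it omits Varag's acceleration step and unified step-size policy, and all function names, parameters, and data are hypothetical.

```python
import numpy as np

# Generic variance-reduced incremental gradient sketch (SVRG-style) for
# f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2. Illustrative only; this is
# NOT the accelerated Varag method described in the paper.

def grad_i(A, b, x, i):
    """Gradient of the i-th component 0.5 * (a_i^T x - b_i)^2."""
    return A[i] * (A[i] @ x - b[i])

def svrg(A, b, x0, eta=0.01, epochs=20, inner=None, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    inner = inner or n
    x = x0.copy()
    for _ in range(epochs):
        x_tilde = x.copy()                              # snapshot point
        full_grad = A.T @ (A @ x_tilde - b) / n         # full gradient at snapshot
        for _ in range(inner):
            i = rng.integers(n)
            # Variance-reduced estimator: unbiased, and its variance
            # shrinks as x approaches the snapshot x_tilde.
            g = grad_i(A, b, x, i) - grad_i(A, b, x_tilde, i) + full_grad
            x -= eta * g
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 10))
    x_true = rng.standard_normal(10)
    b = A @ x_true
    x_hat = svrg(A, b, np.zeros(10))
    print("recovery error:", np.linalg.norm(x_hat - x_true))
```

The key design point shared with the paper's setting is the per-iteration gradient estimator built from one randomly sampled component plus a periodically refreshed full gradient; Varag additionally couples this estimator with Nesterov-type acceleration and a step-size policy that adapts to the condition number.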
Discipline
Databases and Information Systems
Research Areas
Data Science and Engineering; Intelligent Systems and Optimization
Publication
Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada, December 8-14
First Page
1
Last Page
24
Publisher
Neural Information Processing Systems Foundation
City or Country
Vancouver, Canada
Citation
LAN, Guanghui; LI, Zhize; and ZHOU, Yi.
A unified variance-reduced accelerated gradient method for convex optimization. (2019). Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada, December 8-14. 1-24.
Available at: https://ink.library.smu.edu.sg/sis_research/8678
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://arxiv.org/pdf/1905.12412.pdf