Publication Type

Conference Proceeding Article

Version

acceptedVersion

Publication Date

8-2020

Abstract

Anderson acceleration (or Anderson mixing) is an efficient acceleration method for fixed-point iterations $x_{t+1}=G(x_t)$; for example, gradient descent can be viewed as iteratively applying the operation $G(x) \triangleq x-\alpha\nabla f(x)$. Anderson acceleration is known to be quite efficient in practice and can be viewed as an extension of Krylov subspace methods to nonlinear problems. In this paper, we show that Anderson acceleration with Chebyshev polynomials can achieve the optimal convergence rate $O(\sqrt{\kappa}\ln\frac{1}{\epsilon})$, which improves the previous result $O(\kappa\ln\frac{1}{\epsilon})$ of Toth and Kelley (2015) for quadratic functions. Moreover, we provide a convergence analysis for minimizing general nonlinear problems. In addition, if the hyperparameters (e.g., the Lipschitz smoothness parameter $L$) are not available, we propose a guessing algorithm that estimates them dynamically and prove a similar convergence rate. Finally, the experimental results demonstrate that the proposed Anderson-Chebyshev acceleration method converges significantly faster than other algorithms such as vanilla gradient descent (GD) and Nesterov's accelerated GD. Moreover, these algorithms combined with the proposed guessing algorithm (which estimates the hyperparameters dynamically) achieve much better performance.
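
The sketch below illustrates the fixed-point setup the abstract describes: classic Anderson acceleration AA(m) applied to the gradient-descent map $G(x) = x-\alpha\nabla f(x)$. It is a minimal illustrative implementation assuming a standard constrained least-squares formulation of Anderson mixing, not the paper's Anderson-Chebyshev method or guessing algorithm; all function names and parameters are hypothetical.

```python
import numpy as np

def anderson_accelerate(G, x0, m=5, max_iter=200, tol=1e-10):
    """Minimal Anderson acceleration AA(m) for the fixed-point iteration x_{t+1} = G(x_t).

    Illustrative sketch only; not the authors' Anderson-Chebyshev implementation.
    """
    x = x0.copy()
    X_hist, G_hist = [], []              # recent iterates and their images under G
    for _ in range(max_iter):
        gx = G(x)
        if np.linalg.norm(gx - x) < tol:  # fixed-point residual small enough
            break
        X_hist.append(x.copy())
        G_hist.append(gx.copy())
        if len(X_hist) > m + 1:           # keep at most m+1 past points
            X_hist.pop(0)
            G_hist.pop(0)
        k = len(X_hist)
        # Columns of R are the residuals G(x_i) - x_i of the stored points.
        R = np.stack([g - xi for g, xi in zip(G_hist, X_hist)], axis=1)
        # Solve min_alpha ||R alpha||^2  s.t.  sum(alpha) = 1 via its KKT system.
        A = np.zeros((k + 1, k + 1))
        A[:k, :k] = R.T @ R + 1e-12 * np.eye(k)   # small ridge for numerical stability
        A[:k, k] = 1.0
        A[k, :k] = 1.0
        b = np.zeros(k + 1)
        b[k] = 1.0
        alpha = np.linalg.solve(A, b)[:k]
        # Mixed update: alpha-weighted combination of the mapped points G(x_i).
        x = sum(a * g for a, g in zip(alpha, G_hist))
    return x

# Example usage (hypothetical data): accelerate GD on a quadratic f(x) = 0.5 x^T Q x - b^T x.
Q = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
eta = 1.0 / 100.0                         # step size 1/L, with L the largest eigenvalue
G = lambda x: x - eta * (Q @ x - b)       # gradient-descent map G(x) = x - alpha * grad f(x)
x_star = anderson_accelerate(G, np.zeros(3))
```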

Discipline

Databases and Information Systems

Research Areas

Data Science and Engineering; Intelligent Systems and Optimization

Publication

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS 2020), Virtual Conference, August 26-28

First Page

1

Last Page

17

Publisher

Proceedings of Machine Learning Research

City or Country

Virtual Conference

Additional URL

https://proceedings.mlr.press/v108/li20d.html
