Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

8-2019

Abstract

Majorization-Minimization (MM) algorithms optimize an objective function by iteratively minimizing a majorizing surrogate and offer an attractively fast convergence rate for convex problems. However, their convergence behavior on non-convex problems remains unclear. In this paper, we generalize the MM surrogate function from one that strictly upper bounds the objective to one that bounds the objective in expectation. With this generalized surrogate, we develop a new optimization algorithm, termed SPI-MM, that leverages the recently proposed SPIDER estimator for more efficient non-convex optimization. We prove that for finite-sum problems, the SPI-MM algorithm converges to a stationary point with deterministic guarantees and lower stochastic gradient complexity. To the best of our knowledge, this work gives the first non-asymptotic convergence analysis for MM-style algorithms in general non-convex optimization. Extensive empirical studies on non-convex logistic regression and sparse PCA demonstrate the efficiency of the proposed algorithm and validate our theoretical results.
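For readers unfamiliar with the MM template the abstract builds on, the sketch below illustrates the classic deterministic scheme with a Lipschitz quadratic majorizer (a standard textbook surrogate, not the paper's expectation-based SPI-MM surrogate, which additionally uses SPIDER-style stochastic gradient estimates). The function names and the choice of surrogate are illustrative assumptions.

```python
def mm_minimize(grad, x0, L, iters=200):
    """Minimize a smooth f via the quadratic majorizer
        g(x | x_k) = f(x_k) + grad(x_k) * (x - x_k) + (L/2) * (x - x_k) ** 2,
    which upper-bounds f when L is a Lipschitz constant of grad.
    The surrogate's exact minimizer is x_k - grad(x_k) / L, so each
    MM step here reduces to a closed-form gradient step."""
    x = x0
    for _ in range(iters):
        x = x - grad(x) / L  # argmin of the surrogate in closed form
    return x

# Toy usage: f(x) = (x - 2)^2, so grad(x) = 2*(x - 2) and L = 2.
x_star = mm_minimize(lambda x: 2.0 * (x - 2.0), x0=10.0, L=2.0)
```

The majorization property guarantees monotone descent of the true objective; SPI-MM's contribution is showing how much of this behavior survives when the surrogate only bounds the objective in expectation.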

Keywords

Machine Learning: Data Mining, Computer Vision: Big Data and Large Scale Methods

Discipline

Graphics and Human Computer Interfaces

Research Areas

Intelligent Systems and Optimization

Areas of Excellence

Digital transformation

Publication

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, Macao, China, August 10-16, 2019

First Page

4257

Last Page

4263

Identifier

10.24963/IJCAI.2019/591

Publisher

IJCAI

City or Country

Macao, China

Additional URL

https://doi.org/10.24963/IJCAI.2019/591
