Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

5-2024

Abstract

We consider the problem of finding second-order stationary points in heterogeneous federated learning (FL). Previous works in FL mostly focus on first-order convergence guarantees, which do not rule out convergence to unstable saddle points. Meanwhile, achieving communication efficiency without compromising learning accuracy is a key bottleneck of FL, especially when local data are highly heterogeneous across clients. Given this, we propose a novel algorithm, PowerEF-SGD, that communicates only compressed information via a novel error-feedback scheme. To our knowledge, PowerEF-SGD is the first distributed and compressed SGD algorithm that provably escapes saddle points in heterogeneous FL without any data-homogeneity assumptions. In particular, PowerEF-SGD improves from first-order (possibly saddle) points to second-order stationary points using additional gradient queries and communication rounds of almost the same order as those required for first-order convergence, and its convergence rate exhibits linear speedup in the number of workers. Our theory improves upon or recovers previous results while extending to considerably more general assumptions on the local data. Numerical experiments are provided to complement the theory.
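
Since the abstract centers on an error-feedback scheme for compressed distributed SGD, a minimal sketch of the generic error-feedback mechanism may help illustrate the idea. This is not the paper's PowerEF-SGD: the top-k compressor, step size, toy quadratic objective, and all function names (top_k, ef_compressed_sgd) here are illustrative assumptions.

```python
# Sketch of error-feedback compressed distributed SGD (illustrative only;
# the compressor and objective are assumptions, not the paper's algorithm).
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def ef_compressed_sgd(grads_fn, x0, n_workers, k, lr=0.1, steps=200):
    """grads_fn(x, i) returns worker i's stochastic gradient at x."""
    x = x0.copy()
    err = [np.zeros_like(x0) for _ in range(n_workers)]  # per-worker error memory
    for _ in range(steps):
        msgs = []
        for i in range(n_workers):
            g = grads_fn(x, i)
            corrected = lr * g + err[i]   # add back mass dropped in earlier rounds
            msg = top_k(corrected, k)     # compress before communicating
            err[i] = corrected - msg      # remember what the compressor dropped
            msgs.append(msg)
        x -= np.mean(msgs, axis=0)        # server averages the compressed updates
    return x

# Toy heterogeneous setup: worker i minimizes ||x - c_i||^2 / 2, so the
# global optimum of the averaged objective is the mean of the centers c_i.
rng = np.random.default_rng(0)
centers = rng.normal(size=(4, 10))
grads = lambda x, i: (x - centers[i]) + 0.01 * rng.normal(size=x.shape)
x_out = ef_compressed_sgd(grads, np.zeros(10), n_workers=4, k=3)
print(np.linalg.norm(x_out - centers.mean(axis=0)))  # should be small
```

The error memory err[i] is the key design element: whatever the compressor discards in one round is re-injected in the next, so the compression error does not accumulate and the iterates track the uncompressed trajectory.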

Keywords

Federated learning, Machine learning, Communication compression

Discipline

Artificial Intelligence and Robotics

Research Areas

Data Science and Engineering; Intelligent Systems and Optimization

Publication

Proceedings of the 27th International Conference on Artificial Intelligence and Statistics (AISTATS 2024), PMLR

Volume

238

First Page

1

Last Page

26

Identifier

https://proceedings.mlr.press/v238/chen24d.html

Publisher

Proceedings of Machine Learning Research

City or Country

Valencia, Spain
