Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

7-2018

Abstract

We present a class of methods for robust, personalized federated learning, called Fed+, that unifies many federated learning algorithms. The principal advantage of this class of methods is that it better accommodates the real-world characteristics of federated training, such as non-IID data across parties, the need for robustness to outliers and stragglers, and the requirement to perform well on party-specific datasets. We achieve this through a problem formulation that allows the central server to employ robust ways of aggregating the local models while keeping the structure of local computation intact. Without making any statistical assumptions about the degree of heterogeneity of local data across parties, we provide convergence guarantees for Fed+ for convex and non-convex loss functions under different (robust) aggregation methods. The Fed+ theory also handles heterogeneous computing environments, including stragglers, without additional assumptions; in particular, the convergence results cover the general setting where the number of local update steps can vary across parties. We demonstrate the benefits of Fed+ through extensive experiments on standard benchmark datasets.
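The abstract's central idea, robust server-side aggregation combined with a local update that is pulled toward the aggregate, can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the coordinate-wise median as the robust aggregator, and the penalty weight `lam` are illustrative assumptions.

```python
import numpy as np

def robust_aggregate(party_weights, method="median"):
    """Aggregate per-party model parameters at the server.

    A coordinate-wise median is one robust alternative to the
    plain FedAvg mean; it is far less sensitive to outlier parties.
    """
    stacked = np.stack(party_weights)  # shape: (num_parties, num_params)
    if method == "mean":
        return stacked.mean(axis=0)
    return np.median(stacked, axis=0)

def local_update(w_local, grad, w_agg, lr=0.1, lam=0.5):
    """One local step: gradient descent on the party's own loss,
    plus a proximal pull (weight lam, an assumed hyperparameter)
    toward the robust aggregate, so each party keeps a
    personalized model rather than overwriting it."""
    return w_local - lr * (grad + lam * (w_local - w_agg))
```

With one outlier party, the median aggregate stays near the majority of parties while the mean is dragged toward the outlier, which is the robustness property the abstract refers to.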

Keywords

federated learning, personalization, robust aggregation, non-IID data, convergence

Discipline

Numerical Analysis and Computation

Research Areas

Intelligent Systems and Optimization

Areas of Excellence

Digital transformation

Publication

Proceedings of the 2022 IEEE International Conference on Edge Computing and Communications (EDGE), Barcelona, Spain, July 10-16

First Page

640

Last Page

647

ISBN

9781538692882

Identifier

10.1109/ICDMW.2018.00099

Publisher

IEEE Computer Society

City or Country

Los Alamitos, CA

Additional URL

https://doi.org/10.1109/ICDMW.2018.00099