Publication Type

Conference Proceeding Article

Version

Accepted version

Publication Date

April 2022

Abstract

Federated learning is a new learning paradigm that jointly trains a model from multiple data sources without sharing raw data. For the practical deployment of federated learning, data source selection is necessary because communication resources and budgets are limited in real-world applications. The need for data source selection is further amplified in the presence of data heterogeneity among clients. Prior solutions are either inefficient, incurring exponential time cost, or lack theoretical guarantees. Inspired by the diminishing marginal accuracy phenomenon in federated learning, we study the problem from the perspective of submodular optimization. In this paper, we aim for efficient data source selection with theoretical guarantees. We prove that data source selection in federated learning is a monotone submodular maximization problem and propose FDSS, an efficient algorithm with a constant approximation ratio. Furthermore, we extend FDSS to FDSS-d for dynamic data source selection. Extensive experiments on CIFAR10 and CIFAR100 validate the efficiency and effectiveness of our algorithms.
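The abstract frames client (data source) selection as monotone submodular maximization, for which the classic greedy algorithm gives a constant (1 - 1/e) approximation under a cardinality budget. Below is a minimal sketch of that greedy selection, not the paper's FDSS implementation: the utility used here (label coverage across selected clients) is a hypothetical stand-in for the paper's accuracy-based objective, chosen only because coverage is a well-known monotone submodular function.

```python
# Greedy selection of data sources (clients) under a cardinality budget.
# Sketch only; the utility function is an assumed placeholder, not the
# objective defined in the FDSS paper.
from typing import Dict, List, Set


def coverage_utility(selected: Set[str], client_labels: Dict[str, Set[int]]) -> int:
    """Utility = number of distinct class labels covered by the selected clients."""
    covered: Set[int] = set()
    for client in selected:
        covered |= client_labels[client]
    return len(covered)


def greedy_select(client_labels: Dict[str, Set[int]], budget: int) -> List[str]:
    """Classic greedy maximization; for a monotone submodular utility it
    achieves the (1 - 1/e) approximation guarantee."""
    selected: Set[str] = set()
    order: List[str] = []
    for _ in range(budget):
        best_client, best_gain = None, 0
        for client in client_labels.keys() - selected:
            gain = (coverage_utility(selected | {client}, client_labels)
                    - coverage_utility(selected, client_labels))
            if gain > best_gain:
                best_client, best_gain = client, gain
        if best_client is None:  # no remaining client adds marginal value
            break
        selected.add(best_client)
        order.append(best_client)
    return order


if __name__ == "__main__":
    # Toy heterogeneous clients: each holds data from only a few classes.
    clients = {
        "c1": {0, 1, 2},
        "c2": {2, 3},
        "c3": {4, 5, 6, 7},
        "c4": {0, 7},
    }
    print(greedy_select(clients, budget=2))  # e.g. ['c3', 'c1']
```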

Keywords

Federated learning, Data source selection, Submodularity

Discipline

Databases and Information Systems | Software Engineering

Research Areas

Software and Cyber-Physical Systems

Publication

Database Systems for Advanced Applications: 27th International Conference, DASFAA 2022, Virtual Conference, April 11-14: Proceedings

Volume

13246

First Page

606

Last Page

614

ISBN

978-3-031-00125-3

Identifier

10.1007/978-3-031-00126-0_43

Publisher

Springer

City or Country

Cham

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.1007/978-3-031-00126-0_43
