FDFL: Fair and discrepancy-aware incentive mechanism for federated learning

Publication Type

Journal Article

Publication Date

7-2024

Abstract

Federated Learning (FL) is an emerging distributed machine learning paradigm for privacy-preserving learning. In FL, a fair incentive mechanism is indispensable for encouraging more clients to participate in training. However, achieving fairness is difficult, and two significant challenges remain unresolved in existing methodologies. First, existing works overlook category distribution heterogeneity in contribution evaluation, leading to incomplete contribution assessments. Second, existing work does not consider that malicious servers may dishonestly allocate rewards to save costs, which can deter clients from participating in FL. This paper introduces FDFL (Fair and Discrepancy-aware incentive mechanism for Federated Learning), a novel system addressing these concerns. FDFL comprises two key elements: 1) a discrepancy-aware contribution evaluation approach; 2) a provable reward allocation approach. Extensive experiments on four model-dataset combinations demonstrate that, under the heterogeneous setting, our scheme improves accuracy by an average of 9.85% and 11.97% over FedAvg and FAIR, respectively.

Keywords

Federated learning, Incentive mechanism, Contribution evaluation, Trusted execution environment

Discipline

Information Security

Research Areas

Cybersecurity

Publication

IEEE Transactions on Information Forensics and Security

ISSN

1556-6013

Identifier

10.1109/TIFS.2024.3433537

Publisher

Institute of Electrical and Electronics Engineers

Additional URL

https://doi.org/10.1109/TIFS.2024.3433537
