FIFL: A Fair Incentive Mechanism for Federated Learning
Liang Gao
National University of Defense
Technology
China
Li Li*
State Key Laboratory of IoTSC,
University of Macau
China
Yingwen Chen*
National University of Defense
Technology
China
Wenli Zheng*
Shanghai Jiaotong University
China
ChengZhong Xu
State Key Laboratory of IoTSC,
University of Macau
China
Ming Xu
National University of Defense
Technology
China
ABSTRACT
Federated learning is a novel machine learning framework that
enables multiple devices to collaboratively train high-performance
models while preserving data privacy. Federated learning is a form of crowdsourced computing, in which a task publisher shares profit with workers in return for their data and computing resources. Intuitively, devices have no incentive to participate in training without rewards that match their expended resources. In addition, guarding against malicious workers is essential, because they may upload meaningless updates to obtain undeserved rewards or to damage the global model. To solve these problems effectively, we propose FIFL, a fair incentive mechanism for federated learning. FIFL rewards workers fairly to attract reliable and efficient ones, while punishing and eliminating malicious ones based on a dynamic real-time worker assessment mechanism. We evaluate the effectiveness of FIFL through theoretical analysis and comprehensive experiments. The evaluation results show that FIFL fairly distributes rewards according to workers' behaviour and quality. FIFL increases the system revenue by 0.2% to 3.4% in reliable federations compared with the baselines. In unreliable scenarios containing attackers that degrade the model's performance, the system revenue of FIFL outperforms the baselines by more than 46.7%.
KEYWORDS
federated learning; incentive mechanism; attack detection
ACM Reference Format:
Liang Gao, Li Li*, Yingwen Chen*, Wenli Zheng*, ChengZhong Xu, and Ming
Xu. 2021. FIFL: A Fair Incentive Mechanism for Federated Learning. In 50th
International Conference on Parallel Processing (ICPP ’21), August 9–12, 2021,
Lemont, IL, USA. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3472456.3472469
1 INTRODUCTION
Traditionally, the training data are collected and stored in the cloud
to train various machine learning models. However, data privacy
[10, 15] has become increasingly important to users and governments. Gathering sensitive user data on a central server can cause serious privacy issues. To protect data privacy during training, federated learning (FL) [30] has gained popularity, as it allows workers (e.g., mobile devices) to train models collaboratively by uploading only model gradients instead of raw data. Thus, data privacy is preserved throughout the training process.
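As a minimal sketch of this protocol, the server-side step of one training round can be written as a simple aggregation of the uploaded gradients. Plain gradient averaging is assumed here purely for illustration; the function and variable names are not part of FIFL.

import numpy as np

def aggregate(worker_gradients):
    """Server-side step of one FL round: average the gradients uploaded by
    the workers; raw training data never leaves the devices."""
    return np.mean(worker_gradients, axis=0)

# Three workers upload gradients computed locally on their private data.
grads = [np.array([0.1, -0.2]), np.array([0.3, 0.0]), np.array([0.2, -0.1])]
global_update = aggregate(grads)  # elementwise mean of the three uploads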
In a typical commercialization scenario of FL, data owners receive rewards for sharing their data and computing resources. However, if workers' rewards do not match their utility and expenses, they will quit the current training or join other, higher-yield federations [31]. In addition, it is difficult to prevent attackers and free-riders [6, 20] without punishments. Existing research has already shown that attacks such as the sign-flipping attack [29] and the data-poisoning attack [22] greatly damage model performance. Free-riders and low-quality workers bring little revenue to the system yet receive disproportionately large rewards, which leads to a decline in the overall system revenue. Even so, previous incentive mechanisms [26, 32] assume all workers are honest and ignore attackers and unstable workers [12], even though these have a huge impact on system revenue and expenditure. Since attracting high-quality workers and guarding against potential attackers are both important for FL, a fair incentive mechanism that tolerates Byzantine attacks [28, 29] is a must.
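For concreteness, a sign-flipping attacker of the kind cited above can be sketched as follows; the boosting factor is an assumed example, and this is an illustration of the attack family, not code from the paper.

import numpy as np

def sign_flipping_attack(true_gradient, boost=3.0):
    """The attacker negates (and scales) its locally computed gradient before
    uploading, steering the aggregated global update away from the optimum."""
    return -boost * np.asarray(true_gradient)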
There are two challenges in designing FL incentive mechanisms. First, how can we accurately and efficiently identify workers' utilities without heavy computation overhead? Second, how can we ensure the fairness and reliability of the incentive mechanism under attacks and deception? Since workers' utilities determine their rewards, an efficient assessment method is essential for the incentive mechanism. However, workers' raw data is strictly protected in FL, so traditional assessment approaches that require direct access to the original data are impractical in the FL scenario. Even having workers run the assessment procedure themselves to avoid privacy leakage is unreliable, because the results may be tampered with [6]. Designing an efficient and reliable incentive mechanism for FL in unreliable scenarios is therefore challenging.
We propose FIFL, an incentive mechanism that characterizes the assessment results of workers based on two indicators: 1) contribution and 2) reputation. Contribution measures a worker's current utility to the system, and reputation is the probability that a worker produces helpful updates within one period. FIFL combines contribution and reputation to decide how much to reward honest workers (or punish attackers).
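A minimal sketch of this idea is given below; the function name, the multiplicative combination, and the fixed reward pool are assumptions made for illustration, not the paper's actual assessment formulas.

def settle_reward(contribution, reputation, reward_pool):
    """Scale a worker's share of the reward pool by its contribution, weighted
    by its reputation (the estimated probability of producing helpful updates).
    A negative contribution, e.g. from an attacker, turns the reward into a
    punishment."""
    return reputation * contribution * reward_pool

# A reliable worker earns a reward; a suspected attacker is punished.
print(f"{settle_reward(0.30, 0.95, 100.0):+.2f}")   # +28.50
print(f"{settle_reward(-0.20, 0.10, 100.0):+.2f}")  # -2.00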
Besides, FIFL adopts a blockchain-based audit