This is a benchmark for evaluating well-known federated learning (FL) and personalized federated learning (pFL) methods. This benchmark is uncomplicated and easy to extend.
📢 Note that FL-bench requires 3.10 <= python < 3.12. I suggest checking your Python version before installing packages with pip.
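If you want a quick programmatic check of that constraint, here is a small illustrative snippet (not part of FL-bench itself):

```python
# Verify that the running interpreter satisfies 3.10 <= Python < 3.12
import sys

assert (3, 10) <= sys.version_info[:2] < (3, 12), f"Unsupported Python: {sys.version}"
```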
With pip:

```sh
pip install -r requirements.txt
```

With conda:

```sh
conda env create -n fl-bench -f environment.yml
```

With poetry:

```sh
########## outside mainland China ##########
sed -i "26,30d" pyproject.toml
poetry lock --no-update
############################################
poetry install
```

With Docker:

```sh
# in mainland China
docker build -t fl-bench .

# outside mainland China
docker build \
  -t fl-bench \
  --build-arg IMAGE_SOURCE=karhou/ubuntu:basic \
  --build-arg CHINA_MAINLAND=false \
  .
```
- FedAvg -- Communication-Efficient Learning of Deep Networks from Decentralized Data (AISTATS'17)
- FedAvgM -- Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification (ArXiv'19)
- FedProx -- Federated Optimization in Heterogeneous Networks (MLSys'20)
- SCAFFOLD -- SCAFFOLD: Stochastic Controlled Averaging for Federated Learning (ICML'20)
- MOON -- Model-Contrastive Federated Learning (CVPR'21)
- FedDyn -- Federated Learning Based on Dynamic Regularization (ICLR'21)
- FedLC -- Federated Learning with Label Distribution Skew via Logits Calibration (ICML'22)
- FedGen -- Data-Free Knowledge Distillation for Heterogeneous Federated Learning (ICML'21)
- pFedSim (My Work⭐) -- pFedSim: Similarity-Aware Model Aggregation Towards Personalized Federated Learning (ArXiv'23)
- Local-Only -- Local training only (without communication).
- FedMD -- FedMD: Heterogenous Federated Learning via Model Distillation (NIPS'19)
- APFL -- Adaptive Personalized Federated Learning (ArXiv'20)
- LG-FedAvg -- Think Locally, Act Globally: Federated Learning with Local and Global Representations (ArXiv'20)
- FedBN -- FedBN: Federated Learning On Non-IID Features Via Local Batch Normalization (ICLR'21)
- FedPer -- Federated Learning with Personalization Layers (AISTATS'20)
- FedRep -- Exploiting Shared Representations for Personalized Federated Learning (ICML'21)
- Per-FedAvg -- Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach (NIPS'20)
- pFedMe -- Personalized Federated Learning with Moreau Envelopes (NIPS'20)
- Ditto -- Ditto: Fair and Robust Federated Learning Through Personalization (ICML'21)
- pFedHN -- Personalized Federated Learning using Hypernetworks (ICML'21)
- pFedLA -- Layer-Wised Model Aggregation for Personalized Federated Learning (CVPR'22)
- CFL -- Clustered Federated Learning: Model-Agnostic Distributed Multi-Task Optimization under Privacy Constraints (ArXiv'19)
- FedFomo -- Personalized Federated Learning with First Order Model Optimization (ICLR'21)
- FedBabu -- FedBabu: Towards Enhanced Representation for Federated Image Classification (ICLR'22)
- FedAP -- Personalized Federated Learning with Adaptive Batchnorm for Healthcare (IEEE'22)
- kNN-Per -- Personalized Federated Learning through Local Memorization (ICML'22)
- MetaFed -- MetaFed: Federated Learning among Federations with Cyclic Knowledge Distillation for Personalized Healthcare (IJCAI'22)
More reproductions/features will come sooner or later (depending on my mood 🤣).
All method classes inherit from `FedAvgServer` and `FedAvgClient`. If you want to understand the entire workflow and the details of the variable settings, check `./src/server/fedavg.py` and `./src/client/fedavg.py`.
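To give a rough feel for how a new method plugs in, here is a minimal sketch. The constructor arguments and the `run()` entry point shown below are illustrative assumptions, not the exact interface; see `src/server/fedavg.py` for the real one.

```python
# Hypothetical sketch of building a new method on top of FedAvgServer.
# Constructor/entry-point signatures are assumptions for illustration --
# check src/server/fedavg.py for the actual interface.
from fedavg import FedAvgServer, get_fedavg_argparser


class MyMethodServer(FedAvgServer):
    def __init__(self):
        args = get_fedavg_argparser().parse_args()
        super().__init__("MyMethod", args)
        # Override training/aggregation hooks here to implement your method.


if __name__ == "__main__":
    server = MyMethodServer()
    server.run()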
```sh
# partition CIFAR-10 according to Dir(0.1) for 100 clients
cd data
python generate_data.py -d cifar10 -a 0.1 -cn 100
cd ../

# run FedAvg with the default settings
cd src/server
python fedavg.py
```

For details on the methods for generating federated datasets, check `data/README.md`.
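For intuition about what the Dirichlet concentration `-a` controls, here is a standalone sketch of Dirichlet-based label partitioning. This is not FL-bench's actual code (see `data/generate_data.py` for that); it only illustrates the idea:

```python
# Standalone sketch of Dirichlet label partitioning: for each class, the
# share of its samples assigned to each client is drawn from Dir(alpha).
# Smaller alpha -> more skewed (more non-IID) label distributions.
import numpy as np

num_clients, num_classes, alpha = 100, 10, 0.1
rng = np.random.default_rng(0)

proportions = rng.dirichlet([alpha] * num_clients, size=num_classes)
print(proportions.shape)  # (num_classes, num_clients); each row sums to 1
```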
- Run `python -m visdom.server` in a terminal.
- Run `src/server/${algo}.py` with `--visible 1`.
- Open `localhost:8097` in your browser.
📢 All generic arguments have default values. Check `get_fedavg_argparser()` in `FL-bench/src/server/fedavg.py` for full details about the generic arguments.

For the default values and hyperparameters of advanced FL methods, check the corresponding `FL-bench/src/server/${algo}.py` for full details.
| Argument | Description |
| --- | --- |
| `--dataset` | The name of the dataset the experiment runs on. |
| `--model` | The model backbone used in the experiment. |
| `--seed` | Random seed for running the experiment. |
| `--join_ratio` | Ratio of (clients joining each round) / (total number of clients). |
| `--global_epoch` | Number of global epochs, also called communication rounds. |
| `--local_epoch` | Number of local epochs for client local training. |
| `--finetune_epoch` | Number of epochs for clients to fine-tune their models before testing. |
| `--test_gap` | Interval (in rounds) between tests on clients. |
| `--eval_test` | Non-zero value for evaluating on joined clients' test sets before and after local training. |
| `--eval_train` | Non-zero value for evaluating on joined clients' training sets before and after local training. |
| `--local_lr` | Learning rate for client local training. |
| `--momentum` | Momentum for the client local optimizer. |
| `--weight_decay` | Weight decay for the client local optimizer. |
| `--verbose_gap` | Interval (in rounds) between displays of clients' training performance in the terminal. |
| `--batch_size` | Data batch size for client local training. |
| `--use_cuda` | Non-zero value indicates that tensors are kept on the GPU. |
| `--visible` | Non-zero value for using Visdom to monitor algorithm performance at `localhost:8097`. |
| `--global_testset` | Non-zero value for evaluating client models on the global test set (the union of all clients' test sets) before and after local training, instead of on each client's own test set. NOTE: activating this setting will considerably slow down the entire training process, especially when the dataset is large. |
| `--save_log` | Non-zero value for saving the algorithm's running log to `FL-bench/out/${algo}`. |
| `--straggler_ratio` | The ratio of stragglers (in `[0, 1]`). Stragglers do not perform full-epoch local training like normal clients; their local epoch count is randomly selected from the range `[--straggler_min_local_epoch, --local_epoch)`. |
| `--straggler_min_local_epoch` | The minimum number of local epochs for stragglers. |
| `--save_model` | Non-zero value for saving output model parameters to `FL-bench/out/${algo}`. |
| `--save_fig` | Non-zero value for saving the accuracy curves shown in Visdom as a `.jpeg` file in `FL-bench/out/${algo}`. |
| `--save_metrics` | Non-zero value for saving metric statistics to a `.csv` file in `FL-bench/out/${algo}`. |
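For example, a run that overrides several generic arguments might look like this (an illustrative combination for demonstration only; all flags come from the table above):

```sh
cd src/server
python fedavg.py --dataset cifar10 --global_epoch 100 --local_epoch 5 \
    --join_ratio 0.1 --local_lr 0.01 --save_log 1 --save_metrics 1
```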
This benchmark only supports image classification tasks for now.
Regular Image Datasets
- MNIST (1 x 28 x 28, 10 classes)
- CIFAR-10/100 (3 x 32 x 32, 10/100 classes)
- EMNIST (1 x 28 x 28, 62 classes)
- FashionMNIST (1 x 28 x 28, 10 classes)
- FEMNIST (1 x 28 x 28, 62 classes)
- CelebA (3 x 218 x 178, 2 classes)
- SVHN (3 x 32 x 32, 10 classes)
- USPS (1 x 16 x 16, 10 classes)
- Tiny-ImageNet-200 (3 x 64 x 64, 200 classes)
- CINIC-10 (3 x 32 x 32, 10 classes)
- DomainNet (3 x ? x ?, 345 classes)
  - Go check `data/README.md` for the full process guideline 🧾.
Medical Image Datasets
- COVID-19 (3 x 244 x 224, 4 classes)
- Organ-S/A/CMNIST (1 x 28 x 28, 11 classes)
Does this benchmark lack a feature or method you're interested in? Describe it here. I can't guarantee your request will be fulfilled in time, or even considered, but feel free to ask! 💡
Some reproductions in this benchmark refer to https://github.com/TsingZ0/PFL-Non-IID, which is a great FL benchmark. 👍
This benchmark is still young, which means I will update it frequently and unpredictably. Therefore, periodically fetching the latest code is recommended. 🤖
If this benchmark is helpful to your research, it's my pleasure. 😏