# Federated-Learning (PyTorch)
Implementation of the vanilla federated learning paper: [Communication-Efficient Learning of Deep Networks from Decentralized Data](https://arxiv.org/abs/1602.05629).
Experiments are run on MNIST, Fashion MNIST and CIFAR10 (both IID and non-IID). In the non-IID case, the data can be split amongst users equally or unequally.
Since the purpose of these experiments is to illustrate the effectiveness of the federated learning paradigm, only simple models such as an MLP and a CNN are used.
## Requirements
Install all the packages from requirments.txt (for example with pip, as shown below):
* Python 3
* PyTorch
* torchvision
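Assuming pip is available:
```
pip install -r requirments.txt
```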
## Data
* Download the train and test datasets manually, or they will be downloaded automatically from torchvision datasets.
* Experiments are run on MNIST, Fashion MNIST and CIFAR10.
* To use your own dataset: move it to the data directory and write a wrapper over the PyTorch Dataset class, as sketched below.
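A minimal sketch of such a wrapper, assuming an in-memory collection of samples with integer labels (the class and attribute names here are illustrative, not part of this repository):
```
from torch.utils.data import Dataset

# Illustrative wrapper over the PyTorch Dataset class; names are hypothetical.
class MyDataset(Dataset):
    def __init__(self, samples, labels, transform=None):
        self.samples = samples      # e.g. a tensor or list of images
        self.labels = labels        # matching integer class labels
        self.transform = transform  # optional torchvision transform

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        sample, label = self.samples[idx], self.labels[idx]
        if self.transform is not None:
            sample = self.transform(sample)
        return sample, label
```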
## Running the experiments
The baseline experiment trains the model in the conventional way.
* To run the baseline experiment with MNIST on MLP using CPU:
```
python src/baseline_main.py --model=mlp --dataset=mnist --epochs=10
```
* Or to run it on a GPU (e.g., if gpu:0 is available):
```
python src/baseline_main.py --model=mlp --dataset=mnist --gpu=0 --epochs=10
```
-----
The federated experiment trains a global model by aggregating the updates of many local models (a sketch of the aggregation step follows the commands below).
* To run the federated experiment with CIFAR on CNN (IID):
```
python src/federated_main.py --model=cnn --dataset=cifar --gpu=0 --iid=1 --epochs=10
```
* To run the same experiment under non-IID condition:
```
python src/federated_main.py --model=cnn --dataset=cifar --gpu=0 --iid=0 --epochs=10
```
You can change the default values of other parameters to simulate different conditions. Refer to the Options section below.
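Conceptually, each communication round selects a fraction of users, trains them locally, and averages their model weights (FedAvg, Algorithm 1 in the paper). A minimal sketch of the aggregation step, assuming equal-sized local datasets (the helper name is illustrative; the repository's own code may differ):
```
import copy
import torch

def average_weights(local_weights):
    """Element-wise mean of a list of model state_dicts (the FedAvg step)."""
    avg = copy.deepcopy(local_weights[0])
    for key in avg.keys():
        for w in local_weights[1:]:
            avg[key] += w[key]
        avg[key] = torch.div(avg[key], len(local_weights))
    return avg
```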
## Options
The default values for the various parameters passed to the experiments are given in ```options.py```. Details of some of those parameters:
* ```--dataset:``` Default: 'mnist'. Options: 'mnist', 'fmnist', 'cifar'
* ```--model:``` Default: 'mlp'. Options: 'mlp', 'cnn'
* ```--gpu:``` Default: None (runs on CPU). Set to a specific GPU id to run on that GPU.
* ```--epochs:``` Number of rounds of training.
* ```--lr:``` Learning rate set to 0.01 by default.
* ```--verbose:``` Detailed log outputs. Activated by default, set to 0 to deactivate.
* ```--seed:``` Random Seed. Default set to 1.
#### Federated Parameters
* ```--iid:``` Distribution of data amongst users. Default set to IID. Set to 0 for non-IID.
* ```--num_users:``` Number of users. Default is 100.
* ```--frac:``` Fraction of users to be used for federated updates. Default is 0.1.
* ```--local_ep:``` Number of local training epochs on each user. Default is 10.
* ```--local_bs:``` Batch size of local updates on each user. Default is 10.
* ```--unequal:``` Used in the non-IID setting. Option to split the data amongst users equally or unequally. Default set to 0 for equal splits. Set to 1 for unequal splits.
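A sketch of how these defaults might be declared in ```options.py```, assuming argparse (only the flags listed above are shown; the ```--epochs``` default is inferred from the example commands, and the real file may differ):
```
import argparse

def args_parser():
    parser = argparse.ArgumentParser()
    # general parameters
    parser.add_argument('--dataset', type=str, default='mnist',
                        help="'mnist', 'fmnist' or 'cifar'")
    parser.add_argument('--model', type=str, default='mlp',
                        help="'mlp' or 'cnn'")
    parser.add_argument('--gpu', default=None,
                        help='GPU id; None runs on CPU')
    parser.add_argument('--epochs', type=int, default=10,
                        help='number of rounds of training')
    parser.add_argument('--lr', type=float, default=0.01,
                        help='learning rate')
    parser.add_argument('--verbose', type=int, default=1,
                        help='set to 0 to deactivate detailed logs')
    parser.add_argument('--seed', type=int, default=1,
                        help='random seed')
    # federated parameters
    parser.add_argument('--iid', type=int, default=1,
                        help='1 for IID, 0 for non-IID')
    parser.add_argument('--num_users', type=int, default=100,
                        help='number of users (K)')
    parser.add_argument('--frac', type=float, default=0.1,
                        help='fraction of users per round (C)')
    parser.add_argument('--local_ep', type=int, default=10,
                        help='local epochs per round (E)')
    parser.add_argument('--local_bs', type=int, default=10,
                        help='local batch size (B)')
    parser.add_argument('--unequal', type=int, default=0,
                        help='1 for unequal non-IID splits')
    return parser.parse_args()
```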
## Results on MNIST
#### Baseline Experiment:
The experiment involves training a single model in the conventional way.
Parameters: <br />
* ```Optimizer:``` SGD
* ```Learning Rate:``` 0.01
```Table 1:``` Test accuracy after training for 10 epochs:
| Model | Test Acc |
| ----- | ----- |
| MLP | 92.71% |
| CNN | 98.42% |
----
#### Federated Experiment:
The experiment involves training a global model in the federated setting.
Federated parameters (default values):
* ```Fraction of users (C)```: 0.1
* ```Local Batch size (B)```: 10
* ```Local Epochs (E)```: 10
* ```Optimizer ```: SGD
* ```Learning Rate ```: 0.01 <br />
```Table 2:``` Test accuracy after training for 10 global epochs with the above parameters:
| Model | IID | Non-IID (equal)|
| ----- | ----- |---- |
| MLP | 88.38% | 73.49% |
| CNN | 97.28% | 75.94% |
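The Non-IID (equal) column corresponds to the paper's pathological shard-based split: examples are sorted by label and dealt out in shards, so each user holds only a few classes. A sketch of that partition for MNIST, with shard counts taken from the paper (the repository's ```sampling.py``` may differ in details):
```
import numpy as np

def mnist_noniid(dataset, num_users=100, num_shards=200, shard_size=300):
    """Sort by label, slice into shards, and deal 2 shards to each user."""
    labels = np.array(dataset.targets)
    idxs = np.argsort(labels)              # indices ordered by digit label
    shards = np.arange(num_shards)
    np.random.shuffle(shards)
    per_user = num_shards // num_users     # 2 shards per user here
    dict_users = {}
    for i in range(num_users):
        own = shards[i * per_user:(i + 1) * per_user]
        dict_users[i] = np.concatenate(
            [idxs[s * shard_size:(s + 1) * shard_size] for s in own])
    return dict_users
```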
## Further Readings
### Papers:
* [Federated Learning: Challenges, Methods, and Future Directions](https://arxiv.org/abs/1908.07873)
* [Communication-Efficient Learning of Deep Networks from Decentralized Data](https://arxiv.org/abs/1602.05629)
* [Deep Learning with Differential Privacy](https://arxiv.org/abs/1607.00133)
### Blog Posts:
* [CMU MLD Blog Post: Federated Learning: Challenges, Methods, and Future Directions](https://blog.ml.cmu.edu/2019/11/12/federated-learning-challenges-methods-and-future-directions/)
* [Leaf: A Benchmark for Federated Settings (CMU)](https://leaf.cmu.edu/)
* [TensorFlow Federated](https://www.tensorflow.org/federated)
* [Google AI Blog Post](https://ai.googleblog.com/2017/04/federated-learning-collaborative.html)