# Continual Learning
This is a PyTorch implementation of the continual learning experiments described in the following papers:
* Three scenarios for continual learning ([link](https://arxiv.org/abs/1904.07734))
* Generative replay with feedback connections as a general strategy
for continual learning ([link](https://arxiv.org/abs/1809.10635))
## Requirements
The current version of the code has been tested with:
* `pytorch 1.1.0`
* `torchvision 0.2.2`
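For reference, one way to install these tested versions is with pip (the `torch`/`torchvision` package names and exact pins shown here are an assumption; adjust for your platform or follow the official PyTorch installation instructions):
```bash
# Install the versions the code was tested with (assumed pip package names; CUDA builds may differ)
pip install torch==1.1.0 torchvision==0.2.2
```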
## Running the experiments
Individual experiments can be run with `main.py`. Main options are:
- `--experiment`: which task protocol? (`splitMNIST`|`permMNIST`)
- `--scenario`: according to which scenario? (`task`|`domain`|`class`)
- `--tasks`: how many tasks?
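For example, a run of the split MNIST protocol under the class-incremental scenario could look as follows (a minimal sketch using only the options listed above; the number of tasks is chosen for illustration, not prescribed):
```bash
# Split MNIST, class-incremental scenario, 5 tasks (task count chosen for illustration)
./main.py --experiment=splitMNIST --scenario=class --tasks=5
```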
To run specific methods, use the following:
- Context-dependent-Gating (XdG): `./main.py --xdg=0.8`
- Elastic Weight Consolidation (EWC): `./main.py --ewc --lambda=5000`
- Online EWC: `./main.py --ewc --online --lambda=5000 --gamma=1`
- Synaptic Intelligence (SI): `./main.py --si --c=0.1`
- Learning without Forgetting (LwF): `./main.py --replay=current --distill`
- Generative Replay (GR): `./main.py --replay=generative`
- GR with distillation: `./main.py --replay=generative --distill`
- Replay-through-Feedback (RtF): `./main.py --replay=generative --distill --feedback`
- Experience Replay (ER): `./main.py --replay=exemplars --budget=2000`
- Averaged Gradient Episodic Memory (A-GEM): `./main.py --replay=exemplars --agem --budget=2000`
- iCaRL: `./main.py --icarl --budget=2000`
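The method-specific flags above can be combined with the task-protocol options, for instance (a sketch; the protocol, scenario, task count and hyper-parameter values are simply the illustrative ones listed above):
```bash
# Online EWC on permuted MNIST under the domain-incremental scenario (10 tasks chosen for illustration)
./main.py --experiment=permMNIST --scenario=domain --tasks=10 --ewc --online --lambda=5000 --gamma=1
```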
For information on further options: `./main.py -h`.
The code in this repository only supports MNIST-based experiments. An extension to more challenging problems (e.g., with
natural images as inputs) can be found here: <https://github.com/GMvandeVen/brain-inspired-replay>.
## Running comparisons from the papers
#### "Three CL scenarios"-paper
[This paper](https://arxiv.org/abs/1904.07734) describes three scenarios for continual learning (Task-IL, Domain-IL &
Class-IL) and provides an extensive comparison of recently proposed continual learning methods. It uses the permuted and
split MNIST task protocols, with both performed according to all three scenarios.
A comparison of all methods included in this paper can be run with `compare_all.py` (this script includes extra
methods and reports additional metrics compared to the paper). The comparison in Appendix B can be run with
`compare_taskID.py`, and Figure C.1 can be recreated with `compare_replay.py`.
#### "Replay-through-Feedback"-paper
The three continual learning scenarios were actually first identified in [this paper](https://arxiv.org/abs/1809.10635),
which then introduces the Replay-through-Feedback framework as a more efficient implementation of generative
replay.
A comparison of all methods included in this paper can be run with
`compare_time.py`. This includes a comparison of the time these methods take to train (Figures 4 and 5).
Note that the results reported in this paper were obtained with
[this earlier version](https://github.com/GMvandeVen/continual-learning/tree/9c0ca78f43c29594b376ca59516031fcdaa5d7ba)
of the code.
## On-the-fly plots during training
With this code it is possible to track progress during training with on-the-fly plots. This feature requires `visdom`,
which can be installed as follows:
```bash
pip install visdom
```
Before running the experiments, the visdom server should be started from the command line:
```bash
python -m visdom.server
```
The visdom server is now alive and can be accessed at `http://localhost:8097` in your browser (the plots will appear
there). The flag `--visdom` should then be added when calling `./main.py` to run the experiments with on-the-fly plots.
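For example, with the visdom server running, an experiment with on-the-fly plots could be launched as follows (the protocol and scenario are chosen here purely for illustration):
```bash
# Run split MNIST (task-incremental) with live plots at http://localhost:8097
./main.py --experiment=splitMNIST --scenario=task --visdom
```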
For more information on `visdom` see <https://github.com/facebookresearch/visdom>.
### Citation
Please consider citing our papers if you use this code in your research:
```
@article{vandeven2019three,
title={Three scenarios for continual learning},
author={van de Ven, Gido M and Tolias, Andreas S},
journal={arXiv preprint arXiv:1904.07734},
year={2019}
}
@article{vandeven2018generative,
title={Generative replay with feedback connections as a general strategy for continual learning},
author={van de Ven, Gido M and Tolias, Andreas S},
journal={arXiv preprint arXiv:1809.10635},
year={2018}
}
```
### Acknowledgments
The research projects from which this code originated have been supported by an IBRO-ISN Research Fellowship, by the
Lifelong Learning Machines (L2M) program of the Defense Advanced Research Projects Agency (DARPA) via contract number
HR0011-18-2-0025 and by the Intelligence Advanced Research Projects Activity (IARPA) via Department of the
Interior/Interior Business Center (DoI/IBC) contract number D16PC00003. Disclaimer: views and conclusions
contained herein are those of the authors and should not be interpreted as necessarily representing the official
policies or endorsements, either expressed or implied, of DARPA, IARPA, DoI/IBC, or the U.S. Government.