This is the source code and the camera-ready version (in PDF) of our IJCNN 2023 paper "Deep Reinforcement Learning based Multi-task Automated Channel Pruning for DNNs".
> Model compression is a key technique that enables deploying Deep Neural Networks (DNNs) on Internet-of-Things (IoT) devices with constrained computing resources and limited power budgets. Channel pruning has become one of the representative compression approaches, but how to determine the compression ratio for different layers of a model remains a challenging task. Current automated pruning solutions address this issue by searching for an optimal strategy according to the target compression ratio. Nevertheless, when given a series of tasks with multiple compression ratios and different training datasets, these approaches have to carry out the pruning process repeatedly, which is inefficient and time-consuming. In this paper, we propose a Multi-Task Automated Channel Pruning (MTACP) framework, which can simultaneously generate a number of feasible compressed models satisfying different task demands for a target DNN model. To learn MTACP, the layer-by-layer multi-task channel pruning process is transformed into a Markov Decision Process (MDP), which seeks to solve a series of decision-making problems. Based on this MDP, we propose an actor-critic-based multi-task Reinforcement Learning (RL) algorithm to learn the optimal policy, built on the Importance Weighted Actor-Learner Architecture (IMPALA). IMPALA is a distributed RL architecture in which a learner learns from a set of actors that continuously generate trajectories of experience in their own environments. Extensive experiments on the CIFAR10/100 and FLOWER102 datasets demonstrate MTACP's unique capability for multi-task settings, as well as its superior performance over state-of-the-art solutions.
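The layer-by-layer pruning process described in the abstract can be pictured as a small MDP. The sketch below is a hypothetical simplification for illustration only (the paper's actual state features, action space, and reward are defined in `lib/` and `env/` of this repo): the state tracks which layer is being pruned and the FLOPs kept so far, the action is a per-layer pruning ratio, and a terminal reward scores how close the episode came to the target compression ratio.

```python
import numpy as np

class PruningMDPSketch:
    """Toy MDP for layer-by-layer channel pruning (illustrative only).

    State  : (index of the current layer, fraction of total FLOPs kept so far).
    Action : pruning ratio in [0, 1) applied to the current layer.
    Reward : zero until the episode ends; the terminal reward penalizes
             the distance between the achieved and target compression ratios.
    """

    def __init__(self, layer_flops, target_ratio):
        self.layer_flops = np.asarray(layer_flops, dtype=float)
        self.total_flops = self.layer_flops.sum()
        self.target_ratio = target_ratio
        self.reset()

    def reset(self):
        self.layer = 0
        self.kept_flops = 0.0
        return self._state()

    def _state(self):
        return (self.layer, self.kept_flops / self.total_flops)

    def step(self, prune_ratio):
        # Keep (1 - prune_ratio) of the current layer's FLOPs.
        self.kept_flops += (1.0 - prune_ratio) * self.layer_flops[self.layer]
        self.layer += 1
        done = self.layer == len(self.layer_flops)
        reward = 0.0
        if done:
            achieved = 1.0 - self.kept_flops / self.total_flops
            # Placeholder terminal reward; the real reward also involves
            # the pruned model's validation accuracy.
            reward = -abs(achieved - self.target_ratio)
        return self._state(), reward, done
```

Stepping through such an environment once per layer yields one trajectory, which is exactly the unit of experience that IMPALA's actors would generate in parallel for the learner.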
You can find the paper in this repo ([link](https://github.com/fangvv/MTACP/blob/main/MTACP%20Paper%20Camera%20Ready.pdf)) or on [IEEE Xplore](https://doi.org/10.1109/IJCNN54540.2023.10191092).
## Citation
```bibtex
@inproceedings{ma2023deep,
  title={Deep Reinforcement Learning Based Multi-Task Automated Channel Pruning for DNNs},
  author={Ma, Xiaodong and Fang, Weiwei},
  booktitle={2023 International Joint Conference on Neural Networks (IJCNN)},
  pages={1--9},
  year={2023},
  organization={IEEE}
}
```
# autoPrune
Automatic model channel pruning using distributed reinforcement learning.
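As a concrete illustration of what channel pruning does (this is a generic L1-norm criterion for demonstration, not this repository's exact implementation), a convolution layer's output channels can be ranked by importance and the weakest fraction removed:

```python
import numpy as np

def prune_channels_l1(weight, prune_ratio):
    """Drop the output channels with the smallest L1 norms.

    weight      : conv weight of shape (out_channels, in_channels, kH, kW)
    prune_ratio : fraction of output channels to remove
    Returns the pruned weight and the indices of the kept channels.
    """
    out_channels = weight.shape[0]
    n_keep = max(1, int(round(out_channels * (1.0 - prune_ratio))))
    # L1 importance score per output channel.
    scores = np.abs(weight).reshape(out_channels, -1).sum(axis=1)
    # Keep the n_keep highest-scoring channels, preserving their order.
    keep = np.sort(np.argsort(scores)[-n_keep:])
    return weight[keep], keep
```

In MTACP the per-layer `prune_ratio` is not fixed by hand; it is the action chosen by the RL agent for each layer in turn.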
```sh
git clone https://gitee.com/rjgcmxd/multi-task-automated-channel-pruning-mtacp.git
```
```sh
cd multi-task-automated-channel-pruning-mtacp
```
#### Create a new environment from conda_requirements.txt
```sh
conda create --name <your env name> --file conda_requirements.txt
```
```sh
conda activate <your env name>
```
#### Install packages in the new environment
```sh
pip install -r pip_requirement.txt
```
## Repository structure (65 files)
```
MTACP-main/
  utils.py
  lib/
    utils.py
    agent.py
    memory.py
    data.py
    thop/
      utils.py
      __init__.py
      vision/
        __init__.py
        efficientnet.py
      basic_hooks.py
      rnn_hooks.py
      profile.py
    test.py
    net_measure.py
    requirements_atari.txt
  eval_mobilenet.py
  cacp_search.py
  impala_auto_pruning.py
  workspace.code-workspace
  env/
    rewards.py
    channel_pruning_env_vgg16.py
    channel_pruning_env_mobilenet.py
    rewards_mxd.py
  conda_requirements.txt
  atari_wrappers.py
  core/
    file_writer.py
    environment.py
    vtrace.py
    prof.py
  clean_log.sh
  amc_fine_tune.py
  models/
    vgg_cifar.py
    mobilenet_v2.py
    resnet.py
    mobilenet.py
  pip_requirement.txt
  MTACP Paper Camera Ready.pdf
  .gitignore
  amc_search.py
  README.md
  config/
    auto_prune_impala_cifar100_people.json
    auto_prune_impala_flower5.json
    auto_prune_impala_cifar100.json
    auto_prune_impala_cub200.json
    auto_prune_impala_MNIST.json
    auto_prune_impala_flower123.json
    auto_prune_impala_caltech101.json
    auto_prune_impala_flower.json
    auto_prune_impala_cifar100_household_furniture.json
    auto_prune_impala_cifar100_fish.json
    auto_prune_impala_cifar100_insects.json
    auto_prune_impala_fashionMNIST.json
    auto_prune_impala_flower2.json
    auto_prune_impala_cifar100_food_containers.json
    auto_prune_impala_flower1.json
    auto_prune_impala_cifar100_trees.json
    auto_prune_impala_cifar100_fruit_and_vegetables.json
    auto_prune_impala_cifar10.json
    auto_prune_impala_cifar100_flowers.json
    auto_prune_impala_flower3.json
    auto_prune_impala_cifar100_reptiles.json
    auto_prune_impala_cifar100_large_carnivores.json
  scripts/
    export_mobilenet_0.5flops.sh
    search_mobilenet_0.5flops.sh
    finetune_mobilenet_0.5flops.sh
```